From patchwork Fri Oct 2 14:41:59 2020
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 267703
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
 shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com,
 dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com,
 echaudro@redhat.com
Subject: [PATCH v4 bpf-next 01/13] xdp: introduce mb in xdp_buff/xdp_frame
Date: Fri, 2 Oct 2020 16:41:59 +0200
Message-Id: <766be172da94e75c9ab963c29b110545aeb4bfa6.1601648734.git.lorenzo@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

Introduce a multi-buffer bit (mb) in the xdp_frame/xdp_buff data structures
in order to specify whether this is a linear buffer (mb = 0) or a multi-buffer
frame (mb = 1). In the latter case the shared_info area at the end of the
first buffer has been properly initialized to link together subsequent
buffers.
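To make the new semantics concrete, here is a minimal sketch (not part of the
patch) of how a driver RX path could attach an extra page to an xdp_buff and
flag it as multi-buffer; the function name drv_xdp_add_frag() is made up for
illustration, while the mb bit and xdp_get_shared_info_from_buff() are the
ones used throughout this series:

#include <linux/skbuff.h>
#include <net/xdp.h>

/* Hypothetical driver helper: append one page fragment to an xdp_buff and
 * mark the buffer as multi-buffer (mb = 1). Mirrors the pattern used by the
 * mvneta patches later in the series.
 */
static void drv_xdp_add_frag(struct xdp_buff *xdp, struct page *page,
			     unsigned int offset, unsigned int size)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	skb_frag_t *frag;

	if (!xdp->mb) {
		/* The shared_info area at the end of the first buffer is
		 * only meaningful once mb is set, so initialize it lazily.
		 */
		sinfo->nr_frags = 0;
		xdp->mb = 1;
	}

	if (sinfo->nr_frags >= MAX_SKB_FRAGS)
		return;		/* drop handling omitted in this sketch */

	frag = &sinfo->frags[sinfo->nr_frags++];
	__skb_frag_set_page(frag, page);
	skb_frag_off_set(frag, offset);
	skb_frag_size_set(frag, size);
}

A sketch like this would be called once per additional RX descriptor of the
same packet, after the first buffer has been set up with mb = 0.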
Signed-off-by: Lorenzo Bianconi --- include/net/xdp.h | 8 ++++++-- net/core/xdp.c | 1 + 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/include/net/xdp.h b/include/net/xdp.h index 3814fb631d52..42f439f9fcda 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -72,7 +72,8 @@ struct xdp_buff { void *data_hard_start; struct xdp_rxq_info *rxq; struct xdp_txq_info *txq; - u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/ + u32 frame_sz:31; /* frame size to deduce data_hard_end/reserved tailroom*/ + u32 mb:1; /* xdp non-linear buffer */ }; /* Reserve memory area at end-of data area. @@ -96,7 +97,8 @@ struct xdp_frame { u16 len; u16 headroom; u32 metasize:8; - u32 frame_sz:24; + u32 frame_sz:23; + u32 mb:1; /* xdp non-linear frame */ /* Lifetime of xdp_rxq_info is limited to NAPI/enqueue time, * while mem info is valid on remote CPU. */ @@ -141,6 +143,7 @@ void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp) xdp->data_end = frame->data + frame->len; xdp->data_meta = frame->data - frame->metasize; xdp->frame_sz = frame->frame_sz; + xdp->mb = frame->mb; } static inline @@ -167,6 +170,7 @@ int xdp_update_frame_from_buff(struct xdp_buff *xdp, xdp_frame->headroom = headroom - sizeof(*xdp_frame); xdp_frame->metasize = metasize; xdp_frame->frame_sz = xdp->frame_sz; + xdp_frame->mb = xdp->mb; return 0; } diff --git a/net/core/xdp.c b/net/core/xdp.c index 48aba933a5a8..884f140fc3be 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -454,6 +454,7 @@ struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp) xdpf->headroom = 0; xdpf->metasize = metasize; xdpf->frame_sz = PAGE_SIZE; + xdpf->mb = xdp->mb; xdpf->mem.type = MEM_TYPE_PAGE_ORDER0; xsk_buff_free(xdp); From patchwork Fri Oct 2 14:42:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 289098 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9BA4C47423 for ; Fri, 2 Oct 2020 14:42:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 87850207DE for ; Fri, 2 Oct 2020 14:42:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649753; bh=LxNPYsGGK2VbzekPFKAjqYOFrt5K/1kDpfiN9h5n05I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=o9e+F5xM2U+fg9KlcNnrajwmTpAKrTg2GL7Nfc3MLlsiHDZPxdeKsNvTkbOE52WQH 8Us+bqXCsk3MugHJ8Jz4aAfB3XMF63BKXfDxbJP4wiwivn8UkiTrLQ0w/WAAm+PT+q 0ueqlBjCKI4U9lyJMh0UfT3Ci7fiNRN2GS9E2scg= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388174AbgJBOmc (ORCPT ); Fri, 2 Oct 2020 10:42:32 -0400 Received: from mail.kernel.org ([198.145.29.99]:60750 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726017AbgJBOmc (ORCPT ); Fri, 2 Oct 2020 10:42:32 -0400 Received: from lore-desk.redhat.com (unknown [176.207.245.61]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by 
mail.kernel.org (Postfix) with ESMTPSA id 5178C20719; Fri, 2 Oct 2020 14:42:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649750; bh=LxNPYsGGK2VbzekPFKAjqYOFrt5K/1kDpfiN9h5n05I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=zDOdnIvFlldyEYf1C2uiHfd7/4zY3EBJSr7D4M2p+jBKwjK6cl+FJ0NWiDiuCeHLq 9Jj8hjwCSMc+LB3fLf4vLiejaxl9oCNBJ68Upmm197CkNIbwCZfjCMKRRuz8PRXTqF bLN0uhcQ0QwN5eeYCc0TDLDqx65E8gAzQ8+pzR0M= From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com, echaudro@redhat.com Subject: [PATCH v4 bpf-next 02/13] xdp: initialize xdp_buff mb bit to 0 in all XDP drivers Date: Fri, 2 Oct 2020 16:42:00 +0200 Message-Id: <31f641800f63ad8c221785c54295112f01935106.1601648734.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Initialize multi-buffer bit (mb) to 0 in all XDP-capable drivers. This is a preliminary patch to enable xdp multi-buffer support. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/amazon/ena/ena_netdev.c | 1 + drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 1 + drivers/net/ethernet/cavium/thunder/nicvf_main.c | 1 + drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c | 1 + drivers/net/ethernet/intel/i40e/i40e_txrx.c | 1 + drivers/net/ethernet/intel/ice/ice_txrx.c | 1 + drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 1 + drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 1 + drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 1 + drivers/net/ethernet/mellanox/mlx4/en_rx.c | 1 + drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 1 + drivers/net/ethernet/netronome/nfp/nfp_net_common.c | 1 + drivers/net/ethernet/qlogic/qede/qede_fp.c | 1 + drivers/net/ethernet/sfc/rx.c | 1 + drivers/net/ethernet/socionext/netsec.c | 1 + drivers/net/ethernet/ti/cpsw.c | 1 + drivers/net/ethernet/ti/cpsw_new.c | 1 + drivers/net/hyperv/netvsc_bpf.c | 1 + drivers/net/tun.c | 2 ++ drivers/net/veth.c | 1 + drivers/net/virtio_net.c | 2 ++ drivers/net/xen-netfront.c | 1 + net/core/dev.c | 1 + 23 files changed, 25 insertions(+) diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index e8131dadc22c..339319b97853 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -1595,6 +1595,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi, res_budget = budget; xdp.rxq = &rx_ring->xdp_rxq; xdp.frame_sz = ENA_PAGE_SIZE; + xdp.mb = 0; do { xdp_verdict = XDP_PASS; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index fcc262064766..344644b6dd4d 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -139,6 +139,7 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, xdp.data_end = *data_ptr + *len; xdp.rxq = &rxr->xdp_rxq; xdp.frame_sz = PAGE_SIZE; /* BNXT_RX_PAGE_MODE(bp) when XDP enabled */ + xdp.mb = 0; orig_data = xdp.data; rcu_read_lock(); diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c index 0a94c396173b..7fdabaabab1b 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c 
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c @@ -553,6 +553,7 @@ static inline bool nicvf_xdp_rx(struct nicvf *nic, struct bpf_prog *prog, xdp.data_end = xdp.data + len; xdp.rxq = &rq->xdp_rxq; xdp.frame_sz = RCV_FRAG_LEN + XDP_PACKET_HEADROOM; + xdp.mb = 0; orig_data = xdp.data; rcu_read_lock(); diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c index fe4caf7aad7c..8410e713162e 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c @@ -366,6 +366,7 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv, xdp.frame_sz = DPAA2_ETH_RX_BUF_RAW_SIZE - (dpaa2_fd_get_offset(fd) - XDP_PACKET_HEADROOM); + xdp.mb = 0; xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp); diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c index d43ce13a93c9..5df07bc98283 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c @@ -2332,6 +2332,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget) xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, 0); #endif xdp.rxq = &rx_ring->xdp_rxq; + xdp.mb = 0; while (likely(total_rx_packets < (unsigned int)budget)) { struct i40e_rx_buffer *rx_buffer; diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index eae75260fe20..d641f513b8d9 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -1089,6 +1089,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget) #if (PAGE_SIZE < 8192) xdp.frame_sz = ice_rx_frame_truesize(rx_ring, 0); #endif + xdp.mb = 0; /* start the loop to process Rx packets bounded by 'budget' */ while (likely(total_rx_pkts < (unsigned int)budget)) { diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index a190d5c616fc..39f9d2032b9d 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -2298,6 +2298,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector, #if (PAGE_SIZE < 8192) xdp.frame_sz = ixgbe_rx_frame_truesize(rx_ring, 0); #endif + xdp.mb = 0; while (likely(total_rx_packets < budget)) { union ixgbe_adv_rx_desc *rx_desc; diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index 82fce27f682b..1fbc740c266e 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -1129,6 +1129,7 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector, struct xdp_buff xdp; xdp.rxq = &rx_ring->xdp_rxq; + xdp.mb = 0; /* Frame size depend on rx_ring setup when PAGE_SIZE=4K */ #if (PAGE_SIZE < 8192) diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index f6616c8933ca..01661ade9009 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -3558,6 +3558,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi, xdp.data = data + MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM; xdp.data_end = xdp.data + rx_bytes; xdp.frame_sz = PAGE_SIZE; + xdp.mb = 0; if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE) xdp.rxq = &rxq->xdp_rxq_short; diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c index 
99d7737e8ad6..de1ae36b068e 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c @@ -684,6 +684,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud xdp_prog = rcu_dereference(ring->xdp_prog); xdp.rxq = &ring->xdp_rxq; xdp.frame_sz = priv->frag_info[0].frag_stride; + xdp.mb = 0; doorbell_pending = 0; /* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index 599f5b5ebc97..82c3e755dadd 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -1133,6 +1133,7 @@ static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom, xdp->data_end = xdp->data + len; xdp->rxq = &rq->xdp_rxq; xdp->frame_sz = rq->buff.frame0_sz; + xdp->mb = 0; } static struct sk_buff * diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index b150da43adb2..69fab1010752 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -1824,6 +1824,7 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget) true_bufsz = xdp_prog ? PAGE_SIZE : dp->fl_bufsz; xdp.frame_sz = PAGE_SIZE - NFP_NET_RX_BUF_HEADROOM; xdp.rxq = &rx_ring->xdp_rxq; + xdp.mb = 0; tx_ring = r_vec->xdp_ring; while (pkts_polled < budget) { diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c index a2494bf85007..14a54094ca08 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_fp.c +++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c @@ -1096,6 +1096,7 @@ static bool qede_rx_xdp(struct qede_dev *edev, xdp.data_end = xdp.data + *len; xdp.rxq = &rxq->xdp_rxq; xdp.frame_sz = rxq->rx_buf_seg_size; /* PAGE_SIZE when XDP enabled */ + xdp.mb = 0; /* Queues always have a full reset currently, so for the time * being until there's atomic program replace just mark read diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c index aaa112877561..286feb510c21 100644 --- a/drivers/net/ethernet/sfc/rx.c +++ b/drivers/net/ethernet/sfc/rx.c @@ -301,6 +301,7 @@ static bool efx_do_xdp(struct efx_nic *efx, struct efx_channel *channel, xdp.data_end = xdp.data + rx_buf->len; xdp.rxq = &rx_queue->xdp_rxq_info; xdp.frame_sz = efx->rx_page_buf_step; + xdp.mb = 0; xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp); rcu_read_unlock(); diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c index 806eb651cea3..0f0567083a6c 100644 --- a/drivers/net/ethernet/socionext/netsec.c +++ b/drivers/net/ethernet/socionext/netsec.c @@ -947,6 +947,7 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget) xdp.rxq = &dring->xdp_rxq; xdp.frame_sz = PAGE_SIZE; + xdp.mb = 0; rcu_read_lock(); xdp_prog = READ_ONCE(priv->xdp_prog); diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index 9fd1f77190ad..558e0abb03c1 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -407,6 +407,7 @@ static void cpsw_rx_handler(void *token, int len, int status) xdp.data_hard_start = pa; xdp.rxq = &priv->xdp_rxq[ch]; xdp.frame_sz = PAGE_SIZE; + xdp.mb = 0; port = priv->emac_port + cpsw->data.dual_emac; ret = cpsw_run_xdp(priv, ch, &xdp, page, port); diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c index 
f779d2e1b5c5..7baab97e302a 100644 --- a/drivers/net/ethernet/ti/cpsw_new.c +++ b/drivers/net/ethernet/ti/cpsw_new.c @@ -350,6 +350,7 @@ static void cpsw_rx_handler(void *token, int len, int status) xdp.data_hard_start = pa; xdp.rxq = &priv->xdp_rxq[ch]; xdp.frame_sz = PAGE_SIZE; + xdp.mb = 0; ret = cpsw_run_xdp(priv, ch, &xdp, page, priv->emac_port); if (ret != CPSW_XDP_PASS) diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c index 440486d9c999..a4bafc64997f 100644 --- a/drivers/net/hyperv/netvsc_bpf.c +++ b/drivers/net/hyperv/netvsc_bpf.c @@ -50,6 +50,7 @@ u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan, xdp->data_end = xdp->data + len; xdp->rxq = &nvchan->xdp_rxq; xdp->frame_sz = PAGE_SIZE; + xdp->mb = 0; memcpy(xdp->data, data, len); diff --git a/drivers/net/tun.c b/drivers/net/tun.c index be69d272052f..d8380feb7626 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1641,6 +1641,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, xdp.data_end = xdp.data + len; xdp.rxq = &tfile->xdp_rxq; xdp.frame_sz = buflen; + xdp.mb = 0; act = bpf_prog_run_xdp(xdp_prog, &xdp); if (act == XDP_REDIRECT || act == XDP_TX) { @@ -2388,6 +2389,7 @@ static int tun_xdp_one(struct tun_struct *tun, xdp_set_data_meta_invalid(xdp); xdp->rxq = &tfile->xdp_rxq; xdp->frame_sz = buflen; + xdp->mb = 0; act = bpf_prog_run_xdp(xdp_prog, xdp); err = tun_xdp_act(tun, xdp_prog, xdp, act); diff --git a/drivers/net/veth.c b/drivers/net/veth.c index 091e5b4ba042..e25af95a532d 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -711,6 +711,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq, /* SKB "head" area always have tailroom for skb_shared_info */ xdp.frame_sz = (void *)skb_end_pointer(skb) - xdp.data_hard_start; xdp.frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + xdp.mb = 0; orig_data = xdp.data; orig_data_end = xdp.data_end; diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 7145c83c6c8c..3d39d7622840 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -690,6 +690,7 @@ static struct sk_buff *receive_small(struct net_device *dev, xdp.data_meta = xdp.data; xdp.rxq = &rq->xdp_rxq; xdp.frame_sz = buflen; + xdp.mb = 0; orig_data = xdp.data; act = bpf_prog_run_xdp(xdp_prog, &xdp); stats->xdp_packets++; @@ -860,6 +861,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, xdp.data_meta = xdp.data; xdp.rxq = &rq->xdp_rxq; xdp.frame_sz = frame_sz - vi->hdr_len; + xdp.mb = 0; act = bpf_prog_run_xdp(xdp_prog, &xdp); stats->xdp_packets++; diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index 3e9895bec15f..00440ad34ca8 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -870,6 +870,7 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata, xdp->data_end = xdp->data + len; xdp->rxq = &queue->xdp_rxq; xdp->frame_sz = XEN_PAGE_SIZE - XDP_PACKET_HEADROOM; + xdp->mb = 0; act = bpf_prog_run_xdp(prog, xdp); switch (act) { diff --git a/net/core/dev.c b/net/core/dev.c index 9d55bf5d1a65..1e78b028518d 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -4640,6 +4640,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb, /* SKB "head" area always have tailroom for skb_shared_info */ xdp->frame_sz = (void *)skb_end_pointer(skb) - xdp->data_hard_start; xdp->frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + xdp->mb = 0; orig_data_end = xdp->data_end; orig_data = xdp->data; From patchwork Fri Oct 2 
14:42:01 2020
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 267702
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
 shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com,
 dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com,
 echaudro@redhat.com
Subject: [PATCH v4 bpf-next 03/13] net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
Date: Fri, 2 Oct 2020 16:42:01 +0200
Message-Id: <291c3bd6daa3529fab6b07a93585d1d71ae4f280.1601648734.git.lorenzo@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

Update multi-buffer bit (mb) in xdp_buff to notify XDP/eBPF layer and
XDP remote drivers if this is a "non-linear" XDP buffer.
Access skb_shared_info only if xdp_buff mb is set Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/marvell/mvneta.c | 42 +++++++++++++++++---------- 1 file changed, 26 insertions(+), 16 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index d095718355d3..a431e8478297 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -2027,12 +2027,17 @@ static void mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq, struct xdp_buff *xdp, int sync_len, bool napi) { - struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); + struct skb_shared_info *sinfo; int i; + if (likely(!xdp->mb)) + goto out; + + sinfo = xdp_get_shared_info_from_buff(xdp); for (i = 0; i < sinfo->nr_frags; i++) page_pool_put_full_page(rxq->page_pool, skb_frag_page(&sinfo->frags[i]), napi); +out: page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data), sync_len, napi); } @@ -2234,7 +2239,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp, int data_len = -MVNETA_MH_SIZE, len; struct net_device *dev = pp->dev; enum dma_data_direction dma_dir; - struct skb_shared_info *sinfo; if (*size > MVNETA_MAX_RX_BUF_SIZE) { len = MVNETA_MAX_RX_BUF_SIZE; @@ -2259,9 +2263,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp, xdp->data = data + pp->rx_offset_correction + MVNETA_MH_SIZE; xdp->data_end = xdp->data + data_len; xdp_set_data_meta_invalid(xdp); - - sinfo = xdp_get_shared_info_from_buff(xdp); - sinfo->nr_frags = 0; } static void @@ -2272,9 +2273,9 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp, struct page *page) { struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); + int data_len, len, nfrags = xdp->mb ? sinfo->nr_frags : 0; struct net_device *dev = pp->dev; enum dma_data_direction dma_dir; - int data_len, len; if (*size > MVNETA_MAX_RX_BUF_SIZE) { len = MVNETA_MAX_RX_BUF_SIZE; @@ -2288,17 +2289,21 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp, rx_desc->buf_phys_addr, len, dma_dir); - if (data_len > 0 && sinfo->nr_frags < MAX_SKB_FRAGS) { - skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags]; + if (data_len > 0 && nfrags < MAX_SKB_FRAGS) { + skb_frag_t *frag = &sinfo->frags[nfrags]; skb_frag_off_set(frag, pp->rx_offset_correction); skb_frag_size_set(frag, data_len); __skb_frag_set_page(frag, page); - sinfo->nr_frags++; - - rx_desc->buf_phys_addr = 0; + nfrags++; + } else { + page_pool_put_full_page(rxq->page_pool, page, true); } + + rx_desc->buf_phys_addr = 0; + sinfo->nr_frags = nfrags; *size -= len; + xdp->mb = 1; } static struct sk_buff * @@ -2306,7 +2311,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq, struct xdp_buff *xdp, u32 desc_status) { struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); - int i, num_frags = sinfo->nr_frags; + int i, num_frags = xdp->mb ? 
sinfo->nr_frags : 0; struct sk_buff *skb; skb = build_skb(xdp->data_hard_start, PAGE_SIZE); @@ -2319,6 +2324,9 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq, skb_put(skb, xdp->data_end - xdp->data); mvneta_rx_csum(pp, desc_status, skb); + if (likely(!xdp->mb)) + return skb; + for (i = 0; i < num_frags; i++) { skb_frag_t *frag = &sinfo->frags[i]; @@ -2338,13 +2346,14 @@ static int mvneta_rx_swbm(struct napi_struct *napi, { int rx_proc = 0, rx_todo, refill, size = 0; struct net_device *dev = pp->dev; - struct xdp_buff xdp_buf = { - .frame_sz = PAGE_SIZE, - .rxq = &rxq->xdp_rxq, - }; struct mvneta_stats ps = {}; struct bpf_prog *xdp_prog; u32 desc_status, frame_sz; + struct xdp_buff xdp_buf; + + xdp_buf.data_hard_start = NULL; + xdp_buf.frame_sz = PAGE_SIZE; + xdp_buf.rxq = &rxq->xdp_rxq; /* Get number of received packets */ rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq); @@ -2377,6 +2386,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi, frame_sz = size - ETH_FCS_LEN; desc_status = rx_status; + xdp_buf.mb = 0; mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, &size, page); } else { From patchwork Fri Oct 2 14:42:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 289097 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 13CFCC4727D for ; Fri, 2 Oct 2020 14:42:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C280B208B6 for ; Fri, 2 Oct 2020 14:42:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649761; bh=S/93yEwe9N0oMZRZSBWUZ2mWvYgyKCZBmf9CGEZiUhk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=jy4tLr+Inki5rKwup2qphcsCx1n00fVd6bUCY262Otlyu2MtVRd7pf1YGWdcYB4DT bjqn/RIq69Z5vM21iLwFm3jSIgTWbpxtqVjuiPgP+TS4y3d7nyiKD0KrHZZyxwV6EK hUNfG29wr9eiR9er9//v3FsgXmUPkAQGXy1pSwz0= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388197AbgJBOml (ORCPT ); Fri, 2 Oct 2020 10:42:41 -0400 Received: from mail.kernel.org ([198.145.29.99]:60808 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726017AbgJBOmi (ORCPT ); Fri, 2 Oct 2020 10:42:38 -0400 Received: from lore-desk.redhat.com (unknown [176.207.245.61]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 08E20206FA; Fri, 2 Oct 2020 14:42:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649757; bh=S/93yEwe9N0oMZRZSBWUZ2mWvYgyKCZBmf9CGEZiUhk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=a4MToq/YVevcVPqy4LCiSEW3uaTD5SzGqeEQC6c1hAAqRZ/MGxMn3etpST9q6ePg7 GuGjfwrmPi1C1muSaTcVAJGqauXWzkJOonzEuIHHdwCsVy6vnl+MJhsTs3X0xhaFFu l92cEYL5ejr51oLYOhQEj2o9ij609G8BHMS1kEzA= From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, 
shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com, echaudro@redhat.com Subject: [PATCH v4 bpf-next 04/13] xdp: add multi-buff support to xdp_return_{buff/frame} Date: Fri, 2 Oct 2020 16:42:02 +0200 Message-Id: X-Mailer: git-send-email 2.26.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Take into account if the received xdp_buff/xdp_frame is non-linear recycling/returning the frame memory to the allocator Signed-off-by: Lorenzo Bianconi --- include/net/xdp.h | 18 ++++++++++++++++-- net/core/xdp.c | 39 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 55 insertions(+), 2 deletions(-) diff --git a/include/net/xdp.h b/include/net/xdp.h index 42f439f9fcda..4d47076546ff 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -208,10 +208,24 @@ void __xdp_release_frame(void *data, struct xdp_mem_info *mem); static inline void xdp_release_frame(struct xdp_frame *xdpf) { struct xdp_mem_info *mem = &xdpf->mem; + struct skb_shared_info *sinfo; + int i; /* Curr only page_pool needs this */ - if (mem->type == MEM_TYPE_PAGE_POOL) - __xdp_release_frame(xdpf->data, mem); + if (mem->type != MEM_TYPE_PAGE_POOL) + return; + + if (likely(!xdpf->mb)) + goto out; + + sinfo = xdp_get_shared_info_from_frame(xdpf); + for (i = 0; i < sinfo->nr_frags; i++) { + struct page *page = skb_frag_page(&sinfo->frags[i]); + + __xdp_release_frame(page_address(page), mem); + } +out: + __xdp_release_frame(xdpf->data, mem); } int xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq, diff --git a/net/core/xdp.c b/net/core/xdp.c index 884f140fc3be..6d4fd4dddb00 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -370,18 +370,57 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct) void xdp_return_frame(struct xdp_frame *xdpf) { + struct skb_shared_info *sinfo; + int i; + + if (likely(!xdpf->mb)) + goto out; + + sinfo = xdp_get_shared_info_from_frame(xdpf); + for (i = 0; i < sinfo->nr_frags; i++) { + struct page *page = skb_frag_page(&sinfo->frags[i]); + + __xdp_return(page_address(page), &xdpf->mem, false); + } +out: __xdp_return(xdpf->data, &xdpf->mem, false); } EXPORT_SYMBOL_GPL(xdp_return_frame); void xdp_return_frame_rx_napi(struct xdp_frame *xdpf) { + struct skb_shared_info *sinfo; + int i; + + if (likely(!xdpf->mb)) + goto out; + + sinfo = xdp_get_shared_info_from_frame(xdpf); + for (i = 0; i < sinfo->nr_frags; i++) { + struct page *page = skb_frag_page(&sinfo->frags[i]); + + __xdp_return(page_address(page), &xdpf->mem, true); + } +out: __xdp_return(xdpf->data, &xdpf->mem, true); } EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi); void xdp_return_buff(struct xdp_buff *xdp) { + struct skb_shared_info *sinfo; + int i; + + if (likely(!xdp->mb)) + goto out; + + sinfo = xdp_get_shared_info_from_buff(xdp); + for (i = 0; i < sinfo->nr_frags; i++) { + struct page *page = skb_frag_page(&sinfo->frags[i]); + + __xdp_return(page_address(page), &xdp->rxq->mem, true); + } +out: __xdp_return(xdp->data, &xdp->rxq->mem, true); } From patchwork Fri Oct 2 14:42:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 289096 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, 
INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E6EC5C47423 for ; Fri, 2 Oct 2020 14:42:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B458D206FA for ; Fri, 2 Oct 2020 14:42:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649767; bh=DHwBCoNNncxM6JrG4uH+mEpyA4XxNAtuDwPzRels8lE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=UY3jGCtP/CVpif3m/J04wZDyWQ0I4HmMaPemC+uNZMdBx7APQRZiSxWZZPCQEvp0Y zuxxHd2I04W5FF1iJmGiVrMqqr1EzWVroRtMNuF9QdK8LR0ph5SEu5CMKLhuMXHoDX RR59hYuDPQOzSKoqVS9dVNtn/qnYBJugmmvHD+F4= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388202AbgJBOmq (ORCPT ); Fri, 2 Oct 2020 10:42:46 -0400 Received: from mail.kernel.org ([198.145.29.99]:60844 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726017AbgJBOmm (ORCPT ); Fri, 2 Oct 2020 10:42:42 -0400 Received: from lore-desk.redhat.com (unknown [176.207.245.61]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 0CD1C20708; Fri, 2 Oct 2020 14:42:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649761; bh=DHwBCoNNncxM6JrG4uH+mEpyA4XxNAtuDwPzRels8lE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jeltZfKJhEb7loo6HXku7TettDBMYy+9Vdvf+oS2UpbUSloyQpI2cjzp1ugPazfXJ TR3q5xlgGqg0bJxYJ2JQ4kpzRKkN1Q14jDp4TvvUN4J58NKSuqepuNdfuQRtP80DUK JqefJe42vBXIJ9ZtDjBO4GGY2ZcGdqfdh6yq3bpk= From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com, echaudro@redhat.com Subject: [PATCH v4 bpf-next 05/13] net: mvneta: add multi buffer support to XDP_TX Date: Fri, 2 Oct 2020 16:42:03 +0200 Message-Id: <983d6a55ddf2ef68b0269554ec2f0487271b4d12.1601648734.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Introduce the capability to map non-linear xdp buffer running mvneta_xdp_submit_frame() for XDP_TX and XDP_REDIRECT Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/marvell/mvneta.c | 79 +++++++++++++++++---------- 1 file changed, 49 insertions(+), 30 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index a431e8478297..f709650974ea 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -1852,8 +1852,8 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp, bytes_compl += buf->skb->len; pkts_compl++; dev_kfree_skb_any(buf->skb); - } else if (buf->type == MVNETA_TYPE_XDP_TX || - buf->type == MVNETA_TYPE_XDP_NDO) { + } else if ((buf->type == MVNETA_TYPE_XDP_TX || + buf->type == MVNETA_TYPE_XDP_NDO) && buf->xdpf) { if (napi && buf->type == MVNETA_TYPE_XDP_TX) xdp_return_frame_rx_napi(buf->xdpf); else @@ -2046,43 +2046,62 @@ static int mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq, struct xdp_frame *xdpf, bool dma_map) { - struct 
mvneta_tx_desc *tx_desc; - struct mvneta_tx_buf *buf; - dma_addr_t dma_addr; + struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf); + int i, num_frames = xdpf->mb ? sinfo->nr_frags + 1 : 1; + struct mvneta_tx_desc *tx_desc = NULL; + struct page *page; - if (txq->count >= txq->tx_stop_threshold) + if (txq->count + num_frames >= txq->tx_stop_threshold) return MVNETA_XDP_DROPPED; - tx_desc = mvneta_txq_next_desc_get(txq); + for (i = 0; i < num_frames; i++) { + struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index]; + skb_frag_t *frag = i ? &sinfo->frags[i - 1] : NULL; + int len = frag ? skb_frag_size(frag) : xdpf->len; + dma_addr_t dma_addr; - buf = &txq->buf[txq->txq_put_index]; - if (dma_map) { - /* ndo_xdp_xmit */ - dma_addr = dma_map_single(pp->dev->dev.parent, xdpf->data, - xdpf->len, DMA_TO_DEVICE); - if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) { - mvneta_txq_desc_put(txq); - return MVNETA_XDP_DROPPED; + tx_desc = mvneta_txq_next_desc_get(txq); + if (dma_map) { + /* ndo_xdp_xmit */ + void *data; + + data = frag ? skb_frag_address(frag) : xdpf->data; + dma_addr = dma_map_single(pp->dev->dev.parent, data, + len, DMA_TO_DEVICE); + if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) { + for (; i >= 0; i--) + mvneta_txq_desc_put(txq); + return MVNETA_XDP_DROPPED; + } + buf->type = MVNETA_TYPE_XDP_NDO; + } else { + page = frag ? skb_frag_page(frag) + : virt_to_page(xdpf->data); + dma_addr = page_pool_get_dma_addr(page); + if (frag) + dma_addr += skb_frag_off(frag); + else + dma_addr += sizeof(*xdpf) + xdpf->headroom; + dma_sync_single_for_device(pp->dev->dev.parent, + dma_addr, len, + DMA_BIDIRECTIONAL); + buf->type = MVNETA_TYPE_XDP_TX; } - buf->type = MVNETA_TYPE_XDP_NDO; - } else { - struct page *page = virt_to_page(xdpf->data); + buf->xdpf = i ? 
NULL : xdpf;
-		dma_addr = page_pool_get_dma_addr(page) +
-			   sizeof(*xdpf) + xdpf->headroom;
-		dma_sync_single_for_device(pp->dev->dev.parent, dma_addr,
-					   xdpf->len, DMA_BIDIRECTIONAL);
-		buf->type = MVNETA_TYPE_XDP_TX;
+		if (!i)
+			tx_desc->command = MVNETA_TXD_F_DESC;
+		tx_desc->buf_phys_addr = dma_addr;
+		tx_desc->data_size = len;
+
+		mvneta_txq_inc_put(txq);
 	}
-	buf->xdpf = xdpf;
-	tx_desc->command = MVNETA_TXD_FLZ_DESC;
-	tx_desc->buf_phys_addr = dma_addr;
-	tx_desc->data_size = xdpf->len;
+	/*last descriptor */
+	tx_desc->command |= MVNETA_TXD_L_DESC | MVNETA_TXD_Z_PAD;
 
-	mvneta_txq_inc_put(txq);
-	txq->pending++;
-	txq->count++;
+	txq->pending += num_frames;
+	txq->count += num_frames;
 
 	return MVNETA_XDP_TX;
 }

From patchwork Fri Oct 2 14:42:04 2020
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 267701
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
 shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com,
 dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com,
 echaudro@redhat.com
Subject: [PATCH v4 bpf-next 06/13] bpf: introduce bpf_xdp_get_frags_{count, total_size} helpers
Date: Fri, 2 Oct 2020 16:42:04 +0200
Message-Id:
X-Mailing-List: netdev@vger.kernel.org
From: Sameeh Jubran

Introduce the two following bpf helpers in order to provide some metadata
about an xdp multi-buff frame to the bpf layer:
- bpf_xdp_get_frags_count() get the number of fragments for a given xdp
  multi-buffer.
- bpf_xdp_get_frags_total_size() get the total size of fragments for a given
  xdp multi-buffer.

Signed-off-by: Sameeh Jubran
Co-developed-by: Lorenzo Bianconi
Signed-off-by: Lorenzo Bianconi
---
 include/uapi/linux/bpf.h       | 14 ++++++++++++
 net/core/filter.c              | 42 ++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h | 14 ++++++++++++
 3 files changed, 70 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 4f556cfcbfbe..0715995eb18c 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3668,6 +3668,18 @@ union bpf_attr {
  *	Return
  *		The helper returns **TC_ACT_REDIRECT** on success or
  *		**TC_ACT_SHOT** on error.
+ *
+ * int bpf_xdp_get_frags_count(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the number of fragments for a given xdp multi-buffer.
+ *	Return
+ *		The number of fragments
+ *
+ * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the total size of fragments for a given xdp multi-buffer.
+ *	Return
+ *		The total size of fragments for a given xdp multi-buffer.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3823,6 +3835,8 @@ union bpf_attr {
 	FN(seq_printf_btf),		\
 	FN(skb_cgroup_classid),		\
 	FN(redirect_neigh),		\
+	FN(xdp_get_frags_count),	\
+	FN(xdp_get_frags_total_size),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/net/core/filter.c b/net/core/filter.c
index 3fb6adad1957..4c55b788c4c5 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3739,6 +3739,44 @@ static const struct bpf_func_proto bpf_xdp_adjust_head_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_1(bpf_xdp_get_frags_count, struct xdp_buff*, xdp)
+{
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+
+	return xdp->mb ?
sinfo->nr_frags : 0; +} + +const struct bpf_func_proto bpf_xdp_get_frags_count_proto = { + .func = bpf_xdp_get_frags_count, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_PTR_TO_CTX, +}; + +BPF_CALL_1(bpf_xdp_get_frags_total_size, struct xdp_buff*, xdp) +{ + struct skb_shared_info *sinfo; + int nfrags, i, size = 0; + + if (likely(!xdp->mb)) + return 0; + + sinfo = xdp_get_shared_info_from_buff(xdp); + nfrags = min_t(u8, sinfo->nr_frags, MAX_SKB_FRAGS); + + for (i = 0; i < nfrags; i++) + size += skb_frag_size(&sinfo->frags[i]); + + return size; +} + +const struct bpf_func_proto bpf_xdp_get_frags_total_size_proto = { + .func = bpf_xdp_get_frags_total_size, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_PTR_TO_CTX, +}; + BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset) { void *data_hard_end = xdp_data_hard_end(xdp); /* use xdp->frame_sz */ @@ -7092,6 +7130,10 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return &bpf_xdp_redirect_map_proto; case BPF_FUNC_xdp_adjust_tail: return &bpf_xdp_adjust_tail_proto; + case BPF_FUNC_xdp_get_frags_count: + return &bpf_xdp_get_frags_count_proto; + case BPF_FUNC_xdp_get_frags_total_size: + return &bpf_xdp_get_frags_total_size_proto; case BPF_FUNC_fib_lookup: return &bpf_xdp_fib_lookup_proto; #ifdef CONFIG_INET diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 4f556cfcbfbe..0715995eb18c 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -3668,6 +3668,18 @@ union bpf_attr { * Return * The helper returns **TC_ACT_REDIRECT** on success or * **TC_ACT_SHOT** on error. + * + * int bpf_xdp_get_frags_count(struct xdp_buff *xdp_md) + * Description + * Get the number of fragments for a given xdp multi-buffer. + * Return + * The number of fragments + * + * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md) + * Description + * Get the total size of fragments for a given xdp multi-buffer. + * Return + * The total size of fragments for a given xdp multi-buffer. 
 */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3823,6 +3835,8 @@ union bpf_attr {
 	FN(seq_printf_btf),		\
 	FN(skb_cgroup_classid),		\
 	FN(redirect_neigh),		\
+	FN(xdp_get_frags_count),	\
+	FN(xdp_get_frags_total_size),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper

From patchwork Fri Oct 2 14:42:05 2020
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 289095
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
 shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com,
 dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com,
 echaudro@redhat.com
Subject: [PATCH v4 bpf-next 07/13] samples/bpf: add bpf program that uses xdp mb helpers
Date: Fri, 2 Oct 2020 16:42:05 +0200
Message-Id: <8ace573b1a8d74f1a1bd072c08283a8560bd1b48.1601648734.git.lorenzo@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

From: Sameeh Jubran

The bpf program returns XDP_PASS for every packet and calculates the total
number of bytes in its linear and paged parts.
The program is executed with: ./xdp_mb [if name] and has the following output format: [if index]: [rx packet count] pkt/sec, [number of bytes] bytes/sec Signed-off-by: Shay Agroskin Signed-off-by: Sameeh Jubran Signed-off-by: Lorenzo Bianconi --- samples/bpf/Makefile | 3 + samples/bpf/xdp_mb_kern.c | 68 ++++++++++++++ samples/bpf/xdp_mb_user.c | 182 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 253 insertions(+) create mode 100644 samples/bpf/xdp_mb_kern.c create mode 100644 samples/bpf/xdp_mb_user.c diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile index 4f1ed0e3cf9f..12e32516f02a 100644 --- a/samples/bpf/Makefile +++ b/samples/bpf/Makefile @@ -54,6 +54,7 @@ tprogs-y += task_fd_query tprogs-y += xdp_sample_pkts tprogs-y += ibumad tprogs-y += hbm +tprogs-y += xdp_mb # Libbpf dependencies LIBBPF = $(TOOLS_PATH)/lib/bpf/libbpf.a @@ -111,6 +112,7 @@ task_fd_query-objs := bpf_load.o task_fd_query_user.o $(TRACE_HELPERS) xdp_sample_pkts-objs := xdp_sample_pkts_user.o $(TRACE_HELPERS) ibumad-objs := bpf_load.o ibumad_user.o $(TRACE_HELPERS) hbm-objs := bpf_load.o hbm.o $(CGROUP_HELPERS) +xdp_mb-objs := xdp_mb_user.o # Tell kbuild to always build the programs always-y := $(tprogs-y) @@ -172,6 +174,7 @@ always-y += ibumad_kern.o always-y += hbm_out_kern.o always-y += hbm_edt_kern.o always-y += xdpsock_kern.o +always-y += xdp_mb_kern.o ifeq ($(ARCH), arm) # Strip all except -D__LINUX_ARM_ARCH__ option needed to handle linux diff --git a/samples/bpf/xdp_mb_kern.c b/samples/bpf/xdp_mb_kern.c new file mode 100644 index 000000000000..f366bce92fc7 --- /dev/null +++ b/samples/bpf/xdp_mb_kern.c @@ -0,0 +1,68 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright 2020 Amazon.com, Inc. or its affiliates. All rights reserved. + */ +#define KBUILD_MODNAME "foo" +#include +#include +#include +#include +#include +#include +#include +#include + +/* count RX packets */ +struct { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __type(key, u32); + __type(value, long); + __uint(max_entries, 1); +} rx_cnt SEC(".maps"); + +/* count RX fragments */ +struct { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __type(key, u32); + __type(value, long); + __uint(max_entries, 1); +} rx_frags SEC(".maps"); + +/* count total number of bytes */ +struct { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __type(key, u32); + __type(value, long); + __uint(max_entries, 1); +} tot_len SEC(".maps"); + +SEC("xdp_mb") +int xdp_mb_prog(struct xdp_md *ctx) +{ + void *data_end = (void *)(long)ctx->data_end; + void *data = (void *)(long)ctx->data; + u32 frag_offset = 0, frag_size = 0; + u32 key = 0, nfrags; + long *value; + int i, len; + + value = bpf_map_lookup_elem(&rx_cnt, &key); + if (value) + *value += 1; + + len = data_end - data; + nfrags = bpf_xdp_get_frags_count(ctx); + len += bpf_xdp_get_frags_total_size(ctx); + + value = bpf_map_lookup_elem(&tot_len, &key); + if (value) + *value += len; + + value = bpf_map_lookup_elem(&rx_frags, &key); + if (value) + *value += nfrags; + + return XDP_PASS; +} + +char _license[] SEC("license") = "GPL"; diff --git a/samples/bpf/xdp_mb_user.c b/samples/bpf/xdp_mb_user.c new file mode 100644 index 000000000000..6f555e94b748 --- /dev/null +++ b/samples/bpf/xdp_mb_user.c @@ -0,0 +1,182 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright 2020 Amazon.com, Inc. or its affiliates. All rights reserved. 
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "bpf_util.h" +#include +#include + +static __u32 xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST | XDP_FLAGS_DRV_MODE; +static __u32 prog_id; +static int rx_cnt_fd, tot_len_fd, rx_frags_fd; +static int ifindex; + +static void int_exit(int sig) +{ + __u32 curr_prog_id = 0; + + if (bpf_get_link_xdp_id(ifindex, &curr_prog_id, xdp_flags)) { + printf("bpf_get_link_xdp_id failed\n"); + exit(1); + } + if (prog_id == curr_prog_id) + bpf_set_link_xdp_fd(ifindex, -1, xdp_flags); + else if (!curr_prog_id) + printf("couldn't find a prog id on a given interface\n"); + else + printf("program on interface changed, not removing\n"); + exit(0); +} + +/* count total packets and bytes per second */ +static void poll_stats(int interval) +{ + unsigned int nr_cpus = bpf_num_possible_cpus(); + __u64 rx_frags_cnt[nr_cpus], rx_frags_cnt_prev[nr_cpus]; + __u64 tot_len[nr_cpus], tot_len_prev[nr_cpus]; + __u64 rx_cnt[nr_cpus], rx_cnt_prev[nr_cpus]; + int i; + + memset(rx_frags_cnt_prev, 0, sizeof(rx_frags_cnt_prev)); + memset(tot_len_prev, 0, sizeof(tot_len_prev)); + memset(rx_cnt_prev, 0, sizeof(rx_cnt_prev)); + + while (1) { + __u64 n_rx_pkts = 0, rx_frags = 0, rx_len = 0; + __u32 key = 0; + + sleep(interval); + + /* fetch rx cnt */ + assert(bpf_map_lookup_elem(rx_cnt_fd, &key, rx_cnt) == 0); + for (i = 0; i < nr_cpus; i++) + n_rx_pkts += (rx_cnt[i] - rx_cnt_prev[i]); + memcpy(rx_cnt_prev, rx_cnt, sizeof(rx_cnt)); + + /* fetch rx frags */ + assert(bpf_map_lookup_elem(rx_frags_fd, &key, rx_frags_cnt) == 0); + for (i = 0; i < nr_cpus; i++) + rx_frags += (rx_frags_cnt[i] - rx_frags_cnt_prev[i]); + memcpy(rx_frags_cnt_prev, rx_frags_cnt, sizeof(rx_frags_cnt)); + + /* count total bytes of packets */ + assert(bpf_map_lookup_elem(tot_len_fd, &key, tot_len) == 0); + for (i = 0; i < nr_cpus; i++) + rx_len += (tot_len[i] - tot_len_prev[i]); + memcpy(tot_len_prev, tot_len, sizeof(tot_len)); + + if (n_rx_pkts) + printf("ifindex %i: %10llu pkt/s, %10llu frags/s, %10llu bytes/s\n", + ifindex, n_rx_pkts / interval, rx_frags / interval, + rx_len / interval); + } +} + +static void usage(const char *prog) +{ + fprintf(stderr, + "%s: %s [OPTS] IFACE\n\n" + "OPTS:\n" + " -F force loading prog\n", + __func__, prog); +} + +int main(int argc, char **argv) +{ + struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY}; + struct bpf_prog_load_attr prog_load_attr = { + .prog_type = BPF_PROG_TYPE_XDP, + }; + int prog_fd, opt; + struct bpf_prog_info info = {}; + __u32 info_len = sizeof(info); + const char *optstr = "F"; + struct bpf_program *prog; + struct bpf_object *obj; + char filename[256]; + int err; + + while ((opt = getopt(argc, argv, optstr)) != -1) { + switch (opt) { + case 'F': + xdp_flags &= ~XDP_FLAGS_UPDATE_IF_NOEXIST; + break; + default: + usage(basename(argv[0])); + return 1; + } + } + + if (optind == argc) { + usage(basename(argv[0])); + return 1; + } + + if (setrlimit(RLIMIT_MEMLOCK, &r)) { + perror("setrlimit(RLIMIT_MEMLOCK)"); + return 1; + } + + ifindex = if_nametoindex(argv[optind]); + if (!ifindex) { + perror("if_nametoindex"); + return 1; + } + + snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); + prog_load_attr.file = filename; + + if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd)) + return 1; + + prog = bpf_program__next(NULL, obj); + if (!prog) { + printf("finding a prog in obj file failed\n"); + return 1; + } + + if (!prog_fd) { + printf("bpf_prog_load_xattr: %s\n", 
strerror(errno)); + return 1; + } + + rx_cnt_fd = bpf_object__find_map_fd_by_name(obj, "rx_cnt"); + rx_frags_fd = bpf_object__find_map_fd_by_name(obj, "rx_frags"); + tot_len_fd = bpf_object__find_map_fd_by_name(obj, "tot_len"); + if (rx_cnt_fd < 0 || rx_frags_fd < 0 || tot_len_fd < 0) { + printf("bpf_object__find_map_fd_by_name failed\n"); + return 1; + } + + if (bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags) < 0) { + printf("ERROR: link set xdp fd failed on %d\n", ifindex); + return 1; + } + + err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len); + if (err) { + printf("can't get prog info - %s\n", strerror(errno)); + return err; + } + prog_id = info.id; + + signal(SIGINT, int_exit); + signal(SIGTERM, int_exit); + + poll_stats(1); + + return 0; +} From patchwork Fri Oct 2 14:42:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 267700 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5500AC47423 for ; Fri, 2 Oct 2020 14:43:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 11BB02074B for ; Fri, 2 Oct 2020 14:43:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649782; bh=dB1yq4ULn7DJ3/BsHkzrZDjsyKFHyuueOGwuTCFAzbM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=EZC3BS2lIK5KtIVQustGrJvk4T4FgIkEeU+X3CTlNy70getYq44uXm1Jt3gJEBxMy 2f/Qc+f8kOuNC7Tozybg0s/7tm3LupoVmOfz37Le8btTnpeOCJbeeUmiZBUPH2yMUJ bxJ8toATdCTPEgRLRmLFiiUPGoAkk4MFYMWc92FM= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388224AbgJBOnB (ORCPT ); Fri, 2 Oct 2020 10:43:01 -0400 Received: from mail.kernel.org ([198.145.29.99]:60970 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726017AbgJBOnA (ORCPT ); Fri, 2 Oct 2020 10:43:00 -0400 Received: from lore-desk.redhat.com (unknown [176.207.245.61]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 1A92A207DE; Fri, 2 Oct 2020 14:42:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649780; bh=dB1yq4ULn7DJ3/BsHkzrZDjsyKFHyuueOGwuTCFAzbM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fIo4UkQHYEYVzKGSbQnmudflyrfXBDH6cpRDxP6IN4AQCiHmdomA3peLotqpqicD7 X0YSEXZeeM2gVNFKY/BllxOduYuYlEVufy/p7OQIbV7ie6/X3b6KaYVFcPy5NMQ0Iq dGx9xTWKSw54/AFaAi9oGlEMw6dIrGUfJVtfJ1fI= From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com, echaudro@redhat.com Subject: [PATCH v4 bpf-next 08/13] bpf: move user_size out of bpf_test_init Date: Fri, 2 Oct 2020 16:42:06 +0200 Message-Id: <4f633a8510040b7bc9b7fc680a33b86654396e51.1601648734.git.lorenzo@kernel.org> X-Mailer: git-send-email 
2.26.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Rely on data_size_in in bpf_test_init routine signature. This is a preliminary patch to introduce xdp multi-buff selftest Signed-off-by: Lorenzo Bianconi --- net/bpf/test_run.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index c1c30a9f76f3..bd291f5f539c 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -171,11 +171,10 @@ __diag_pop(); ALLOW_ERROR_INJECTION(bpf_modify_return_test, ERRNO); -static void *bpf_test_init(const union bpf_attr *kattr, u32 size, - u32 headroom, u32 tailroom) +static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size, + u32 size, u32 headroom, u32 tailroom) { void __user *data_in = u64_to_user_ptr(kattr->test.data_in); - u32 user_size = kattr->test.data_size_in; void *data; if (size < ETH_HLEN || size > PAGE_SIZE - headroom - tailroom) @@ -495,7 +494,8 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr, if (kattr->test.flags || kattr->test.cpu) return -EINVAL; - data = bpf_test_init(kattr, size, NET_SKB_PAD + NET_IP_ALIGN, + data = bpf_test_init(kattr, kattr->test.data_size_in, + size, NET_SKB_PAD + NET_IP_ALIGN, SKB_DATA_ALIGN(sizeof(struct skb_shared_info))); if (IS_ERR(data)) return PTR_ERR(data); @@ -632,7 +632,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, /* XDP have extra tailroom as (most) drivers use full page */ max_data_sz = 4096 - headroom - tailroom; - data = bpf_test_init(kattr, max_data_sz, headroom, tailroom); + data = bpf_test_init(kattr, kattr->test.data_size_in, + max_data_sz, headroom, tailroom); if (IS_ERR(data)) return PTR_ERR(data); @@ -698,7 +699,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, if (size < ETH_HLEN) return -EINVAL; - data = bpf_test_init(kattr, size, 0, 0); + data = bpf_test_init(kattr, kattr->test.data_size_in, size, 0, 0); if (IS_ERR(data)) return PTR_ERR(data); From patchwork Fri Oct 2 14:42:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 289094 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54529C4727D for ; Fri, 2 Oct 2020 14:43:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1CFD520795 for ; Fri, 2 Oct 2020 14:43:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649786; bh=f8ZZJ5+Q+i60RqTkYKstoG1GgUDRCcWa4fDxO6qFmUE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=mKxQx9CSf/GC+rVisfHBK+026w3doMVp1jY62K8E974InauUyar3sd2CPiWgwW3Ar YMcRKyaqF3fDmhgU3EV+EBttduXqasYjw6TokZNIoNX0vrFVwMh86mDVGvWL5oa42H UGN3AtsIto+uV7ubvnZ5q2u6BQ3p+LYNRSl89ZS4= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388230AbgJBOnF (ORCPT ); Fri, 2 Oct 2020 10:43:05 -0400 Received: from mail.kernel.org ([198.145.29.99]:32770 "EHLO 
mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726017AbgJBOnE (ORCPT ); Fri, 2 Oct 2020 10:43:04 -0400 Received: from lore-desk.redhat.com (unknown [176.207.245.61]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id A2ED72085B; Fri, 2 Oct 2020 14:43:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649784; bh=f8ZZJ5+Q+i60RqTkYKstoG1GgUDRCcWa4fDxO6qFmUE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=H/LHN+Xq6SI/snyEz261dAtXpELklHCRaH/kT6MggUrm+gKBHwKu8MuwfzeEwRxih Kxf6v9nvIc8ufBYjwoFXUiGQhkMVbWGAtokWsfc5pfQnRa39lM1HM12DNCd4X3flTH g7BbOvkPNcAMEW1fmUmHSDqnmfgKA7KaE8rQN3iE= From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com, echaudro@redhat.com Subject: [PATCH v4 bpf-next 09/13] bpf: introduce multibuff support to bpf_prog_test_run_xdp() Date: Fri, 2 Oct 2020 16:42:07 +0200 Message-Id: X-Mailer: git-send-email 2.26.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Introduce the capability to allocate a xdp multi-buff in bpf_prog_test_run_xdp routine. This is a preliminary patch to introduce the selftests for new xdp multi-buff ebpf helpers Signed-off-by: Lorenzo Bianconi --- net/bpf/test_run.c | 51 ++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 43 insertions(+), 8 deletions(-) diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index bd291f5f539c..ec7286cd051b 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -617,44 +617,79 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, { u32 tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); u32 headroom = XDP_PACKET_HEADROOM; - u32 size = kattr->test.data_size_in; u32 repeat = kattr->test.repeat; struct netdev_rx_queue *rxqueue; + struct skb_shared_info *sinfo; struct xdp_buff xdp = {}; + u32 max_data_sz, size; u32 retval, duration; - u32 max_data_sz; + int i, ret, data_len; void *data; - int ret; if (kattr->test.ctx_in || kattr->test.ctx_out) return -EINVAL; - /* XDP have extra tailroom as (most) drivers use full page */ max_data_sz = 4096 - headroom - tailroom; + size = min_t(u32, kattr->test.data_size_in, max_data_sz); + data_len = size; - data = bpf_test_init(kattr, kattr->test.data_size_in, - max_data_sz, headroom, tailroom); + data = bpf_test_init(kattr, size, max_data_sz, headroom, tailroom); if (IS_ERR(data)) return PTR_ERR(data); xdp.data_hard_start = data; xdp.data = data + headroom; xdp.data_meta = xdp.data; - xdp.data_end = xdp.data + size; + xdp.data_end = xdp.data + data_len; xdp.frame_sz = headroom + max_data_sz + tailroom; + sinfo = xdp_get_shared_info_from_buff(&xdp); + if (unlikely(kattr->test.data_size_in > size)) { + void __user *data_in = u64_to_user_ptr(kattr->test.data_in); + + while (size < kattr->test.data_size_in) { + skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags]; + struct page *page; + int data_len; + + page = alloc_page(GFP_KERNEL); + if (!page) { + ret = -ENOMEM; + goto out; + } + + __skb_frag_set_page(frag, page); + data_len = min_t(int, kattr->test.data_size_in - size, + PAGE_SIZE); + skb_frag_size_set(frag, data_len); + if (copy_from_user(page_address(page), data_in + size, + 
data_len)) { + ret = -EFAULT; + goto out; + } + sinfo->nr_frags++; + size += data_len; + } + xdp.mb = 1; + } + rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0); xdp.rxq = &rxqueue->xdp_rxq; bpf_prog_change_xdp(NULL, prog); ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true); if (ret) goto out; + if (xdp.data != data + headroom || xdp.data_end != xdp.data + size) - size = xdp.data_end - xdp.data; + size += xdp.data_end - xdp.data - data_len; + ret = bpf_test_finish(kattr, uattr, xdp.data, size, retval, duration); out: bpf_prog_change_xdp(prog, NULL); + for (i = 0; i < sinfo->nr_frags; i++) + __free_page(skb_frag_page(&sinfo->frags[i])); kfree(data); + return ret; } From patchwork Fri Oct 2 14:42:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 267699 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BC2CFC47423 for ; Fri, 2 Oct 2020 14:43:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 814B02137B for ; Fri, 2 Oct 2020 14:43:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649790; bh=nTkR6CoGaIVQB6iOYx0cq2wjexcHkxglFc3NY+26Wsw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=sv6fRVlI8gbpA+4GyobrnL98OO04eSe+aev3kcB0pThGiB+oAgZukj7FipMNJ+lir eSSZgkHaNsurpsG7MRABpCY1KV/e+Te5ls5tBfjU1/9+REeEEA+r8GL4pSUglYaLx+ 3XwEO3Os+Uzvr5V2gk3ZEaurDsvE9QwmxU4OxK9E= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388178AbgJBOnK (ORCPT ); Fri, 2 Oct 2020 10:43:10 -0400 Received: from mail.kernel.org ([198.145.29.99]:32808 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726017AbgJBOnI (ORCPT ); Fri, 2 Oct 2020 10:43:08 -0400 Received: from lore-desk.redhat.com (unknown [176.207.245.61]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 10620206FA; Fri, 2 Oct 2020 14:43:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649787; bh=nTkR6CoGaIVQB6iOYx0cq2wjexcHkxglFc3NY+26Wsw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Ux2FDOoM8BjL412Vr1UJU5hLGWpEa17G0a1LiL+kj5JCLk2juJr37L64GaJv3iSmt xGfecMnzxslUMZAm4XO5tLy4fEZB2fDDTN+dCqzDUvtu4BKAJPqTWETbadlsvJkGu8 D+SE2TFTsp87i5mSN++6nP+eS51p+SeY/TDTIG50= From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com, echaudro@redhat.com Subject: [PATCH v4 bpf-next 10/13] bpf: test_run: add skb_shared_info pointer in bpf_test_finish signature Date: Fri, 2 Oct 2020 16:42:08 +0200 Message-Id: <25a087aee8345fd87244d2e860e6636db36af01a.1601648734.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: 
References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org introduce skb_shared_info pointer in bpf_test_finish signature in order to copy back paged data from a xdp multi-buff frame to userspace buffer Tested-by: Eelco Chaudron Signed-off-by: Lorenzo Bianconi --- net/bpf/test_run.c | 58 ++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 51 insertions(+), 7 deletions(-) diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index ec7286cd051b..7e33181f88ee 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -79,9 +79,23 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, return ret; } +static int bpf_test_get_buff_data_len(struct skb_shared_info *sinfo) +{ + int i, size = 0; + + if (likely(!sinfo)) + return 0; + + for (i = 0; i < sinfo->nr_frags; i++) + size += skb_frag_size(&sinfo->frags[i]); + + return size; +} + static int bpf_test_finish(const union bpf_attr *kattr, union bpf_attr __user *uattr, const void *data, - u32 size, u32 retval, u32 duration) + struct skb_shared_info *sinfo, u32 size, + u32 retval, u32 duration) { void __user *data_out = u64_to_user_ptr(kattr->test.data_out); int err = -EFAULT; @@ -96,8 +110,35 @@ static int bpf_test_finish(const union bpf_attr *kattr, err = -ENOSPC; } - if (data_out && copy_to_user(data_out, data, copy_size)) - goto out; + if (data_out) { + int len = copy_size - bpf_test_get_buff_data_len(sinfo); + + if (copy_to_user(data_out, data, len)) + goto out; + + if (sinfo) { + int i, offset = len, data_len; + + for (i = 0; i < sinfo->nr_frags; i++) { + skb_frag_t *frag = &sinfo->frags[i]; + + if (offset >= copy_size) { + err = -ENOSPC; + break; + } + + data_len = min_t(int, copy_size - offset, + skb_frag_size(frag)); + if (copy_to_user(data_out + offset, + skb_frag_address(frag), + data_len)) + goto out; + + offset += data_len; + } + } + } + if (copy_to_user(&uattr->test.data_size_out, &size, sizeof(size))) goto out; if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval))) @@ -598,7 +639,8 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr, /* bpf program can never convert linear skb to non-linear */ if (WARN_ON_ONCE(skb_is_nonlinear(skb))) size = skb_headlen(skb); - ret = bpf_test_finish(kattr, uattr, skb->data, size, retval, duration); + ret = bpf_test_finish(kattr, uattr, skb->data, NULL, size, retval, + duration); if (!ret) ret = bpf_ctx_finish(kattr, uattr, ctx, sizeof(struct __sk_buff)); @@ -683,7 +725,9 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr, if (xdp.data != data + headroom || xdp.data_end != xdp.data + size) size += xdp.data_end - xdp.data - data_len; - ret = bpf_test_finish(kattr, uattr, xdp.data, size, retval, duration); + ret = bpf_test_finish(kattr, uattr, xdp.data, sinfo, size, retval, + duration); + out: bpf_prog_change_xdp(prog, NULL); for (i = 0; i < sinfo->nr_frags; i++) @@ -793,8 +837,8 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, do_div(time_spent, repeat); duration = time_spent > U32_MAX ? 
U32_MAX : (u32)time_spent; - ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys), - retval, duration); + ret = bpf_test_finish(kattr, uattr, &flow_keys, NULL, + sizeof(flow_keys), retval, duration); if (!ret) ret = bpf_ctx_finish(kattr, uattr, user_ctx, sizeof(struct bpf_flow_keys)); From patchwork Fri Oct 2 14:42:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 289093 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6421FC47423 for ; Fri, 2 Oct 2020 14:43:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 38EED207DE for ; Fri, 2 Oct 2020 14:43:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649794; bh=KqdOIEBWHwpdkF+8jfdovWiCtyDc6YI5ZYHE4ztCj5Y=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=GC0C9g5EQe4/IAqP6L+5UoDtQLikMbAukL8JDEtZLvgagQzEl/6aXlnFcVUru3qQ1 zBvDfeVPRoiQFk/WrvdGL/oBBI+GI9GaHhHI0FYztcHHjpZZnFlEvlPkMRF2nyj02G Bn92VrAQXtCVwGaJ7ZIW+v/1NLUjYwW0jCH3WK98= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388237AbgJBOnN (ORCPT ); Fri, 2 Oct 2020 10:43:13 -0400 Received: from mail.kernel.org ([198.145.29.99]:32850 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726017AbgJBOnL (ORCPT ); Fri, 2 Oct 2020 10:43:11 -0400 Received: from lore-desk.redhat.com (unknown [176.207.245.61]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 6E6A020708; Fri, 2 Oct 2020 14:43:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649790; bh=KqdOIEBWHwpdkF+8jfdovWiCtyDc6YI5ZYHE4ztCj5Y=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OIds9OuUrNBgUDU3FEVZXGANfCKgS+sV8u4HNGhg7wykOtvZXjS2LjV4SCKYI0W0H hMj4nS08yznLWsajDy98UlJurHftfa32x8J+RjAuRohG0L9A11cn9JkPaXZZozz3Vj CeeyQfGfg6C5fJz1KsT21oJvvc5x8LbnEgY6XXL4= From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com, echaudro@redhat.com Subject: [PATCH v4 bpf-next 11/13] bpf: add xdp multi-buffer selftest Date: Fri, 2 Oct 2020 16:42:09 +0200 Message-Id: X-Mailer: git-send-email 2.26.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Introduce xdp multi-buffer selftest for the following ebpf helpers: - bpf_xdp_get_frags_total_size - bpf_xdp_get_frags_count Co-developed-by: Eelco Chaudron Signed-off-by: Eelco Chaudron Signed-off-by: Lorenzo Bianconi --- .../testing/selftests/bpf/prog_tests/xdp_mb.c | 79 +++++++++++++++++++ .../selftests/bpf/progs/test_xdp_multi_buff.c | 24 ++++++ 2 files changed, 103 insertions(+) create mode 100644 
tools/testing/selftests/bpf/prog_tests/xdp_mb.c create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_mb.c b/tools/testing/selftests/bpf/prog_tests/xdp_mb.c new file mode 100644 index 000000000000..4b1aca2d31e5 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/xdp_mb.c @@ -0,0 +1,79 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include + +#include "test_xdp_multi_buff.skel.h" + +static void test_xdp_mb_check_len(void) +{ + int test_sizes[] = { 128, 4096, 9000 }; + struct test_xdp_multi_buff *pkt_skel; + __u8 *pkt_in = NULL, *pkt_out = NULL; + __u32 duration = 0, retval, size; + int err, pkt_fd, i; + + /* Load XDP program */ + pkt_skel = test_xdp_multi_buff__open_and_load(); + if (CHECK(!pkt_skel, "pkt_skel_load", "test_xdp_mb skeleton failed\n")) + goto out; + + /* Allocate resources */ + pkt_out = malloc(test_sizes[ARRAY_SIZE(test_sizes) - 1]); + if (CHECK(!pkt_out, "malloc", "Failed pkt_out malloc\n")) + goto out; + + pkt_in = malloc(test_sizes[ARRAY_SIZE(test_sizes) - 1]); + if (CHECK(!pkt_in, "malloc", "Failed pkt_in malloc\n")) + goto out; + + pkt_fd = bpf_program__fd(pkt_skel->progs._xdp_check_mb_len); + if (pkt_fd < 0) + goto out; + + /* Run test for specific set of packets */ + for (i = 0; i < ARRAY_SIZE(test_sizes); i++) { + int frags_count; + + /* Run test program */ + err = bpf_prog_test_run(pkt_fd, 1, pkt_in, test_sizes[i], + pkt_out, &size, &retval, &duration); + + if (CHECK(err || retval != XDP_PASS || size != test_sizes[i], + "test_run", "err %d errno %d retval %d size %d[%d]\n", + err, errno, retval, size, test_sizes[i])) + goto out; + + /* Verify test results */ + frags_count = DIV_ROUND_UP( + test_sizes[i] - pkt_skel->data->test_result_xdp_len, + getpagesize()); + + if (CHECK(pkt_skel->data->test_result_frags_count != frags_count, + "result", "frags_count = %llu != %u\n", + pkt_skel->data->test_result_frags_count, frags_count)) + goto out; + + if (CHECK(pkt_skel->data->test_result_frags_len != test_sizes[i] - + pkt_skel->data->test_result_xdp_len, + "result", "frags_len = %llu != %llu\n", + pkt_skel->data->test_result_frags_len, + test_sizes[i] - pkt_skel->data->test_result_xdp_len)) + goto out; + } +out: + if (pkt_out) + free(pkt_out); + if (pkt_in) + free(pkt_in); + + test_xdp_multi_buff__destroy(pkt_skel); +} + +void test_xdp_mb(void) +{ + if (test__start_subtest("xdp_mb_check_len_frags")) + test_xdp_mb_check_len(); +} diff --git a/tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c b/tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c new file mode 100644 index 000000000000..b7527829a3ed --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include + +__u64 test_result_frags_count = UINT64_MAX; +__u64 test_result_frags_len = UINT64_MAX; +__u64 test_result_xdp_len = UINT64_MAX; + +SEC("xdp_check_mb_len") +int _xdp_check_mb_len(struct xdp_md *xdp) +{ + void *data_end = (void *)(long)xdp->data_end; + void *data = (void *)(long)xdp->data; + + test_result_xdp_len = (__u64)(data_end - data); + test_result_frags_len = bpf_xdp_get_frags_total_size(xdp); + test_result_frags_count = bpf_xdp_get_frags_count(xdp); + return XDP_PASS; +} + +char _license[] SEC("license") = "GPL"; From patchwork Fri Oct 2 14:42:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi 
X-Patchwork-Id: 267698 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 430F9C47423 for ; Fri, 2 Oct 2020 14:43:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 158A420795 for ; Fri, 2 Oct 2020 14:43:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649799; bh=xk9bFAyppCfcxz8mjaSlx2KdcKBpE7HdmJ6obgHpAUA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=VjBImKRrDyIF38QdDLaq7yIGmTKBsXYdYm1o+5Yt6ru/LlLmt5KbxAd+tIK3GMDyT SuiafIJ5n3CefZFhtVG4ERCzbAcI9ImPMCfM+4wOX8HdqHv2l7CElpOH+tDaXebGjc vh9f2zlcly5G3CtVTfauk5Rb95hqBdgU7ADPoTfc= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388244AbgJBOnQ (ORCPT ); Fri, 2 Oct 2020 10:43:16 -0400 Received: from mail.kernel.org ([198.145.29.99]:32896 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388238AbgJBOnQ (ORCPT ); Fri, 2 Oct 2020 10:43:16 -0400 Received: from lore-desk.redhat.com (unknown [176.207.245.61]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 7A20620719; Fri, 2 Oct 2020 14:43:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649795; bh=xk9bFAyppCfcxz8mjaSlx2KdcKBpE7HdmJ6obgHpAUA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lvQCA4gS8s2oaHbxKETf8NhsPmapw6Z4otKapTi7CTRX7Fadf429N8dkj+QWUADDH 7ruVDrh8QlRHNVayvhc1sww3FNbiEHvyqF1gRXH/OVq3ZOEK7sxKVdO6dWLXYyGxvv tAHgyhnKTEI0iueMbuQi6yw8hmXf1sF+Y21F97hk= From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com, echaudro@redhat.com Subject: [PATCH v4 bpf-next 12/13] net: mvneta: enable jumbo frames for XDP Date: Fri, 2 Oct 2020 16:42:10 +0200 Message-Id: <48dbbc156eaaadfcc341f76991ff126cc5ed4b56.1601648734.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Enable the capability to receive jumbo frames even if the interface is running in XDP mode Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/marvell/mvneta.c | 10 ---------- 1 file changed, 10 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index f709650974ea..e3352ed13ea8 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -3743,11 +3743,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu) mtu = ALIGN(MVNETA_RX_PKT_SIZE(mtu), 8); } - if (pp->xdp_prog && mtu > MVNETA_MAX_RX_BUF_SIZE) { - netdev_info(dev, "Illegal MTU value %d for XDP mode\n", mtu); - return -EINVAL; - } - dev->mtu = mtu; if (!netif_running(dev)) { @@ -4445,11 +4440,6 @@ static int mvneta_xdp_setup(struct net_device *dev, 
struct bpf_prog *prog, struct mvneta_port *pp = netdev_priv(dev); struct bpf_prog *old_prog; - if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) { - NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP"); - return -EOPNOTSUPP; - } - if (pp->bm_priv) { NL_SET_ERR_MSG_MOD(extack, "Hardware Buffer Management not supported on XDP"); From patchwork Fri Oct 2 14:42:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 289092 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88210C47423 for ; Fri, 2 Oct 2020 14:43:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 588E220708 for ; Fri, 2 Oct 2020 14:43:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649801; bh=X2lmksPI9nQGFdNp2KO6XfRfikz0sJW9YQmr7+W2m0s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=gB5pNlm2TB9BwH4s2Cw6CX+OovxLzcbzienR7aATsEjSYuJFIaWqbzZ8D7HCj2TFx BdEDb/dmcvh41ZfxX3LqlOW8Kjz2BaVvJNyJcQLGb3twm5vMaliDcvaIH0C/Q3umth et7yQGY3Cma3vxR4KQIvWqt28J/pKJ8CP/xRUEJw= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388248AbgJBOnU (ORCPT ); Fri, 2 Oct 2020 10:43:20 -0400 Received: from mail.kernel.org ([198.145.29.99]:32926 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388016AbgJBOnT (ORCPT ); Fri, 2 Oct 2020 10:43:19 -0400 Received: from lore-desk.redhat.com (unknown [176.207.245.61]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id C65532074B; Fri, 2 Oct 2020 14:43:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601649799; bh=X2lmksPI9nQGFdNp2KO6XfRfikz0sJW9YQmr7+W2m0s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pEQUqqJuqx7p7XIZ6+r+zOGFkrQuivzVp4kcEpDVq9NW+j/RKCkpJyH8rWvW8GVg1 az4qE3/3OHytiT0gCY/RF6RhTPujZBKLkNGnR/hF5q9HqT9uEfJNzUu6KC8mZCpeXu 7FQ9G1gvJU8t48BtOU9nw38RG5dx5nnTU6akzp10= From: Lorenzo Bianconi To: bpf@vger.kernel.org, netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net, shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com, lorenzo.bianconi@redhat.com, echaudro@redhat.com Subject: [PATCH v4 bpf-next 13/13] bpf: cpumap: introduce xdp multi-buff support Date: Fri, 2 Oct 2020 16:42:11 +0200 Message-Id: <94ff09e2b8bf01eade7ae73d3cb0984423d3abe9.1601648734.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Introduce __xdp_build_skb_from_frame and xdp_build_skb_from_frame utility routines to build the skb from xdp_frame. 
Add xdp multi-buff support to cpumap Signed-off-by: Lorenzo Bianconi --- include/net/xdp.h | 5 ++++ kernel/bpf/cpumap.c | 45 +------------------------------ net/core/xdp.c | 64 +++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 70 insertions(+), 44 deletions(-) diff --git a/include/net/xdp.h b/include/net/xdp.h index 4d47076546ff..8d9224ef75ee 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -134,6 +134,11 @@ void xdp_warn(const char *msg, const char *func, const int line); #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__) struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp); +struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, + struct sk_buff *skb, + struct net_device *dev); +struct sk_buff *xdp_build_skb_from_frame(struct xdp_frame *xdpf, + struct net_device *dev); static inline void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp) diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index c61a23b564aa..fa07b4226836 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -155,49 +155,6 @@ static void cpu_map_kthread_stop(struct work_struct *work) kthread_stop(rcpu->kthread); } -static struct sk_buff *cpu_map_build_skb(struct xdp_frame *xdpf, - struct sk_buff *skb) -{ - unsigned int hard_start_headroom; - unsigned int frame_size; - void *pkt_data_start; - - /* Part of headroom was reserved to xdpf */ - hard_start_headroom = sizeof(struct xdp_frame) + xdpf->headroom; - - /* Memory size backing xdp_frame data already have reserved - * room for build_skb to place skb_shared_info in tailroom. - */ - frame_size = xdpf->frame_sz; - - pkt_data_start = xdpf->data - hard_start_headroom; - skb = build_skb_around(skb, pkt_data_start, frame_size); - if (unlikely(!skb)) - return NULL; - - skb_reserve(skb, hard_start_headroom); - __skb_put(skb, xdpf->len); - if (xdpf->metasize) - skb_metadata_set(skb, xdpf->metasize); - - /* Essential SKB info: protocol and skb->dev */ - skb->protocol = eth_type_trans(skb, xdpf->dev_rx); - - /* Optional SKB info, currently missing: - * - HW checksum info (skb->ip_summed) - * - HW RX hash (skb_set_hash) - * - RX ring dev queue index (skb_record_rx_queue) - */ - - /* Until page_pool get SKB return path, release DMA here */ - xdp_release_frame(xdpf); - - /* Allow SKB to reuse area used by xdp_frame */ - xdp_scrub_frame(xdpf); - - return skb; -} - static void __cpu_map_ring_cleanup(struct ptr_ring *ring) { /* The tear-down procedure should have made sure that queue is @@ -364,7 +321,7 @@ static int cpu_map_kthread_run(void *data) struct sk_buff *skb = skbs[i]; int ret; - skb = cpu_map_build_skb(xdpf, skb); + skb = __xdp_build_skb_from_frame(xdpf, skb, xdpf->dev_rx); if (!skb) { xdp_return_frame(xdpf); continue; diff --git a/net/core/xdp.c b/net/core/xdp.c index 6d4fd4dddb00..a6bdefed92e6 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -507,3 +507,67 @@ void xdp_warn(const char *msg, const char *func, const int line) WARN(1, "XDP_WARN: %s(line:%d): %s\n", func, line, msg); }; EXPORT_SYMBOL_GPL(xdp_warn); + +struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, + struct sk_buff *skb, + struct net_device *dev) +{ + struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf); + unsigned int headroom = sizeof(*xdpf) + xdpf->headroom; + int i, num_frags = xdpf->mb ? 
sinfo->nr_frags : 0; + void *hard_start = xdpf->data - headroom; + + skb = build_skb_around(skb, hard_start, xdpf->frame_sz); + if (unlikely(!skb)) + return NULL; + + skb_reserve(skb, headroom); + __skb_put(skb, xdpf->len); + if (xdpf->metasize) + skb_metadata_set(skb, xdpf->metasize); + + if (likely(!num_frags)) + goto out; + + for (i = 0; i < num_frags; i++) { + skb_frag_t *frag = &sinfo->frags[i]; + + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, + skb_frag_page(frag), skb_frag_off(frag), + skb_frag_size(frag), xdpf->frame_sz); + } + +out: + /* Essential SKB info: protocol and skb->dev */ + skb->protocol = eth_type_trans(skb, dev); + + /* Optional SKB info, currently missing: + * - HW checksum info (skb->ip_summed) + * - HW RX hash (skb_set_hash) + * - RX ring dev queue index (skb_record_rx_queue) + */ + + /* Until page_pool get SKB return path, release DMA here */ + xdp_release_frame(xdpf); + + /* Allow SKB to reuse area used by xdp_frame */ + xdp_scrub_frame(xdpf); + + return skb; +} +EXPORT_SYMBOL_GPL(__xdp_build_skb_from_frame); + +struct sk_buff *xdp_build_skb_from_frame(struct xdp_frame *xdpf, + struct net_device *dev) +{ + struct sk_buff *skb; + + skb = kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC); + if (unlikely(!skb)) + return NULL; + + memset(skb, 0, offsetof(struct sk_buff, tail)); + + return __xdp_build_skb_from_frame(xdpf, skb, dev); +} +EXPORT_SYMBOL_GPL(xdp_build_skb_from_frame);
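
For readers following the series, here is a minimal sketch of how a consumer outside cpumap could use the xdp_build_skb_from_frame() helper added in patch 13/13 to hand a (possibly multi-buffer) xdp_frame to the network stack. It is an illustrative sketch only, not part of the posted patches: example_deliver_xdp_frame() and its calling context are assumptions; only xdp_build_skb_from_frame(), xdp_return_frame() and netif_receive_skb() come from the series or from existing kernel code.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/xdp.h>

/* Hypothetical consumer: convert one xdp_frame into an skb and deliver
 * it to the stack. xdp_build_skb_from_frame() allocates the skb head and,
 * when xdpf->mb is set, attaches the fragments linked through the
 * shared_info area of the first buffer.
 */
static int example_deliver_xdp_frame(struct xdp_frame *xdpf,
				     struct net_device *dev)
{
	struct sk_buff *skb;

	skb = xdp_build_skb_from_frame(xdpf, dev);
	if (unlikely(!skb)) {
		/* skb head allocation failed; the frame is still owned by
		 * the caller, so hand it back to its memory model.
		 */
		xdp_return_frame(xdpf);
		return -ENOMEM;
	}

	/* On success the xdp_frame memory area is reused by the skb (the
	 * helper already called xdp_release_frame()/xdp_scrub_frame()),
	 * so the frame must not be returned here.
	 */
	netif_receive_skb(skb);
	return 0;
}

This mirrors the updated cpumap kthread path, which instead calls __xdp_build_skb_from_frame() with an skb head it bulk-allocated itself.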