From patchwork Thu Sep 3 20:58:45 2020
X-Patchwork-Id: 261561
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, davem@davemloft.net, lorenzo.bianconi@redhat.com, brouer@redhat.com, echaudro@redhat.com, sameehj@amazon.com, kuba@kernel.org, john.fastabend@gmail.com, daniel@iogearbox.net, ast@kernel.org, shayagr@amazon.com
Subject: [PATCH v2 net-next 1/9] xdp: introduce mb in xdp_buff/xdp_frame
Date: Thu, 3 Sep 2020 22:58:45 +0200
Message-Id: <1e8e82f72e46264b7a7a1ac704d24e163ebed100.1599165031.git.lorenzo@kernel.org>

Introduce a multi-buffer bit (mb) in xdp_frame/xdp_buff to specify
whether the shared_info area has been properly initialized for
non-linear xdp buffers.

Signed-off-by: Lorenzo Bianconi
---
 include/net/xdp.h | 8 ++++++--
 net/core/xdp.c    | 1 +
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 3814fb631d52..42f439f9fcda 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -72,7 +72,8 @@ struct xdp_buff {
 	void *data_hard_start;
 	struct xdp_rxq_info *rxq;
 	struct xdp_txq_info *txq;
-	u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
+	u32 frame_sz:31; /* frame size to deduce data_hard_end/reserved tailroom*/
+	u32 mb:1; /* xdp non-linear buffer */
 };
 
 /* Reserve memory area at end-of data area.
@@ -96,7 +97,8 @@ struct xdp_frame {
 	u16 len;
 	u16 headroom;
 	u32 metasize:8;
-	u32 frame_sz:24;
+	u32 frame_sz:23;
+	u32 mb:1; /* xdp non-linear frame */
 	/* Lifetime of xdp_rxq_info is limited to NAPI/enqueue time,
 	 * while mem info is valid on remote CPU.
 	 */
@@ -141,6 +143,7 @@ void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp)
 	xdp->data_end = frame->data + frame->len;
 	xdp->data_meta = frame->data - frame->metasize;
 	xdp->frame_sz = frame->frame_sz;
+	xdp->mb = frame->mb;
 }
 
 static inline
@@ -167,6 +170,7 @@ int xdp_update_frame_from_buff(struct xdp_buff *xdp,
 	xdp_frame->headroom = headroom - sizeof(*xdp_frame);
 	xdp_frame->metasize = metasize;
 	xdp_frame->frame_sz = xdp->frame_sz;
+	xdp_frame->mb = xdp->mb;
 
 	return 0;
 }
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 48aba933a5a8..884f140fc3be 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -454,6 +454,7 @@ struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp)
 	xdpf->headroom = 0;
 	xdpf->metasize = metasize;
 	xdpf->frame_sz = PAGE_SIZE;
+	xdpf->mb = xdp->mb;
 	xdpf->mem.type = MEM_TYPE_PAGE_ORDER0;
 
 	xsk_buff_free(xdp);
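The contract the rest of the series relies on is that the skb_shared_info area
is only valid when the mb bit is set. As an illustration of that contract, here
is a minimal sketch (not part of the patch; the function name is made up) of a
consumer that walks the fragments only after checking xdp->mb, using the
xdp_get_shared_info_from_buff() helper used elsewhere in this series:

/* Sketch only: total fragment bytes of an xdp_buff. The shared_info
 * area is touched only when xdp->mb is set, i.e. when the driver has
 * initialized it for a non-linear buffer.
 */
static inline unsigned int xdp_buff_frag_bytes(struct xdp_buff *xdp)
{
	struct skb_shared_info *sinfo;
	unsigned int i, size = 0;

	if (!xdp->mb)	/* linear buffer: shared_info may be uninitialized */
		return 0;

	sinfo = xdp_get_shared_info_from_buff(xdp);
	for (i = 0; i < sinfo->nr_frags; i++)
		size += skb_frag_size(&sinfo->frags[i]);

	return size;
}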
From patchwork Thu Sep 3 20:58:47 2020
X-Patchwork-Id: 261560
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, davem@davemloft.net, lorenzo.bianconi@redhat.com, brouer@redhat.com, echaudro@redhat.com, sameehj@amazon.com, kuba@kernel.org, john.fastabend@gmail.com, daniel@iogearbox.net, ast@kernel.org, shayagr@amazon.com
Subject: [PATCH v2 net-next 3/9] net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
Date: Thu, 3 Sep 2020 22:58:47 +0200
Message-Id: <25198d8424778abe9ee3fe25bba542143201b030.1599165031.git.lorenzo@kernel.org>

Update the multi-buffer bit (mb) in xdp_buff to notify the XDP/eBPF
layer and remote XDP drivers that this is a "non-linear" XDP buffer.
Access skb_shared_info only if the xdp_buff mb bit is set.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/marvell/mvneta.c | 37 +++++++++++++++------------
 1 file changed, 21 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 832bbb8b05c8..4f745a2b702a 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2027,11 +2027,11 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		     struct xdp_buff *xdp, int sync_len, bool napi)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
-	int i;
+	int i, num_frames = xdp->mb ? sinfo->nr_frags : 0;
 
 	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
 			   sync_len, napi);
-	for (i = 0; i < sinfo->nr_frags; i++)
+	for (i = 0; i < num_frames; i++)
 		page_pool_put_full_page(rxq->page_pool,
 					skb_frag_page(&sinfo->frags[i]), napi);
 }
@@ -2175,6 +2175,7 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 	len = xdp->data_end - xdp->data_hard_start - pp->rx_offset_correction;
 	data_len = xdp->data_end - xdp->data;
+
 	act = bpf_prog_run_xdp(prog, xdp);
 
 	/* Due xdp_adjust_tail: DMA sync for_device cover max len CPU touch */
@@ -2234,7 +2235,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	int data_len = -MVNETA_MH_SIZE, len;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
-	struct skb_shared_info *sinfo;
 
 	if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -2256,9 +2256,7 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	xdp->data = data + pp->rx_offset_correction + MVNETA_MH_SIZE;
 	xdp->data_end = xdp->data + data_len;
 	xdp_set_data_meta_invalid(xdp);
-
-	sinfo = xdp_get_shared_info_from_buff(xdp);
-	sinfo->nr_frags = 0;
+	xdp->mb = 0;
 
 	*size = rx_desc->data_size - len;
 	rx_desc->buf_phys_addr = 0;
@@ -2269,7 +2267,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
 			    struct mvneta_rx_queue *rxq,
 			    struct xdp_buff *xdp, int *size,
-			    struct page *page)
+			    int *nfrags, struct page *page)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
 	struct net_device *dev = pp->dev;
@@ -2288,13 +2286,18 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 				rx_desc->buf_phys_addr,
 				len, dma_dir);
 
-	if (data_len > 0 && sinfo->nr_frags < MAX_SKB_FRAGS) {
-		skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags];
+	if (data_len > 0 && *nfrags < MAX_SKB_FRAGS) {
+		skb_frag_t *frag = &sinfo->frags[*nfrags];
 
 		skb_frag_off_set(frag, pp->rx_offset_correction);
 		skb_frag_size_set(frag, data_len);
 		__skb_frag_set_page(frag, page);
-		sinfo->nr_frags++;
+		*nfrags = *nfrags + 1;
+
+		if (rx_desc->status & MVNETA_RXD_LAST_DESC) {
+			sinfo->nr_frags = *nfrags;
+			xdp->mb = true;
+		}
 
 		rx_desc->buf_phys_addr = 0;
 	}
@@ -2306,7 +2309,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
-	int i, num_frags = sinfo->nr_frags;
+	int i, num_frags = xdp->mb ? sinfo->nr_frags : 0;
 	skb_frag_t frags[MAX_SKB_FRAGS];
 	struct sk_buff *skb;
 
@@ -2341,13 +2344,14 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 {
 	int rx_proc = 0, rx_todo, refill, size = 0;
 	struct net_device *dev = pp->dev;
-	struct xdp_buff xdp_buf = {
-		.frame_sz = PAGE_SIZE,
-		.rxq = &rxq->xdp_rxq,
-	};
 	struct mvneta_stats ps = {};
 	struct bpf_prog *xdp_prog;
 	u32 desc_status, frame_sz;
+	struct xdp_buff xdp_buf;
+	int nfrags;
+
+	xdp_buf.frame_sz = PAGE_SIZE;
+	xdp_buf.rxq = &rxq->xdp_rxq;
 
 	/* Get number of received packets */
 	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
@@ -2379,6 +2383,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			size = rx_desc->data_size;
 			frame_sz = size - ETH_FCS_LEN;
 			desc_status = rx_desc->status;
+			nfrags = 0;
 
 			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, &size,
 					     page, &ps);
@@ -2387,7 +2392,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 				continue;
 
 			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
-						    &size, page);
+						    &size, &nfrags, page);
 		}
 
 		/* Middle or Last descriptor */
 		if (!(rx_status & MVNETA_RXD_LAST_DESC))
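The ordering in mvneta_swbm_add_rx_fragment() is the important part: fragments
are staged in the shared_info area as descriptors arrive, but nr_frags is only
published and the mb bit only set once the last descriptor is seen. A
condensed, driver-agnostic sketch of that pattern (the rx_add_frag() name and
its argument list are illustrative, not part of the patch):

/* Illustrative only: how an RX path is expected to maintain xdp->mb. */
static void rx_add_frag(struct xdp_buff *xdp, int *nfrags, struct page *page,
			unsigned int off, unsigned int len, bool last)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);

	if (*nfrags < MAX_SKB_FRAGS) {
		skb_frag_t *frag = &sinfo->frags[*nfrags];

		skb_frag_off_set(frag, off);
		skb_frag_size_set(frag, len);
		__skb_frag_set_page(frag, page);
		(*nfrags)++;
	}

	if (last) {
		/* publish the frag count only once the buffer is complete */
		sinfo->nr_frags = *nfrags;
		xdp->mb = true;
	}
}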
From patchwork Thu Sep 3 20:58:49 2020
X-Patchwork-Id: 261559
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, davem@davemloft.net, lorenzo.bianconi@redhat.com, brouer@redhat.com, echaudro@redhat.com, sameehj@amazon.com, kuba@kernel.org, john.fastabend@gmail.com, daniel@iogearbox.net, ast@kernel.org, shayagr@amazon.com
Subject: [PATCH v2 net-next 5/9] net: mvneta: add multi buffer support to XDP_TX
Date: Thu, 3 Sep 2020 22:58:49 +0200
Message-Id: <2a5b39dd780f9d3ef7ff060699beca57413c3761.1599165031.git.lorenzo@kernel.org>

Introduce the capability to map non-linear xdp buffers in
mvneta_xdp_submit_frame() for XDP_TX and XDP_REDIRECT.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/marvell/mvneta.c | 79 +++++++++++++++++----------
 1 file changed, 49 insertions(+), 30 deletions(-)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 4f745a2b702a..65fbed957e4f 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1854,8 +1854,8 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 			bytes_compl += buf->skb->len;
 			pkts_compl++;
 			dev_kfree_skb_any(buf->skb);
-		} else if (buf->type == MVNETA_TYPE_XDP_TX ||
-			   buf->type == MVNETA_TYPE_XDP_NDO) {
+		} else if ((buf->type == MVNETA_TYPE_XDP_TX ||
+			    buf->type == MVNETA_TYPE_XDP_NDO) && buf->xdpf) {
 			xdp_return_frame(buf->xdpf);
 		}
 	}
@@ -2040,43 +2040,62 @@ static int
 mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
 			struct xdp_frame *xdpf, bool dma_map)
 {
-	struct mvneta_tx_desc *tx_desc;
-	struct mvneta_tx_buf *buf;
-	dma_addr_t dma_addr;
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);
+	int i, num_frames = xdpf->mb ? sinfo->nr_frags + 1 : 1;
+	struct mvneta_tx_desc *tx_desc = NULL;
+	struct page *page;
 
-	if (txq->count >= txq->tx_stop_threshold)
+	if (txq->count + num_frames >= txq->tx_stop_threshold)
 		return MVNETA_XDP_DROPPED;
 
-	tx_desc = mvneta_txq_next_desc_get(txq);
+	for (i = 0; i < num_frames; i++) {
+		struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index];
+		skb_frag_t *frag = i ? &sinfo->frags[i - 1] : NULL;
+		int len = frag ? skb_frag_size(frag) : xdpf->len;
+		dma_addr_t dma_addr;
 
-	buf = &txq->buf[txq->txq_put_index];
-	if (dma_map) {
-		/* ndo_xdp_xmit */
-		dma_addr = dma_map_single(pp->dev->dev.parent, xdpf->data,
-					  xdpf->len, DMA_TO_DEVICE);
-		if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
-			mvneta_txq_desc_put(txq);
-			return MVNETA_XDP_DROPPED;
+		tx_desc = mvneta_txq_next_desc_get(txq);
+		if (dma_map) {
+			/* ndo_xdp_xmit */
+			void *data;
+
+			data = frag ? page_address(skb_frag_page(frag))
+				    : xdpf->data;
+			dma_addr = dma_map_single(pp->dev->dev.parent, data,
+						  len, DMA_TO_DEVICE);
+			if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
+				for (; i >= 0; i--)
+					mvneta_txq_desc_put(txq);
+				return MVNETA_XDP_DROPPED;
+			}
+			buf->type = MVNETA_TYPE_XDP_NDO;
+		} else {
+			page = frag ? skb_frag_page(frag)
+				    : virt_to_page(xdpf->data);
+			dma_addr = page_pool_get_dma_addr(page);
+			if (!frag)
+				dma_addr += sizeof(*xdpf) + xdpf->headroom;
+			dma_sync_single_for_device(pp->dev->dev.parent,
+						   dma_addr, len,
+						   DMA_BIDIRECTIONAL);
+			buf->type = MVNETA_TYPE_XDP_TX;
 		}
-		buf->type = MVNETA_TYPE_XDP_NDO;
-	} else {
-		struct page *page = virt_to_page(xdpf->data);
+		buf->xdpf = i ? NULL : xdpf;
 
-		dma_addr = page_pool_get_dma_addr(page) +
-			   sizeof(*xdpf) + xdpf->headroom;
-		dma_sync_single_for_device(pp->dev->dev.parent, dma_addr,
-					   xdpf->len, DMA_BIDIRECTIONAL);
-		buf->type = MVNETA_TYPE_XDP_TX;
+		if (!i)
+			tx_desc->command = MVNETA_TXD_F_DESC;
+		tx_desc->buf_phys_addr = dma_addr;
+		tx_desc->data_size = len;
+
+		mvneta_txq_inc_put(txq);
 	}
-	buf->xdpf = xdpf;
 
-	tx_desc->command = MVNETA_TXD_FLZ_DESC;
-	tx_desc->buf_phys_addr = dma_addr;
-	tx_desc->data_size = xdpf->len;
+	/*last descriptor */
+	if (tx_desc)
+		tx_desc->command |= MVNETA_TXD_L_DESC | MVNETA_TXD_Z_PAD;
 
-	mvneta_txq_inc_put(txq);
-	txq->pending++;
-	txq->count++;
+	txq->pending += num_frames;
+	txq->count += num_frames;
 
 	return MVNETA_XDP_TX;
 }
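Note that the descriptor budget is derived from the mb bit before any
descriptor is consumed, so the tx_stop_threshold check covers the whole frame,
and that only the first buffer keeps the xdp_frame pointer so completion frees
it exactly once. A small sketch of the budget calculation (the helper name is
illustrative, not part of the patch):

/* Sketch: TX descriptors needed by a possibly non-linear xdp_frame,
 * mirroring the num_frames computation in mvneta_xdp_submit_frame().
 */
static inline int xdp_frame_num_tx_descs(struct xdp_frame *xdpf)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);

	return xdpf->mb ? sinfo->nr_frags + 1 : 1;
}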
From patchwork Thu Sep 3 20:58:51 2020
X-Patchwork-Id: 261558
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, davem@davemloft.net, lorenzo.bianconi@redhat.com, brouer@redhat.com, echaudro@redhat.com, sameehj@amazon.com, kuba@kernel.org, john.fastabend@gmail.com, daniel@iogearbox.net, ast@kernel.org, shayagr@amazon.com
Subject: [PATCH v2 net-next 7/9] bpf: helpers: add multibuffer support
Date: Thu, 3 Sep 2020 22:58:51 +0200

From: Sameeh Jubran

The implementation is based on this [0] draft by Jesper D. Brouer.

Provide two new helpers:
* bpf_xdp_get_frag_count()
* bpf_xdp_get_frags_total_size()

[0] xdp mb design - https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org

Signed-off-by: Sameeh Jubran
Signed-off-by: Lorenzo Bianconi
---
 include/uapi/linux/bpf.h       | 14 ++++++++++++
 net/core/filter.c              | 39 ++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h | 14 ++++++++++++
 3 files changed, 67 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index c4a6d245619c..53db75095306 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3590,6 +3590,18 @@ union bpf_attr {
  *
  *	Return
  *		0 on success, or a negative error in case of failure.
+ *
+ * int bpf_xdp_get_frag_count(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the total number of frags for a given packet.
+ *	Return
+ *		The number of frags
+ *
+ * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the total size of frags for a given packet.
+ *	Return
+ *		The total size of frags for a given packet.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3742,6 +3754,8 @@ union bpf_attr {
 	FN(d_path),			\
 	FN(copy_from_user),		\
 	FN(xdp_adjust_mb_header),	\
+	FN(xdp_get_frag_count),		\
+	FN(xdp_get_frags_total_size),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/net/core/filter.c b/net/core/filter.c
index ae6b10cf062d..ba058fc16440 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3526,6 +3526,41 @@ static const struct bpf_func_proto bpf_xdp_adjust_mb_header_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_1(bpf_xdp_get_frag_count, struct xdp_buff*, xdp)
+{
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+
+	return xdp->mb ? sinfo->nr_frags : 0;
+}
+
+const struct bpf_func_proto bpf_xdp_get_frag_count_proto = {
+	.func		= bpf_xdp_get_frag_count,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
+BPF_CALL_1(bpf_xdp_get_frags_total_size, struct xdp_buff*, xdp)
+{
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	int nfrags, i;
+	int size = 0;
+
+	nfrags = xdp->mb ? sinfo->nr_frags : 0;
+
+	for (i = 0; i < nfrags && i < MAX_SKB_FRAGS; i++)
+		size += skb_frag_size(&sinfo->frags[i]);
+
+	return size;
+}
+
+const struct bpf_func_proto bpf_xdp_get_frags_total_size_proto = {
+	.func		= bpf_xdp_get_frags_total_size,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
 BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset)
 {
 	void *data_hard_end = xdp_data_hard_end(xdp); /* use xdp->frame_sz */
@@ -6889,6 +6924,10 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_xdp_adjust_tail_proto;
 	case BPF_FUNC_xdp_adjust_mb_header:
 		return &bpf_xdp_adjust_mb_header_proto;
+	case BPF_FUNC_xdp_get_frag_count:
+		return &bpf_xdp_get_frag_count_proto;
+	case BPF_FUNC_xdp_get_frags_total_size:
+		return &bpf_xdp_get_frags_total_size_proto;
 	case BPF_FUNC_fib_lookup:
 		return &bpf_xdp_fib_lookup_proto;
 #ifdef CONFIG_INET
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 392d52a2ecef..dd4669096cbb 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3591,6 +3591,18 @@ union bpf_attr {
  *
  *	Return
  *		0 on success, or a negative error in case of failure.
+ *
+ * int bpf_xdp_get_frag_count(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the total number of frags for a given packet.
+ *	Return
+ *		The number of frags
+ *
+ * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md)
+ *	Description
+ *		Get the total size of frags for a given packet.
+ *	Return
+ *		The total size of frags for a given packet.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3743,6 +3755,8 @@ union bpf_attr {
 	FN(d_path),			\
 	FN(copy_from_user),		\
 	FN(xdp_adjust_mb_header),	\
+	FN(xdp_get_frag_count),		\
+	FN(xdp_get_frags_total_size),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
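For reference, a sketch of how an XDP program could consume the two helpers.
This assumes bpf_helper_defs.h has been regenerated from the UAPI comments
above so that the declarations exist; the helpers are specific to this series,
and the 3000-byte threshold is an arbitrary example value:

// SPDX-License-Identifier: GPL-2.0
/* Sketch: drop non-linear packets whose total length exceeds a threshold. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_mb_len_filter(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	int frags = bpf_xdp_get_frag_count(ctx);
	int total = (data_end - data) + bpf_xdp_get_frags_total_size(ctx);

	if (frags && total > 3000)	/* arbitrary cutoff for illustration */
		return XDP_DROP;

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";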
From patchwork Thu Sep 3 20:58:53 2020
X-Patchwork-Id: 261557
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, davem@davemloft.net, lorenzo.bianconi@redhat.com, brouer@redhat.com, echaudro@redhat.com, sameehj@amazon.com, kuba@kernel.org, john.fastabend@gmail.com, daniel@iogearbox.net, ast@kernel.org, shayagr@amazon.com
Subject: [PATCH v2 net-next 9/9] net: mvneta: enable jumbo frames for XDP
Date: Thu, 3 Sep 2020 22:58:53 +0200

Enable the capability to receive jumbo frames even when the interface is
running in XDP mode.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/marvell/mvneta.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 65fbed957e4f..85853fefcfd1 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -3737,11 +3737,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		mtu = ALIGN(MVNETA_RX_PKT_SIZE(mtu), 8);
 	}
 
-	if (pp->xdp_prog && mtu > MVNETA_MAX_RX_BUF_SIZE) {
-		netdev_info(dev, "Illegal MTU value %d for XDP mode\n", mtu);
-		return -EINVAL;
-	}
-
 	dev->mtu = mtu;
 
 	if (!netif_running(dev)) {
@@ -4439,11 +4434,6 @@ static int mvneta_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
 	struct mvneta_port *pp = netdev_priv(dev);
 	struct bpf_prog *old_prog;
 
-	if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) {
-		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
-		return -EOPNOTSUPP;
-	}
-
 	if (pp->bm_priv) {
 		NL_SET_ERR_MSG_MOD(extack,
 				   "Hardware Buffer Management not supported on XDP");