From patchwork Thu May 13 16:58:42 2021
From: Matteo Croce
To: netdev@vger.kernel.org, linux-mm@kvack.org
Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari, "David S.
 Miller", Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas, Russell King,
 Mirko Lindner, Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
 Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
 Boris Pismenny, Arnd Bergmann, Andrew Morton, "Peter Zijlstra (Intel)",
 Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu, Roman Gushchin,
 Hugh Dickins, Peter Xu, Jason Gunthorpe, Jonathan Lemon,
 Alexander Lobakin, Cong Wang, wenxu, Kevin Hao, Jakub Sitnicki,
 Marco Elver, Willem de Bruijn, Miaohe Lin, Yunsheng Lin,
 Guillaume Nault, linux-kernel@vger.kernel.org,
 linux-rdma@vger.kernel.org, bpf@vger.kernel.org, Matthew Wilcox,
 Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
 Andrew Lunn, Paolo Abeni, Sven Auhagen
Subject: [PATCH net-next v5 1/5] mm: add a signature in struct page
Date: Thu, 13 May 2021 18:58:42 +0200
Message-Id: <20210513165846.23722-2-mcroce@linux.microsoft.com>
In-Reply-To: <20210513165846.23722-1-mcroce@linux.microsoft.com>

From: Matteo Croce

This is needed by the page_pool to avoid recycling a page not allocated
via page_pool. The page->signature field is aliased to page->lru.next and
page->compound_head, but it can't be set by mistake because the signature
value is a bad pointer, and can't trigger a false positive in PageTail()
because the last bit is 0.

Co-developed-by: Matthew Wilcox (Oracle)
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Matteo Croce
---
 include/linux/mm.h       | 12 +++++++-----
 include/linux/mm_types.h | 12 ++++++++++++
 include/net/page_pool.h  |  2 ++
 net/core/page_pool.c     |  4 ++++
 4 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 322ec61d0da7..48268d2d0282 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1668,10 +1668,12 @@ struct address_space *page_mapping(struct page *page);
 static inline bool page_is_pfmemalloc(const struct page *page)
 {
 	/*
-	 * Page index cannot be this large so this must be
-	 * a pfmemalloc page.
+	 * This is not a tail page; compound_head of a head page is unused
+	 * at return from the page allocator, and will be overwritten
+	 * by callers who do not care whether the page came from the
+	 * reserves.
 	 */
-	return page->index == -1UL;
+	return page->compound_head & 2;
 }
 
 /*
@@ -1680,12 +1682,12 @@ static inline bool page_is_pfmemalloc(const struct page *page)
  */
 static inline void set_page_pfmemalloc(struct page *page)
 {
-	page->index = -1UL;
+	page->compound_head = 2;
 }
 
 static inline void clear_page_pfmemalloc(struct page *page)
 {
-	page->index = 0;
+	page->compound_head = 0;
 }
 
 /*
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5aacc1c10a45..44cf328e94e2 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -96,6 +96,18 @@ struct page {
 			unsigned long private;
 		};
 		struct {	/* page_pool used by netstack */
+			/**
+			 * @pp_magic: magic value to avoid recycling non
+			 * page_pool allocated pages.
+			 * It aliases with page->lru.next
+			 */
+			unsigned long pp_magic;
+			/**
+			 * @pp: pointer to page_pool.
+			 * It aliases with page->lru.prev
+			 */
+			struct page_pool *pp;
+			unsigned long _pp_mapping_pad;
 			/**
 			 * @dma_addr: might require a 64-bit value on
 			 * 32-bit architectures.
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b4b6de909c93..24b3d42c62c0 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -63,6 +63,8 @@
  */
 #define PP_ALLOC_CACHE_SIZE	128
 #define PP_ALLOC_CACHE_REFILL	64
+#define PP_SIGNATURE		(POISON_POINTER_DELTA + 0x40)
+
 struct pp_alloc_cache {
 	u32 count;
 	struct page *cache[PP_ALLOC_CACHE_SIZE];
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3c4c4c7a0402..9de5d8c08c17 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -221,6 +221,8 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 		return NULL;
 	}
 
+	page->pp_magic = PP_SIGNATURE;
+
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
 	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
@@ -341,6 +343,8 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 			     DMA_ATTR_SKIP_CPU_SYNC);
 	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
+	page->pp_magic = 0;
+
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
 	 */
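
[Editorial illustration, not part of the posted series: a stand-alone
sketch of why the signature above is safe to alias with compound_head.
The struct is a simplified stand-in, and the poison value is x86-64's
illegal-pointer constant; in the kernel it is architecture-dependent.]

/* Stand-alone demo: PP_SIGNATURE is a bad pointer with bit 0 clear,
 * so it can never look like a PageTail() marker, and a non-pool page
 * will never carry it by accident.
 */
#include <stdbool.h>
#include <stdio.h>

#define POISON_POINTER_DELTA 0xdead000000000000UL	/* assumed, arch-specific */
#define PP_SIGNATURE (POISON_POINTER_DELTA + 0x40)

struct demo_page {
	unsigned long compound_head;	/* aliases lru.next / pp_magic */
};

/* PageTail() only tests bit 0, which PP_SIGNATURE keeps clear */
static bool demo_page_tail(const struct demo_page *p)
{
	return p->compound_head & 1;
}

static bool demo_from_page_pool(const struct demo_page *p)
{
	return p->compound_head == PP_SIGNATURE;
}

int main(void)
{
	struct demo_page pool = { .compound_head = PP_SIGNATURE };
	struct demo_page plain = { .compound_head = 0 };

	printf("pool:  tail=%d from_pool=%d\n",
	       demo_page_tail(&pool), demo_from_page_pool(&pool));
	printf("plain: tail=%d from_pool=%d\n",
	       demo_page_tail(&plain), demo_from_page_pool(&plain));
	return 0;
}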
From patchwork Thu May 13 16:58:43 2021
From: Matteo Croce
To: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v5 2/5] skbuff: add a parameter to __skb_frag_unref
Date: Thu, 13 May 2021 18:58:43 +0200
Message-Id: <20210513165846.23722-3-mcroce@linux.microsoft.com>
In-Reply-To: <20210513165846.23722-1-mcroce@linux.microsoft.com>

From: Matteo Croce

This is a prerequisite patch; the next one enables recycling of skbs and
fragments. Add an extra argument to __skb_frag_unref() to handle
recycling, and update the current users of the function accordingly.

Signed-off-by: Matteo Croce
---
 drivers/net/ethernet/marvell/sky2.c        | 2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 +-
 include/linux/skbuff.h                     | 8 +++++---
 net/core/skbuff.c                          | 4 ++--
 net/tls/tls_device.c                       | 2 +-
 5 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
index 222c32367b2c..aa0cde1dc5c0 100644
--- a/drivers/net/ethernet/marvell/sky2.c
+++ b/drivers/net/ethernet/marvell/sky2.c
@@ -2503,7 +2503,7 @@ static void skb_put_frags(struct sk_buff *skb, unsigned int hdr_space,
 
 		if (length == 0) {
 			/* don't need this page */
-			__skb_frag_unref(frag);
+			__skb_frag_unref(frag, false);
 			--skb_shinfo(skb)->nr_frags;
 		} else {
 			size = min(length, (unsigned) PAGE_SIZE);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index e35e4d7ef4d1..cea62b8f554c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -526,7 +526,7 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
 fail:
 	while (nr > 0) {
 		nr--;
-		__skb_frag_unref(skb_shinfo(skb)->frags + nr);
+		__skb_frag_unref(skb_shinfo(skb)->frags + nr, false);
 	}
 	return 0;
 }
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index dbf820a50a39..7fcfea7e7b21 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3081,10 +3081,12 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
 /**
  * __skb_frag_unref - release a reference on a paged fragment.
  * @frag: the paged fragment
+ * @recycle: recycle the page if allocated via page_pool
  *
- * Releases a reference on the paged fragment @frag.
+ * Releases a reference on the paged fragment @frag
+ * or recycles the page via the page_pool API.
  */
-static inline void __skb_frag_unref(skb_frag_t *frag)
+static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
 	put_page(skb_frag_page(frag));
 }
@@ -3098,7 +3100,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f]);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
 }
 
 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3ad22870298c..12b7e90dd2b5 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -664,7 +664,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i]);
+		__skb_frag_unref(&shinfo->frags[i], false);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -3495,7 +3495,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom);
+		__skb_frag_unref(fragfrom, false);
 	}
 
 	/* Reposition in the original skb */
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 76a6f8c2eec4..ad11db2c4f63 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -127,7 +127,7 @@ static void destroy_record(struct tls_record_info *record)
 	int i;
 
 	for (i = 0; i < record->num_frags; i++)
-		__skb_frag_unref(&record->frags[i]);
+		__skb_frag_unref(&record->frags[i], false);
 	kfree(record);
 }
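
[Editorial illustration, not part of the posted series: for callers the
change is mechanical. A hypothetical helper (name invented) shows the
calling convention; passing false preserves the historical put_page()
behaviour until the next patch wires up recycling.]

/* Hypothetical caller, for illustration only: release every fragment
 * of an skb the way the converted call sites in this patch do,
 * with recycle=false meaning a plain reference drop.
 */
static void demo_unref_all_frags(struct sk_buff *skb)
{
	int i;

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
		__skb_frag_unref(&skb_shinfo(skb)->frags[i], false);
	skb_shinfo(skb)->nr_frags = 0;
}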
From patchwork Thu May 13 16:58:44 2021
From: Matteo Croce
To: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
Date: Thu, 13 May 2021 18:58:44 +0200
Message-Id: <20210513165846.23722-4-mcroce@linux.microsoft.com>
In-Reply-To: <20210513165846.23722-1-mcroce@linux.microsoft.com>

From: Ilias Apalodimas

Up to now several high speed NICs have custom mechanisms for recycling
the allocated memory they use for their payloads. Our page_pool API
already has recycling capabilities that are always used when we are
running in 'XDP mode'. So let's tweak the API and the kernel network
stack slightly and allow the recycling to happen even during standard
operation.

The API doesn't take into account 'split page' policies used by those
drivers currently, but can be extended once we have users for that.

The idea is to be able to intercept the packet on skb_release_data().
If it's a buffer coming from our page_pool API, recycle it back to the
pool for further usage; otherwise release the packet entirely.

To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
a field in struct page (page->pp) to store the page_pool pointer.
Storing the information in page->pp allows us to recycle both SKBs and
their fragments.

The SKB bit is needed for a couple of reasons. First of all, in an
effort to affect the free path as little as possible, reading a single
bit is better than trying to derive identical information from the data
stored in the page. We do have a special mark in the page that prevents
false positives, but deciding without having to read the entire page is
still preferable.

The driver has to take care of the sync operations on its own during
the buffer recycling, since the buffer is, after opting in to the
recycling, never unmapped.

Since the gain on the drivers depends on the architecture, we are not
enabling recycling by default if the page_pool API is used by a driver.
In order to enable recycling the driver must call skb_mark_for_recycle()
to store the information we need for recycling in page->pp and set the
recycling bit, or page_pool_store_mem_info() for a fragment.
Co-developed-by: Jesper Dangaard Brouer
Signed-off-by: Jesper Dangaard Brouer
Co-developed-by: Matteo Croce
Signed-off-by: Matteo Croce
Signed-off-by: Ilias Apalodimas
---
 include/linux/skbuff.h  | 28 +++++++++++++++++++++++++---
 include/net/page_pool.h |  9 +++++++++
 net/core/page_pool.c    | 23 +++++++++++++++++++++++
 net/core/skbuff.c       | 25 +++++++++++++++++++++----
 4 files changed, 78 insertions(+), 7 deletions(-)

--
2.31.1

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7fcfea7e7b21..057b40ad29bd 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -40,6 +40,9 @@
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include <linux/netfilter/nf_conntrack_common.h>
 #endif
+#ifdef CONFIG_PAGE_POOL
+#include <net/page_pool.h>
+#endif
 
 /* The interface for checksum offload between the stack and networking drivers
  * is as follows...
@@ -667,6 +670,8 @@ typedef unsigned char *sk_buff_data_t;
  *	@head_frag: skb was allocated from page fragments,
  *		not allocated by kmalloc() or vmalloc().
  *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
+ *	@pp_recycle: mark the packet for recycling instead of freeing (implies
+ *		page_pool support on driver)
  *	@active_extensions: active extensions (skb_ext_id types)
  *	@ndisc_nodetype: router type (from link layer)
  *	@ooo_okay: allow the mapping of a socket to a queue to be changed
@@ -791,10 +796,12 @@ struct sk_buff {
 				fclone:2,
 				peeked:1,
 				head_frag:1,
-				pfmemalloc:1;
+				pfmemalloc:1,
+				pp_recycle:1; /* page_pool recycle indicator */
 #ifdef CONFIG_SKB_EXTENSIONS
 	__u8			active_extensions;
 #endif
+
 	/* fields enclosed in headers_start/headers_end are copied
 	 * using a single memcpy() in __copy_skb_header()
 	 */
@@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
  */
 static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
-	put_page(skb_frag_page(frag));
+	struct page *page = skb_frag_page(frag);
+
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && page_pool_return_skb_page(page_address(page)))
+		return;
+#endif
+	put_page(page);
 }
 
 /**
@@ -3100,7 +3113,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
 }
 
 /**
@@ -4699,5 +4712,14 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
 #endif
 }
 
+#ifdef CONFIG_PAGE_POOL
+static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
+					struct page_pool *pp)
+{
+	skb->pp_recycle = 1;
+	page_pool_store_mem_info(page, pp);
+}
+#endif
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_SKBUFF_H */
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 24b3d42c62c0..ce75abeddb29 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -148,6 +148,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
+bool page_pool_return_skb_page(void *data);
+
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 #ifdef CONFIG_PAGE_POOL
@@ -253,4 +255,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
 		spin_unlock_bh(&pool->ring.producer_lock);
 }
 
+/* Store mem_info on struct page and use it while recycling skb frags */
+static inline
+void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
+{
+	page->pp = pp;
+}
+
 #endif /* _NET_PAGE_POOL_H */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 9de5d8c08c17..fa9f17db7c48 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -626,3 +626,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+bool page_pool_return_skb_page(void *data)
+{
+	struct page_pool *pp;
+	struct page *page;
+
+	page = virt_to_head_page(data);
+	if (unlikely(page->pp_magic != PP_SIGNATURE))
+		return false;
+
+	pp = (struct page_pool *)page->pp;
+
+	/* Driver set this to memory recycling info. Reset it on recycle.
+	 * This will *not* work for NIC using a split-page memory model.
+	 * The page will be returned to the pool here regardless of the
+	 * 'flipped' fragment being in use or not.
+	 */
+	page->pp = NULL;
+	page_pool_put_full_page(pp, virt_to_head_page(data), false);
+
+	return true;
+}
+EXPORT_SYMBOL(page_pool_return_skb_page);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 12b7e90dd2b5..9581af44d587 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -70,6 +70,9 @@
 #include <net/xfrm.h>
 #include <net/mpls.h>
 #include <net/mptcp.h>
+#ifdef CONFIG_PAGE_POOL
+#include <net/page_pool.h>
+#endif
 
 #include <linux/uaccess.h>
 #include <trace/events/skb.h>
@@ -645,10 +648,15 @@ static void skb_free_head(struct sk_buff *skb)
 {
 	unsigned char *head = skb->head;
 
-	if (skb->head_frag)
+	if (skb->head_frag) {
+#ifdef CONFIG_PAGE_POOL
+		if (skb->pp_recycle && page_pool_return_skb_page(head))
+			return;
+#endif
 		skb_free_frag(head);
-	else
+	} else {
 		kfree(head);
+	}
 }
 
 static void skb_release_data(struct sk_buff *skb)
@@ -664,7 +672,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i], false);
+		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -1046,6 +1054,7 @@ static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
 	n->nohdr = 0;
 	n->peeked = 0;
 	C(pfmemalloc);
+	C(pp_recycle);
 	n->destructor = NULL;
 	C(tail);
 	C(end);
@@ -1725,6 +1734,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
 	skb->cloned   = 0;
 	skb->hdr_len  = 0;
 	skb->nohdr    = 0;
+	skb->pp_recycle = 0;
 	atomic_set(&skb_shinfo(skb)->dataref, 1);
 
 	skb_metadata_clear(skb);
@@ -3495,7 +3505,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom, false);
+		__skb_frag_unref(fragfrom, skb->pp_recycle);
 	}
 
 	/* Reposition in the original skb */
@@ -5285,6 +5295,13 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	if (skb_cloned(to))
 		return false;
 
+	/* We can't coalesce skb that are allocated from slab and page_pool
+	 * The recycle mark is on the skb, so that might end up trying to
+	 * recycle slab allocated skb->head
+	 */
+	if (to->pp_recycle != from->pp_recycle)
+		return false;
+
 	if (len <= skb_tailroom(to)) {
 		if (len)
 			BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
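
[Editorial illustration, not part of the posted series: the free-path
decision this patch adds condenses to the following paraphrase of the
code above; the helper name is invented.]

/* Condensed paraphrase of the recycling decision: recycle only if the
 * skb opted in (pp_recycle) and the page still carries the page_pool
 * signature; otherwise fall back to the normal free path.
 */
static bool demo_try_recycle(struct sk_buff *skb, void *head)
{
	struct page *page = virt_to_head_page(head);
	struct page_pool *pp;

	if (!skb->pp_recycle)			/* driver never opted in */
		return false;
	if (page->pp_magic != PP_SIGNATURE)	/* not a page_pool page */
		return false;

	pp = page->pp;
	page->pp = NULL;	/* reset the driver-set recycling info */
	page_pool_put_full_page(pp, page, false);
	return true;
}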
From patchwork Thu May 13 16:58:45 2021
From: Matteo Croce
To: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v5 4/5] mvpp2: recycle buffers
Date: Thu, 13 May 2021 18:58:45 +0200
Message-Id: <20210513165846.23722-5-mcroce@linux.microsoft.com>
In-Reply-To: <20210513165846.23722-1-mcroce@linux.microsoft.com>

From: Matteo Croce

Use the new recycling API for page_pool. In a drop rate test, the
packet rate is more than doubled, from 962 Kpps to 2047 Kpps.
perf top on a stock system shows:

Overhead  Shared Object  Symbol
  30.67%  [kernel]       [k] page_pool_release_page
   8.37%  [kernel]       [k] get_page_from_freelist
   7.34%  [kernel]       [k] free_unref_page
   6.47%  [mvpp2]        [k] mvpp2_rx
   4.69%  [kernel]       [k] eth_type_trans
   4.55%  [kernel]       [k] __netif_receive_skb_core
   4.40%  [kernel]       [k] build_skb
   4.29%  [kernel]       [k] kmem_cache_free
   4.00%  [kernel]       [k] kmem_cache_alloc
   3.81%  [kernel]       [k] dev_gro_receive

With packet rate stable at 962 Kpps:

tx: 0 bps 0 pps rx: 477.4 Mbps 962.6 Kpps
tx: 0 bps 0 pps rx: 477.6 Mbps 962.8 Kpps
tx: 0 bps 0 pps rx: 477.6 Mbps 962.9 Kpps
tx: 0 bps 0 pps rx: 477.2 Mbps 962.1 Kpps
tx: 0 bps 0 pps rx: 477.5 Mbps 962.7 Kpps

And this is the same output with recycling enabled:

Overhead  Shared Object  Symbol
  12.75%  [mvpp2]        [k] mvpp2_rx
   9.56%  [kernel]       [k] __netif_receive_skb_core
   9.29%  [kernel]       [k] build_skb
   9.27%  [kernel]       [k] eth_type_trans
   8.39%  [kernel]       [k] kmem_cache_alloc
   7.85%  [kernel]       [k] kmem_cache_free
   7.36%  [kernel]       [k] page_pool_put_page
   6.45%  [kernel]       [k] dev_gro_receive
   4.72%  [kernel]       [k] __xdp_return
   3.06%  [kernel]       [k] page_pool_refill_alloc_cache

With packet rate above 2000 Kpps:

tx: 0 bps 0 pps rx: 1015 Mbps 2046 Kpps
tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps
tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps
tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps
tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps

The major performance increase is explained by the fact that the most
CPU-consuming functions (page_pool_release_page, get_page_from_freelist
and free_unref_page) are no longer called on a per-packet basis.

The test was done by sending 64 byte ethernet frames with an invalid
ethertype to the macchiatobin, so the packets are dropped early in the
RX path.

Signed-off-by: Matteo Croce
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index b2259bf1d299..9dceabece56c 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3847,6 +3847,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 	struct mvpp2_pcpu_stats ps = {};
 	enum dma_data_direction dma_dir;
 	struct bpf_prog *xdp_prog;
+	struct xdp_rxq_info *rxqi;
 	struct xdp_buff xdp;
 	int rx_received;
 	int rx_done = 0;
@@ -3912,15 +3913,15 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		else
 			frag_size = bm_pool->frag_size;
 
-		if (xdp_prog) {
-			struct xdp_rxq_info *xdp_rxq;
+		if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
+			rxqi = &rxq->xdp_rxq_short;
+		else
+			rxqi = &rxq->xdp_rxq_long;
 
-			if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
-				xdp_rxq = &rxq->xdp_rxq_short;
-			else
-				xdp_rxq = &rxq->xdp_rxq_long;
+		if (xdp_prog) {
+			xdp.rxq = rxqi;
 
-			xdp_init_buff(&xdp, PAGE_SIZE, xdp_rxq);
+			xdp_init_buff(&xdp, PAGE_SIZE, rxqi);
 			xdp_prepare_buff(&xdp, data,
 					 MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM,
 					 rx_bytes, false);
@@ -3964,7 +3965,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		}
 
 		if (pp)
-			page_pool_release_page(pp, virt_to_page(data));
+			skb_mark_for_recycle(skb, virt_to_page(data), pp);
 		else
 			dma_unmap_single_attrs(dev->dev.parent, dma_addr,
 					       bm_pool->buf_size, DMA_FROM_DEVICE,
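
[Editorial illustration, not part of the posted series: the conversion
pattern above, build the skb and then mark it for recycling instead of
releasing the page from the pool, generalizes to other page_pool
drivers. The function name below is invented for illustration.]

/* Hypothetical driver RX completion: the buffer stays DMA-mapped, and
 * skb_mark_for_recycle() replaces the old page_pool_release_page()
 * call, so the page flows back to the pool when the skb is freed.
 */
static struct sk_buff *demo_build_rx_skb(struct page_pool *pool,
					 void *data, unsigned int len)
{
	struct sk_buff *skb = build_skb(data, PAGE_SIZE);

	if (!skb)
		return NULL;

	skb_put(skb, len);
	skb_mark_for_recycle(skb, virt_to_page(data), pool);
	return skb;
}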
From patchwork Thu May 13 16:58:46 2021
From: Matteo Croce
To: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v5 5/5] mvneta: recycle buffers
Date: Thu, 13 May 2021 18:58:46 +0200
Message-Id: <20210513165846.23722-6-mcroce@linux.microsoft.com>
In-Reply-To: <20210513165846.23722-1-mcroce@linux.microsoft.com>

From: Matteo Croce

Use the new recycling API for page_pool. In a drop rate test, the
packet rate increased by 10%, from 269 Kpps to 296 Kpps.

perf top on a stock system shows:

Overhead  Shared Object  Symbol
  21.78%  [kernel]       [k] __pi___inval_dcache_area
  21.66%  [mvneta]       [k] mvneta_rx_swbm
   7.00%  [kernel]       [k] kmem_cache_alloc
   6.05%  [kernel]       [k] eth_type_trans
   4.44%  [kernel]       [k] kmem_cache_free.part.0
   3.80%  [kernel]       [k] __netif_receive_skb_core
   3.68%  [kernel]       [k] dev_gro_receive
   3.65%  [kernel]       [k] get_page_from_freelist
   3.43%  [kernel]       [k] page_pool_release_page
   3.35%  [kernel]       [k] free_unref_page

And this is the same output with recycling enabled:

Overhead  Shared Object  Symbol
  24.10%  [kernel]       [k] __pi___inval_dcache_area
  23.02%  [mvneta]       [k] mvneta_rx_swbm
   7.19%  [kernel]       [k] kmem_cache_alloc
   6.50%  [kernel]       [k] eth_type_trans
   4.93%  [kernel]       [k] __netif_receive_skb_core
   4.77%  [kernel]       [k] kmem_cache_free.part.0
   3.93%  [kernel]       [k] dev_gro_receive
   3.03%  [kernel]       [k] build_skb
   2.91%  [kernel]       [k] page_pool_put_page
   2.85%  [kernel]       [k] __xdp_return

The test was done with mausezahn on the TX side with 64 byte raw
ethernet frames.
Signed-off-by: Matteo Croce
---
 drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7d5cd9bc6c99..6d2f8dce4900 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2320,7 +2320,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 }
 
 static struct sk_buff *
-mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+mvneta_swbm_build_skb(struct mvneta_port *pp, struct page_pool *pool,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
@@ -2331,7 +2331,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
 
-	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
+	skb_mark_for_recycle(skb, virt_to_page(xdp->data), pool);
 
 	skb_reserve(skb, xdp->data - xdp->data_hard_start);
 	skb_put(skb, xdp->data_end - xdp->data);
@@ -2343,7 +2343,10 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 				skb_frag_page(frag), skb_frag_off(frag),
 				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+		/* We don't need to reset pp_recycle here. It's already set, so
+		 * just mark fragments for recycling.
+		 */
+		page_pool_store_mem_info(skb_frag_page(frag), pool);
 	}
 
 	return skb;
@@ -2425,7 +2428,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, frame_sz, &ps))
 			goto next;
 
-		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
+		skb = mvneta_swbm_build_skb(pp, pp, &xdp_buf, desc_status);
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);