From patchwork Wed Aug 18 03:32:17 2021
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 500356
From: Yunsheng Lin
Subject: [PATCH RFC 1/7] page_pool: refactor the page pool to support multi alloc context
Date: Wed, 18 Aug 2021 11:32:17 +0800
Message-ID: <1629257542-36145-2-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com>
References: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

Currently the page pool assumes the caller MUST guarantee safe, non-concurrent access, e.g. the rx softirq context. This patch refactors the page pool to support multiple allocation contexts, in order to enable tx recycling in the page pool (tx here means the 'socket to netdev' direction).
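As an illustration of the refactored interface, here is a minimal sketch of how a second allocation context could keep its own pp_alloc_cache and allocate through __page_pool_alloc_pages(); the tx_page_cache structure and helper names below are hypothetical and not part of this series:

/* Minimal sketch only: tx_page_cache, tx_alloc_page and tx_teardown are
 * hypothetical names used for illustration.
 */
#include <net/page_pool.h>

struct tx_page_cache {
	struct page_pool *pp;		/* pool shared with the rx path */
	struct pp_alloc_cache alloc;	/* cache owned by this context only */
};

/* Allocate via the caller-owned cache instead of pool->alloc, so this
 * context does not need to synchronize with the rx softirq user of the
 * same pool.
 */
static struct page *tx_alloc_page(struct tx_page_cache *tx, gfp_t gfp)
{
	return __page_pool_alloc_pages(tx->pp, &tx->alloc, gfp);
}

/* Return the cached pages once no allocation can run concurrently. */
static void tx_teardown(struct tx_page_cache *tx)
{
	page_pool_empty_alloc_cache_once(tx->pp, &tx->alloc);
}

This mirrors what the pfrag pool in patch 4 does with the pp_alloc_cache embedded in struct pfrag_pool.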
Signed-off-by: Yunsheng Lin --- include/net/page_pool.h | 10 ++++++ net/core/page_pool.c | 86 +++++++++++++++++++++++++++---------------------- 2 files changed, 57 insertions(+), 39 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index a408240..8d4ae4b 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -135,6 +135,9 @@ struct page_pool { }; struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp); +struct page *__page_pool_alloc_pages(struct page_pool *pool, + struct pp_alloc_cache *alloc, + gfp_t gfp); static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool) { @@ -155,6 +158,13 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool, return page_pool_alloc_frag(pool, offset, size, gfp); } +struct page *page_pool_drain_frag(struct page_pool *pool, struct page *page, + long drain_count); +void page_pool_free_frag(struct page_pool *pool, struct page *page, + long drain_count); +void page_pool_empty_alloc_cache_once(struct page_pool *pool, + struct pp_alloc_cache *alloc); + /* get the stored dma direction. A driver might decide to treat this locally and * avoid the extra cache line from page_pool to determine the direction */ diff --git a/net/core/page_pool.c b/net/core/page_pool.c index e140905..7194dcc 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -110,7 +110,8 @@ EXPORT_SYMBOL(page_pool_create); static void page_pool_return_page(struct page_pool *pool, struct page *page); noinline -static struct page *page_pool_refill_alloc_cache(struct page_pool *pool) +static struct page *page_pool_refill_alloc_cache(struct page_pool *pool, + struct pp_alloc_cache *alloc) { struct ptr_ring *r = &pool->ring; struct page *page; @@ -140,7 +141,7 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool) break; if (likely(page_to_nid(page) == pref_nid)) { - pool->alloc.cache[pool->alloc.count++] = page; + alloc->cache[alloc->count++] = page; } else { /* NUMA mismatch; * (1) release 1 page to page-allocator and @@ -151,27 +152,28 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool) page = NULL; break; } - } while (pool->alloc.count < PP_ALLOC_CACHE_REFILL); + } while (alloc->count < PP_ALLOC_CACHE_REFILL); /* Return last page */ - if (likely(pool->alloc.count > 0)) - page = pool->alloc.cache[--pool->alloc.count]; + if (likely(alloc->count > 0)) + page = alloc->cache[--alloc->count]; spin_unlock(&r->consumer_lock); return page; } /* fast path */ -static struct page *__page_pool_get_cached(struct page_pool *pool) +static struct page *__page_pool_get_cached(struct page_pool *pool, + struct pp_alloc_cache *alloc) { struct page *page; /* Caller MUST guarantee safe non-concurrent access, e.g. 
softirq */ - if (likely(pool->alloc.count)) { + if (likely(alloc->count)) { /* Fast-path */ - page = pool->alloc.cache[--pool->alloc.count]; + page = alloc->cache[--alloc->count]; } else { - page = page_pool_refill_alloc_cache(pool); + page = page_pool_refill_alloc_cache(pool, alloc); } return page; @@ -252,6 +254,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool, /* slow path */ noinline static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, + struct pp_alloc_cache *alloc, gfp_t gfp) { const int bulk = PP_ALLOC_CACHE_REFILL; @@ -265,13 +268,13 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, return __page_pool_alloc_page_order(pool, gfp); /* Unnecessary as alloc cache is empty, but guarantees zero count */ - if (unlikely(pool->alloc.count > 0)) - return pool->alloc.cache[--pool->alloc.count]; + if (unlikely(alloc->count > 0)) + return alloc->cache[--alloc->count]; /* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */ - memset(&pool->alloc.cache, 0, sizeof(void *) * bulk); + memset(alloc->cache, 0, sizeof(void *) * bulk); - nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache); + nr_pages = alloc_pages_bulk_array(gfp, bulk, alloc->cache); if (unlikely(!nr_pages)) return NULL; @@ -279,7 +282,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, * page element have not been (possibly) DMA mapped. */ for (i = 0; i < nr_pages; i++) { - page = pool->alloc.cache[i]; + page = alloc->cache[i]; if ((pp_flags & PP_FLAG_DMA_MAP) && unlikely(!page_pool_dma_map(pool, page))) { put_page(page); @@ -287,7 +290,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, } page_pool_set_pp_info(pool, page); - pool->alloc.cache[pool->alloc.count++] = page; + alloc->cache[alloc->count++] = page; /* Track how many pages are held 'in-flight' */ pool->pages_state_hold_cnt++; trace_page_pool_state_hold(pool, page, @@ -295,8 +298,8 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, } /* Return last page */ - if (likely(pool->alloc.count > 0)) - page = pool->alloc.cache[--pool->alloc.count]; + if (likely(alloc->count > 0)) + page = alloc->cache[--alloc->count]; else page = NULL; @@ -307,19 +310,27 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, /* For using page_pool replace: alloc_pages() API calls, but provide * synchronization guarantee for allocation side. 
*/ -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) +struct page *__page_pool_alloc_pages(struct page_pool *pool, + struct pp_alloc_cache *alloc, + gfp_t gfp) { struct page *page; /* Fast-path: Get a page from cache */ - page = __page_pool_get_cached(pool); + page = __page_pool_get_cached(pool, alloc); if (page) return page; /* Slow-path: cache empty, do real allocation */ - page = __page_pool_alloc_pages_slow(pool, gfp); + page = __page_pool_alloc_pages_slow(pool, alloc, gfp); return page; } +EXPORT_SYMBOL(__page_pool_alloc_pages); + +struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) +{ + return __page_pool_alloc_pages(pool, &pool->alloc, gfp); +} EXPORT_SYMBOL(page_pool_alloc_pages); /* Calculate distance between two u32 values, valid if distance is below 2^(31) @@ -522,11 +533,9 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data, } EXPORT_SYMBOL(page_pool_put_page_bulk); -static struct page *page_pool_drain_frag(struct page_pool *pool, - struct page *page) +struct page *page_pool_drain_frag(struct page_pool *pool, struct page *page, + long drain_count) { - long drain_count = BIAS_MAX - pool->frag_users; - /* Some user is still using the page frag */ if (likely(page_pool_atomic_sub_frag_count_return(page, drain_count))) @@ -543,13 +552,9 @@ static struct page *page_pool_drain_frag(struct page_pool *pool, return NULL; } -static void page_pool_free_frag(struct page_pool *pool) +void page_pool_free_frag(struct page_pool *pool, struct page *page, + long drain_count) { - long drain_count = BIAS_MAX - pool->frag_users; - struct page *page = pool->frag_page; - - pool->frag_page = NULL; - if (!page || page_pool_atomic_sub_frag_count_return(page, drain_count)) return; @@ -572,7 +577,8 @@ struct page *page_pool_alloc_frag(struct page_pool *pool, *offset = pool->frag_offset; if (page && *offset + size > max_size) { - page = page_pool_drain_frag(pool, page); + page = page_pool_drain_frag(pool, page, + BIAS_MAX - pool->frag_users); if (page) goto frag_reset; } @@ -628,26 +634,26 @@ static void page_pool_free(struct page_pool *pool) kfree(pool); } -static void page_pool_empty_alloc_cache_once(struct page_pool *pool) +void page_pool_empty_alloc_cache_once(struct page_pool *pool, + struct pp_alloc_cache *alloc) { struct page *page; - if (pool->destroy_cnt) - return; - /* Empty alloc cache, assume caller made sure this is * no-longer in use, and page_pool_alloc_pages() cannot be * call concurrently. 
*/ - while (pool->alloc.count) { - page = pool->alloc.cache[--pool->alloc.count]; + while (alloc->count) { + page = alloc->cache[--alloc->count]; page_pool_return_page(pool, page); } } static void page_pool_scrub(struct page_pool *pool) { - page_pool_empty_alloc_cache_once(pool); + if (!pool->destroy_cnt) + page_pool_empty_alloc_cache_once(pool, &pool->alloc); + pool->destroy_cnt++; /* No more consumers should exist, but producers could still @@ -705,7 +711,9 @@ void page_pool_destroy(struct page_pool *pool) if (!page_pool_put(pool)) return; - page_pool_free_frag(pool); + page_pool_free_frag(pool, pool->frag_page, + BIAS_MAX - pool->frag_users); + pool->frag_page = NULL; if (!page_pool_release(pool)) return;

From patchwork Wed Aug 18 03:32:18 2021
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 499617
From: Yunsheng Lin
Subject: [PATCH RFC 2/7] skbuff: add interface to manipulate frag count for tx recycling
Date: Wed, 18 Aug 2021 11:32:18 +0800
Message-ID: <1629257542-36145-3-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com>
References: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

The skb->pp_recycle and page->pp_magic fields may not be enough to track whether a frag page is from the page pool after __skb_frag_ref() is called, mostly because of a
data race, see: commit 2cc3aeb5eccc ("skbuff: Fix a potential race while recycling page_pool packets"). As the case of tcp, there may be fragmenting, coalescing or retransmiting case that might lose the track if a frag page is from page pool or not. So increment the frag count when __skb_frag_ref() is called, and use the bit 0 in frag->bv_page to indicate if a page is from a page pool, which automically pass down to another frag->bv_page when doing a '*new_frag = *frag' or memcpying the shinfo. It seems we could do the trick for rx too if it makes sense. Signed-off-by: Yunsheng Lin --- include/linux/skbuff.h | 43 ++++++++++++++++++++++++++++++++++++++++--- include/net/page_pool.h | 5 +++++ 2 files changed, 45 insertions(+), 3 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 6bdb0db..2878d26 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -331,6 +331,11 @@ static inline unsigned int skb_frag_size(const skb_frag_t *frag) return frag->bv_len; } +static inline bool skb_frag_is_pp(const skb_frag_t *frag) +{ + return (unsigned long)frag->bv_page & 1UL; +} + /** * skb_frag_size_set() - Sets the size of a skb fragment * @frag: skb fragment @@ -2190,6 +2195,21 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i, skb->pfmemalloc = true; } +static inline void __skb_fill_pp_page_desc(struct sk_buff *skb, int i, + struct page *page, int off, + int size) +{ + skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + + frag->bv_page = (struct page *)((unsigned long)page | 0x1UL); + frag->bv_offset = off; + skb_frag_size_set(frag, size); + + page = compound_head(page); + if (page_is_pfmemalloc(page)) + skb->pfmemalloc = true; +} + /** * skb_fill_page_desc - initialise a paged fragment in an skb * @skb: buffer containing fragment to be initialised @@ -2211,6 +2231,14 @@ static inline void skb_fill_page_desc(struct sk_buff *skb, int i, skb_shinfo(skb)->nr_frags = i + 1; } +static inline void skb_fill_pp_page_desc(struct sk_buff *skb, int i, + struct page *page, int off, + int size) +{ + __skb_fill_pp_page_desc(skb, i, page, off, size); + skb_shinfo(skb)->nr_frags = i + 1; +} + void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off, int size, unsigned int truesize); @@ -3062,7 +3090,10 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto, */ static inline struct page *skb_frag_page(const skb_frag_t *frag) { - return frag->bv_page; + unsigned long page = (unsigned long)frag->bv_page; + + page &= ~1UL; + return (struct page *)page; } /** @@ -3073,7 +3104,12 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag) */ static inline void __skb_frag_ref(skb_frag_t *frag) { - get_page(skb_frag_page(frag)); + struct page *page = skb_frag_page(frag); + + if (skb_frag_is_pp(frag)) + page_pool_atomic_inc_frag_count(page); + else + get_page(page); } /** @@ -3101,7 +3137,8 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle) struct page *page = skb_frag_page(frag); #ifdef CONFIG_PAGE_POOL - if (recycle && page_pool_return_skb_page(page)) + if ((recycle || skb_frag_is_pp(frag)) && + page_pool_return_skb_page(page)) return; #endif put_page(page); diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 8d4ae4b..86babb2 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -270,6 +270,11 @@ static inline long page_pool_atomic_sub_frag_count_return(struct page *page, return ret; } +static void page_pool_atomic_inc_frag_count(struct page *page) +{ + 
atomic_long_inc(&page->pp_frag_count); +} + static inline bool is_page_pool_compiled_in(void) { #ifdef CONFIG_PAGE_POOL

From patchwork Wed Aug 18 03:32:19 2021
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 500357
From: Yunsheng Lin
Subject: [PATCH RFC 3/7] net: add NAPI api to register and retrieve the page pool ptr
Date: Wed, 18 Aug 2021 11:32:19 +0800
Message-ID: <1629257542-36145-4-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com>
References: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

As tx recycling is built upon the busy polling infrastructure, and busy polling is keyed by napi_id, add an API for a driver to register a page pool with a NAPI instance, and an API for the socket layer to retrieve the page pool corresponding to a NAPI.
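For reference, a minimal sketch of how the two new entry points might be used together; the my_*() wrappers below are hypothetical and only illustrate the calling convention:

/* Minimal sketch only: the my_*() helpers are hypothetical. */
#include <linux/netdevice.h>
#include <linux/rcupdate.h>
#include <net/page_pool.h>

/* Driver side: register the ring's page pool together with its NAPI
 * instance, so the napi_id recorded by busy polling can later be
 * resolved back to the pool.
 */
static void my_napi_setup(struct net_device *dev, struct napi_struct *napi,
			  int (*poll)(struct napi_struct *, int),
			  struct page_pool *pool)
{
	netif_recyclable_napi_add(dev, napi, poll, NAPI_POLL_WEIGHT, pool);
}

/* Socket side: resolve sk->sk_napi_id to the registered pool.  The napi
 * hash lookup must run under rcu_read_lock(), as patch 4 does in
 * pfrag_pool_updata_napi().
 */
static struct page_pool *my_lookup_pool(unsigned int napi_id)
{
	struct page_pool *pp;

	rcu_read_lock();
	pp = page_pool_get_by_napi_id(napi_id);
	rcu_read_unlock();

	return pp;
}

As in patch 4, the caller is expected to ensure the pool itself outlives the lookup, since only the pointer is returned here.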
Signed-off-by: Yunsheng Lin --- include/linux/netdevice.h | 9 +++++++++ net/core/dev.c | 34 +++++++++++++++++++++++++++++++--- 2 files changed, 40 insertions(+), 3 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 2f03cd9..51a1169 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -40,6 +40,7 @@ #endif #include #include +#include #include #include @@ -336,6 +337,7 @@ struct napi_struct { struct hlist_node napi_hash_node; unsigned int napi_id; struct task_struct *thread; + struct page_pool *pp; }; enum { @@ -349,6 +351,7 @@ enum { NAPI_STATE_PREFER_BUSY_POLL, /* prefer busy-polling over softirq processing*/ NAPI_STATE_THREADED, /* The poll is performed inside its own thread*/ NAPI_STATE_SCHED_THREADED, /* Napi is currently scheduled in threaded mode */ + NAPI_STATE_RECYCLABLE, /* Support tx page recycling */ }; enum { @@ -362,6 +365,7 @@ enum { NAPIF_STATE_PREFER_BUSY_POLL = BIT(NAPI_STATE_PREFER_BUSY_POLL), NAPIF_STATE_THREADED = BIT(NAPI_STATE_THREADED), NAPIF_STATE_SCHED_THREADED = BIT(NAPI_STATE_SCHED_THREADED), + NAPIF_STATE_RECYCLABLE = BIT(NAPI_STATE_RECYCLABLE), }; enum gro_result { @@ -2473,6 +2477,10 @@ static inline void *netdev_priv(const struct net_device *dev) void netif_napi_add(struct net_device *dev, struct napi_struct *napi, int (*poll)(struct napi_struct *, int), int weight); +void netif_recyclable_napi_add(struct net_device *dev, struct napi_struct *napi, + int (*poll)(struct napi_struct *, int), + int weight, struct page_pool *pool); + /** * netif_tx_napi_add - initialize a NAPI context * @dev: network device @@ -2997,6 +3005,7 @@ struct net_device *dev_get_by_index(struct net *net, int ifindex); struct net_device *__dev_get_by_index(struct net *net, int ifindex); struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex); struct net_device *dev_get_by_napi_id(unsigned int napi_id); +struct page_pool *page_pool_get_by_napi_id(unsigned int napi_id); int netdev_get_name(struct net *net, char *name, int ifindex); int dev_restart(struct net_device *dev); int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb); diff --git a/net/core/dev.c b/net/core/dev.c index 74fd402..d6b905b 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -935,6 +935,19 @@ struct net_device *dev_get_by_napi_id(unsigned int napi_id) } EXPORT_SYMBOL(dev_get_by_napi_id); +struct page_pool *page_pool_get_by_napi_id(unsigned int napi_id) +{ + struct napi_struct *napi; + struct page_pool *pp = NULL; + + napi = napi_by_id(napi_id); + if (napi) + pp = napi->pp; + + return pp; +} +EXPORT_SYMBOL(page_pool_get_by_napi_id); + /** * netdev_get_name - get a netdevice name, knowing its ifindex. 
* @net: network namespace @@ -6757,7 +6770,8 @@ EXPORT_SYMBOL(napi_busy_loop); static void napi_hash_add(struct napi_struct *napi) { - if (test_bit(NAPI_STATE_NO_BUSY_POLL, &napi->state)) + if (test_bit(NAPI_STATE_NO_BUSY_POLL, &napi->state) || + !test_bit(NAPI_STATE_RECYCLABLE, &napi->state)) return; spin_lock(&napi_hash_lock); @@ -6860,8 +6874,10 @@ int dev_set_threaded(struct net_device *dev, bool threaded) } EXPORT_SYMBOL(dev_set_threaded); -void netif_napi_add(struct net_device *dev, struct napi_struct *napi, - int (*poll)(struct napi_struct *, int), int weight) +void netif_recyclable_napi_add(struct net_device *dev, + struct napi_struct *napi, + int (*poll)(struct napi_struct *, int), + int weight, struct page_pool *pool) { if (WARN_ON(test_and_set_bit(NAPI_STATE_LISTED, &napi->state))) return; @@ -6886,6 +6902,11 @@ void netif_napi_add(struct net_device *dev, struct napi_struct *napi, set_bit(NAPI_STATE_SCHED, &napi->state); set_bit(NAPI_STATE_NPSVC, &napi->state); list_add_rcu(&napi->dev_list, &dev->napi_list); + if (pool) { + napi->pp = pool; + set_bit(NAPI_STATE_RECYCLABLE, &napi->state); + } + napi_hash_add(napi); /* Create kthread for this napi if dev->threaded is set. * Clear dev->threaded if kthread creation failed so that @@ -6894,6 +6915,13 @@ void netif_napi_add(struct net_device *dev, struct napi_struct *napi, if (dev->threaded && napi_kthread_create(napi)) dev->threaded = 0; } +EXPORT_SYMBOL(netif_recyclable_napi_add); + +void netif_napi_add(struct net_device *dev, struct napi_struct *napi, + int (*poll)(struct napi_struct *, int), int weight) +{ + netif_recyclable_napi_add(dev, napi, poll, weight, NULL); +} EXPORT_SYMBOL(netif_napi_add); void napi_disable(struct napi_struct *n) From patchwork Wed Aug 18 03:32:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 499618 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 29F04C4338F for ; Wed, 18 Aug 2021 03:33:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 149A9610FB for ; Wed, 18 Aug 2021 03:33:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237515AbhHRDe3 (ORCPT ); Tue, 17 Aug 2021 23:34:29 -0400 Received: from szxga01-in.huawei.com ([45.249.212.187]:8034 "EHLO szxga01-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237149AbhHRDeW (ORCPT ); Tue, 17 Aug 2021 23:34:22 -0400 Received: from dggemv704-chm.china.huawei.com (unknown [172.30.72.54]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4GqD4y3XslzYqht; Wed, 18 Aug 2021 11:33:22 +0800 (CST) Received: from dggpemm500005.china.huawei.com (7.185.36.74) by dggemv704-chm.china.huawei.com (10.3.19.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Wed, 18 Aug 2021 11:33:28 +0800 Received: from localhost.localdomain (10.69.192.56) by dggpemm500005.china.huawei.com (7.185.36.74) with Microsoft SMTP Server (version=TLS1_2, 
cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Wed, 18 Aug 2021 11:33:28 +0800 From: Yunsheng Lin To: , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH RFC 4/7] net: pfrag_pool: add pfrag pool support based on page pool Date: Wed, 18 Aug 2021 11:32:20 +0800 Message-ID: <1629257542-36145-5-git-send-email-linyunsheng@huawei.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com> References: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To dggpemm500005.china.huawei.com (7.185.36.74) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org This patch add the pfrag pool support based on page pool. Caller need to call pfrag_pool_updata_napi() to connect the pfrag pool to the page pool through napi. Signed-off-by: Yunsheng Lin --- include/net/pfrag_pool.h | 24 +++++++++++++ net/core/Makefile | 1 + net/core/pfrag_pool.c | 92 ++++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 117 insertions(+) create mode 100644 include/net/pfrag_pool.h create mode 100644 net/core/pfrag_pool.c diff --git a/include/net/pfrag_pool.h b/include/net/pfrag_pool.h new file mode 100644 index 0000000..2abea26 --- /dev/null +++ b/include/net/pfrag_pool.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _PAGE_FRAG_H +#define _PAGE_FRAG_H + +#include +#include +#include +#include + +struct pfrag_pool { + struct page_frag frag; + long frag_users; + unsigned int napi_id; + struct page_pool *pp; + struct pp_alloc_cache alloc; +}; + +void pfrag_pool_updata_napi(struct pfrag_pool *pool, + unsigned int napi_id); +struct page_frag *pfrag_pool_refill(struct pfrag_pool *pool, gfp_t gfp); +void pfrag_pool_commit(struct pfrag_pool *pool, unsigned int sz, + bool merge); +void pfrag_pool_flush(struct pfrag_pool *pool); +#endif diff --git a/net/core/Makefile b/net/core/Makefile index 35ced62..171f839 100644 --- a/net/core/Makefile +++ b/net/core/Makefile @@ -14,6 +14,7 @@ obj-y += dev.o dev_addr_lists.o dst.o netevent.o \ fib_notifier.o xdp.o flow_offload.o obj-y += net-sysfs.o +obj-y += pfrag_pool.o obj-$(CONFIG_PAGE_POOL) += page_pool.o obj-$(CONFIG_PROC_FS) += net-procfs.o obj-$(CONFIG_NET_PKTGEN) += pktgen.o diff --git a/net/core/pfrag_pool.c b/net/core/pfrag_pool.c new file mode 100644 index 0000000..6ad1383 --- /dev/null +++ b/net/core/pfrag_pool.c @@ -0,0 +1,92 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include + +#define BAIS_MAX (LONG_MAX / 2) + +void pfrag_pool_updata_napi(struct pfrag_pool *pool, + unsigned int napi_id) +{ + struct page_pool *pp; + + if (!pool || pool->napi_id == napi_id) + return; + + pr_info("frag pool %pK's napi id changed from %u to %u\n", + pool, pool->napi_id, napi_id); + + rcu_read_lock(); + pp = page_pool_get_by_napi_id(napi_id); + if (!pp) { + rcu_read_unlock(); + return; + } + + pool->napi_id = napi_id; + pool->pp = pp; + rcu_read_unlock(); +} +EXPORT_SYMBOL(pfrag_pool_updata_napi); + +struct page_frag *pfrag_pool_refill(struct pfrag_pool *pool, gfp_t gfp) +{ + struct page_frag *pfrag = &pool->frag; + + if (!pool || !pool->pp) + return NULL; + + if (pfrag->page) { + long drain_users; + + if (pfrag->offset < pfrag->size) + return pfrag; + + drain_users = BAIS_MAX - pool->frag_users; + if (page_pool_drain_frag(pool->pp, 
pfrag->page, drain_users)) + goto out; + } + + pfrag->page = __page_pool_alloc_pages(pool->pp, &pool->alloc, gfp); + if (unlikely(!pfrag->page)) + return NULL; + +out: + page_pool_set_frag_count(pfrag->page, BAIS_MAX); + pfrag->size = page_size(pfrag->page); + pool->frag_users = 0; + pfrag->offset = 0; + return pfrag; +} +EXPORT_SYMBOL(pfrag_pool_refill); + +void pfrag_pool_commit(struct pfrag_pool *pool, unsigned int sz, + bool merge) +{ + struct page_frag *pfrag = &pool->frag; + + pfrag->offset += ALIGN(sz, dma_get_cache_alignment()); + WARN_ON(pfrag->offset > pfrag->size); + + if (!merge) + pool->frag_users++; +} +EXPORT_SYMBOL(pfrag_pool_commit); + +void pfrag_pool_flush(struct pfrag_pool *pool) +{ + struct page_frag *pfrag = &pool->frag; + + page_pool_empty_alloc_cache_once(pool->pp, &pool->alloc); + + if (!pfrag->page) + return; + + page_pool_free_frag(pool->pp, pfrag->page, + BAIS_MAX - pool->frag_users); + pfrag->page = NULL; +} +EXPORT_SYMBOL(pfrag_pool_flush); From patchwork Wed Aug 18 03:32:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 500355 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9DDCFC432BE for ; Wed, 18 Aug 2021 03:34:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 826486108E for ; Wed, 18 Aug 2021 03:34:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237585AbhHRDee (ORCPT ); Tue, 17 Aug 2021 23:34:34 -0400 Received: from szxga02-in.huawei.com ([45.249.212.188]:8874 "EHLO szxga02-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237271AbhHRDeY (ORCPT ); Tue, 17 Aug 2021 23:34:24 -0400 Received: from dggemv703-chm.china.huawei.com (unknown [172.30.72.55]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4GqD0n3445z8sZf; Wed, 18 Aug 2021 11:29:45 +0800 (CST) Received: from dggpemm500005.china.huawei.com (7.185.36.74) by dggemv703-chm.china.huawei.com (10.3.19.46) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Wed, 18 Aug 2021 11:33:29 +0800 Received: from localhost.localdomain (10.69.192.56) by dggpemm500005.china.huawei.com (7.185.36.74) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Wed, 18 Aug 2021 11:33:28 +0800 From: Yunsheng Lin To: , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH RFC 5/7] sock: support refilling pfrag from pfrag_pool Date: Wed, 18 Aug 2021 11:32:21 +0800 Message-ID: <1629257542-36145-6-git-send-email-linyunsheng@huawei.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com> References: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To dggpemm500005.china.huawei.com (7.185.36.74) 
X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org As previous patch has added pfrag pool based on the page pool, so support refilling pfrag from the new pfrag pool for tcpv4. Signed-off-by: Yunsheng Lin --- include/net/sock.h | 1 + net/core/sock.c | 9 +++++++++ net/ipv4/tcp.c | 34 ++++++++++++++++++++++++++-------- 3 files changed, 36 insertions(+), 8 deletions(-) diff --git a/include/net/sock.h b/include/net/sock.h index 6e76145..af40084 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -455,6 +455,7 @@ struct sock { unsigned long sk_pacing_rate; /* bytes per second */ unsigned long sk_max_pacing_rate; struct page_frag sk_frag; + struct pfrag_pool *sk_frag_pool; netdev_features_t sk_route_caps; netdev_features_t sk_route_nocaps; netdev_features_t sk_route_forced_caps; diff --git a/net/core/sock.c b/net/core/sock.c index aada649..53152c9 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -140,6 +140,7 @@ #include #include +#include static DEFINE_MUTEX(proto_list_mutex); static LIST_HEAD(proto_list); @@ -1934,6 +1935,11 @@ static void __sk_destruct(struct rcu_head *head) put_page(sk->sk_frag.page); sk->sk_frag.page = NULL; } + if (sk->sk_frag_pool) { + pfrag_pool_flush(sk->sk_frag_pool); + kfree(sk->sk_frag_pool); + sk->sk_frag_pool = NULL; + } if (sk->sk_peer_cred) put_cred(sk->sk_peer_cred); @@ -3134,6 +3140,9 @@ void sock_init_data(struct socket *sock, struct sock *sk) sk->sk_frag.page = NULL; sk->sk_frag.offset = 0; + + sk->sk_frag_pool = kzalloc(sizeof(*sk->sk_frag_pool), sk->sk_allocation); + sk->sk_peek_off = -1; sk->sk_peer_pid = NULL; diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index f931def..992dcbc 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -280,6 +280,7 @@ #include #include #include +#include /* Track pending CMSGs. 
*/ enum { @@ -1337,12 +1338,20 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size) if (err) goto do_fault; } else if (!zc) { - bool merge = true; + bool merge = true, pfrag_pool = true; int i = skb_shinfo(skb)->nr_frags; - struct page_frag *pfrag = sk_page_frag(sk); + struct page_frag *pfrag; - if (!sk_page_frag_refill(sk, pfrag)) - goto wait_for_space; + pfrag_pool_updata_napi(sk->sk_frag_pool, + READ_ONCE(sk->sk_napi_id)); + pfrag = pfrag_pool_refill(sk->sk_frag_pool, sk->sk_allocation); + if (!pfrag) { + pfrag = sk_page_frag(sk); + if (!sk_page_frag_refill(sk, pfrag)) + goto wait_for_space; + + pfrag_pool = false; + } if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) { @@ -1369,11 +1378,20 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size) if (merge) { skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); } else { - skb_fill_page_desc(skb, i, pfrag->page, - pfrag->offset, copy); - page_ref_inc(pfrag->page); + if (pfrag_pool) { + skb_fill_pp_page_desc(skb, i, pfrag->page, + pfrag->offset, copy); + } else { + page_ref_inc(pfrag->page); + skb_fill_page_desc(skb, i, pfrag->page, + pfrag->offset, copy); + } } - pfrag->offset += copy; + + if (pfrag_pool) + pfrag_pool_commit(sk->sk_frag_pool, copy, merge); + else + pfrag->offset += copy; } else { if (!sk_wmem_schedule(sk, copy)) goto wait_for_space; From patchwork Wed Aug 18 03:32:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 499619 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 67038C4320E for ; Wed, 18 Aug 2021 03:33:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 500A56108E for ; Wed, 18 Aug 2021 03:33:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237281AbhHRDeX (ORCPT ); Tue, 17 Aug 2021 23:34:23 -0400 Received: from szxga02-in.huawei.com ([45.249.212.188]:8872 "EHLO szxga02-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237103AbhHRDeU (ORCPT ); Tue, 17 Aug 2021 23:34:20 -0400 Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.55]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4GqD0h54PCz8sZR; Wed, 18 Aug 2021 11:29:40 +0800 (CST) Received: from dggpemm500005.china.huawei.com (7.185.36.74) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Wed, 18 Aug 2021 11:33:29 +0800 Received: from localhost.localdomain (10.69.192.56) by dggpemm500005.china.huawei.com (7.185.36.74) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Wed, 18 Aug 2021 11:33:29 +0800 From: Yunsheng Lin To: , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH RFC 6/7] net: hns3: support tx recycling in the hns3 driver Date: Wed, 18 Aug 2021 11:32:22 +0800 Message-ID: 
<1629257542-36145-7-git-send-email-linyunsheng@huawei.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com> References: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To dggpemm500005.china.huawei.com (7.185.36.74) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Use netif_recyclable_napi_add() to register page pool to the NAPI instance, and avoid doing the DMA mapping/unmapping when the page is from page pool. Signed-off-by: Yunsheng Lin --- drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 32 +++++++++++++++---------- 1 file changed, 19 insertions(+), 13 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c index fcbeb1f..ab86566 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c @@ -1689,12 +1689,18 @@ static int hns3_map_and_fill_desc(struct hns3_enet_ring *ring, void *priv, return 0; } else { skb_frag_t *frag = (skb_frag_t *)priv; + struct page *page = skb_frag_page(frag); size = skb_frag_size(frag); if (!size) return 0; - dma = skb_frag_dma_map(dev, frag, 0, size, DMA_TO_DEVICE); + if (skb_frag_is_pp(frag) && page->pp->p.dev == dev) { + dma = page_pool_get_dma_addr(page) + skb_frag_off(frag); + type = DESC_TYPE_PP_FRAG; + } else { + dma = skb_frag_dma_map(dev, frag, 0, size, DMA_TO_DEVICE); + } } if (unlikely(dma_mapping_error(dev, dma))) { @@ -4525,7 +4531,7 @@ static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv) ret = hns3_get_vector_ring_chain(tqp_vector, &vector_ring_chain); if (ret) - goto map_ring_fail; + return ret; ret = h->ae_algo->ops->map_ring_to_vector(h, tqp_vector->vector_irq, &vector_ring_chain); @@ -4533,19 +4539,10 @@ static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv) hns3_free_vector_ring_chain(tqp_vector, &vector_ring_chain); if (ret) - goto map_ring_fail; - - netif_napi_add(priv->netdev, &tqp_vector->napi, - hns3_nic_common_poll, NAPI_POLL_WEIGHT); + return ret; } return 0; - -map_ring_fail: - while (i--) - netif_napi_del(&priv->tqp_vector[i].napi); - - return ret; } static void hns3_nic_init_coal_cfg(struct hns3_nic_priv *priv) @@ -4754,7 +4751,7 @@ static void hns3_alloc_page_pool(struct hns3_enet_ring *ring) (PAGE_SIZE << hns3_page_order(ring)), .nid = dev_to_node(ring_to_dev(ring)), .dev = ring_to_dev(ring), - .dma_dir = DMA_FROM_DEVICE, + .dma_dir = DMA_BIDIRECTIONAL, .offset = 0, .max_len = PAGE_SIZE << hns3_page_order(ring), }; @@ -4923,6 +4920,15 @@ int hns3_init_all_ring(struct hns3_nic_priv *priv) u64_stats_init(&priv->ring[i].syncp); } + for (i = 0; i < priv->vector_num; i++) { + struct hns3_enet_tqp_vector *tqp_vector; + + tqp_vector = &priv->tqp_vector[i]; + netif_recyclable_napi_add(priv->netdev, &tqp_vector->napi, + hns3_nic_common_poll, NAPI_POLL_WEIGHT, + tqp_vector->rx_group.ring->page_pool); + } + return 0; out_when_alloc_ring_memory: