From patchwork Wed Feb 24 18:56:41 2021
X-Patchwork-Submitter: Jesper Dangaard Brouer
X-Patchwork-Id: 387088
Subject: [PATCH RFC net-next 1/3] net: page_pool: refactor dma_map into own
 function page_pool_dma_map
From: Jesper Dangaard Brouer
To: Mel Gorman, linux-mm@kvack.org
Cc: Jesper Dangaard Brouer, chuck.lever@oracle.com, netdev@vger.kernel.org,
 linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Wed, 24 Feb 2021 19:56:41 +0100
Message-ID: <161419300107.2718959.18302883670835746249.stgit@firesoul>
In-Reply-To: <161419296941.2718959.12575257358107256094.stgit@firesoul>
References: <161419296941.2718959.12575257358107256094.stgit@firesoul>

In preparation for the next patch, move the DMA mapping into its own
function, as this will make it easier to follow the changes.
Signed-off-by: Jesper Dangaard Brouer
---
 net/core/page_pool.c |   49 +++++++++++++++++++++++++++++--------------------
 1 file changed, 29 insertions(+), 20 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ad8b0707af04..50d52aa6fbeb 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -180,6 +180,31 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
 					 pool->p.dma_dir);
 }
 
+static struct page *page_pool_dma_map(struct page_pool *pool,
+				      struct page *page)
+{
+	dma_addr_t dma;
+
+	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
+	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
+	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
+	 * This mapping is kept for lifetime of page, until leaving pool.
+	 */
+	dma = dma_map_page_attrs(pool->p.dev, page, 0,
+				 (PAGE_SIZE << pool->p.order),
+				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+	if (dma_mapping_error(pool->p.dev, dma)) {
+		put_page(page);
+		return NULL;
+	}
+	page->dma_addr = dma;
+
+	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+
+	return page;
+}
+
 /* slow path */
 noinline
 static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
@@ -187,7 +212,6 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 {
 	struct page *page;
 	gfp_t gfp = _gfp;
-	dma_addr_t dma;
 
 	/* We could always set __GFP_COMP, and avoid this branch, as
 	 * prep_new_page() can handle order-0 with __GFP_COMP.
@@ -211,27 +235,12 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	if (!page)
 		return NULL;
 
-	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
-		goto skip_dma_map;
-
-	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
-	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
-	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
-	 * This mapping is kept for lifetime of page, until leaving pool.
-	 */
-	dma = dma_map_page_attrs(pool->p.dev, page, 0,
-				 (PAGE_SIZE << pool->p.order),
-				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
-	if (dma_mapping_error(pool->p.dev, dma)) {
-		put_page(page);
-		return NULL;
+	if (pool->p.flags & PP_FLAG_DMA_MAP) {
+		page = page_pool_dma_map(pool, page);
+		if (!page)
+			return NULL;
 	}
-	page->dma_addr = dma;
-
-	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
 
-skip_dma_map:
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
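For context, the mapping consolidated into page_pool_dma_map() is only set
up when a driver creates its pool with PP_FLAG_DMA_MAP. Below is a minimal
sketch of such a setup; the function name rxq_create_pool and the parameter
values are hypothetical, while struct page_pool_params and page_pool_create()
are the real page_pool API:

	#include <net/page_pool.h>

	/* Hypothetical driver RX setup.  With PP_FLAG_DMA_MAP set, every
	 * page entering the pool is mapped via page_pool_dma_map() above,
	 * and the DMA address stays in page->dma_addr until the page
	 * leaves the pool again.
	 */
	static struct page_pool *rxq_create_pool(struct device *dev, int nid)
	{
		struct page_pool_params pp_params = {
			.flags		= PP_FLAG_DMA_MAP,
			.order		= 0,		/* order-0 pages */
			.pool_size	= 256,		/* recycle ring size */
			.nid		= nid,		/* NUMA node for allocs */
			.dev		= dev,		/* device doing DMA */
			.dma_dir	= DMA_FROM_DEVICE,
		};

		return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
	}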
From patchwork Wed Feb 24 18:56:46 2021
X-Patchwork-Submitter: Jesper Dangaard Brouer
X-Patchwork-Id: 387087
Subject: [PATCH RFC net-next 2/3] net: page_pool: use alloc_pages_bulk in
 refill code path
From: Jesper Dangaard Brouer
To: Mel Gorman, linux-mm@kvack.org
Cc: Jesper Dangaard Brouer, chuck.lever@oracle.com, netdev@vger.kernel.org,
 linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Wed, 24 Feb 2021 19:56:46 +0100
Message-ID: <161419300618.2718959.11165518489200268845.stgit@firesoul>
In-Reply-To: <161419296941.2718959.12575257358107256094.stgit@firesoul>
References: <161419296941.2718959.12575257358107256094.stgit@firesoul>

There are cases where the page_pool needs to refill with pages from the
page allocator. Some workloads cause the page_pool to release pages
instead of recycling them. For these workloads it can improve performance
to bulk-allocate pages from the page allocator to refill the alloc cache.

For an XDP-redirect workload with a 100G mlx5 driver (which uses
page_pool) that redirects xdp_frame packets into a veth, the veth does
XDP_PASS to create an SKB from the xdp_frame, which means the page cannot
be returned to the page_pool. In this case, we saw[1] an improvement of
18.8% from using the alloc_pages_bulk API (3,677,958 pps -> 4,368,926 pps).

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org
Signed-off-by: Jesper Dangaard Brouer
---
 net/core/page_pool.c |   65 ++++++++++++++++++++++++++++++++------------------
 1 file changed, 41 insertions(+), 24 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 50d52aa6fbeb..e0ae95fc59f0 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -210,44 +210,61 @@ noinline
 static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 						 gfp_t _gfp)
 {
-	struct page *page;
+	const int bulk = PP_ALLOC_CACHE_REFILL;
+	struct page *page, *next, *first_page;
+	unsigned int pp_flags = pool->p.flags;
+	unsigned int pp_order = pool->p.order;
+	int pp_nid = pool->p.nid;
+	LIST_HEAD(page_list);
 	gfp_t gfp = _gfp;
 
-	/* We could always set __GFP_COMP, and avoid this branch, as
-	 * prep_new_page() can handle order-0 with __GFP_COMP.
-	 */
-	if (pool->p.order)
+	/* Don't support bulk alloc for high-order pages */
+	if (unlikely(pp_order)) {
 		gfp |= __GFP_COMP;
+		first_page = alloc_pages_node(pp_nid, gfp, pp_order);
+		if (unlikely(!first_page))
+			return NULL;
+		goto out;
+	}
 
-	/* FUTURE development:
-	 *
-	 * Current slow-path essentially falls back to single page
-	 * allocations, which doesn't improve performance. This code
-	 * need bulk allocation support from the page allocator code.
-	 */
-
-	/* Cache was empty, do real allocation */
-#ifdef CONFIG_NUMA
-	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
-#else
-	page = alloc_pages(gfp, pool->p.order);
-#endif
-	if (!page)
+	if (unlikely(!__alloc_pages_bulk_nodemask(gfp, pp_nid, NULL,
+						  bulk, &page_list)))
 		return NULL;
 
+	/* First page is extracted and returned to caller */
+	first_page = list_first_entry(&page_list, struct page, lru);
+	list_del(&first_page->lru);
+
+	/* Remaining pages are stored in alloc.cache */
+	list_for_each_entry_safe(page, next, &page_list, lru) {
+		list_del(&page->lru);
+		if (pp_flags & PP_FLAG_DMA_MAP) {
+			page = page_pool_dma_map(pool, page);
+			if (!page)
+				continue;
+		}
+		if (likely(pool->alloc.count < PP_ALLOC_CACHE_SIZE)) {
+			pool->alloc.cache[pool->alloc.count++] = page;
+			pool->pages_state_hold_cnt++;
+			trace_page_pool_state_hold(pool, page,
+						   pool->pages_state_hold_cnt);
+		} else {
+			put_page(page);
+		}
+	}
+out:
 	if (pool->p.flags & PP_FLAG_DMA_MAP) {
-		page = page_pool_dma_map(pool, page);
-		if (!page)
+		first_page = page_pool_dma_map(pool, first_page);
+		if (!first_page)
 			return NULL;
 	}
 
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
-
-	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
+	trace_page_pool_state_hold(pool, first_page, pool->pages_state_hold_cnt);
 
 	/* When page just alloc'ed is should/must have refcnt 1. */
-	return page;
+	return first_page;
 }
 
 /* For using page_pool replace: alloc_pages() API calls, but provide
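To see why refilling pool->alloc.cache in bulk pays off, recall the consumer
side: the allocation fast path simply pops from that array and only falls
into __page_pool_alloc_pages_slow() when it runs dry. A paraphrased sketch
of that fast path follows; the helper name pp_alloc_cache_pop is made up
here, but the cache layout matches the code in the patch above:

	/* Paraphrased from the page_pool alloc fast path: pop from the
	 * alloc.cache array that the bulk refill above fills up.
	 */
	static struct page *pp_alloc_cache_pop(struct page_pool *pool)
	{
		struct page *page = NULL;

		if (likely(pool->alloc.count))
			page = pool->alloc.cache[--pool->alloc.count]; /* LIFO */

		return page;	/* NULL => caller takes the slow refill path */
	}

With the bulk refill, one trip through the slow path leaves up to
PP_ALLOC_CACHE_REFILL pages behind in the cache, so subsequent allocations
stay on this fast path instead of hitting the page allocator one page at
a time.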
From patchwork Wed Feb 24 18:56:51 2021
X-Patchwork-Submitter: Jesper Dangaard Brouer
X-Patchwork-Id: 387591
Subject: [PATCH RFC net-next 3/3] mm: make zone->free_area[order] access
 faster
From: Jesper Dangaard Brouer
To: Mel Gorman, linux-mm@kvack.org
Cc: Jesper Dangaard Brouer, chuck.lever@oracle.com, netdev@vger.kernel.org,
 linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Wed, 24 Feb 2021 19:56:51 +0100
Message-ID: <161419301128.2718959.4838557038019199822.stgit@firesoul>
In-Reply-To: <161419296941.2718959.12575257358107256094.stgit@firesoul>
References: <161419296941.2718959.12575257358107256094.stgit@firesoul>

Avoid multiplication (imul) operations when accessing:
 zone->free_area[order].nr_free

This was really tricky to find. I was puzzled why perf reported that
rmqueue_bulk was spending 44% of its time in an imul operation:

        │     del_page_from_free_list():
  44,54 │ e2:   imul   $0x58,%rax,%rax

This operation was generated by the compiler because struct free_area
has a size of 88 bytes, or 0x58 hex. The compiler cannot find a shift
operation to use here and instead chooses the more expensive imul to
compute the offset into the free_area[] array.

The patch aligns struct free_area to a cache line, which causes the
compiler to avoid the imul operation. The imul operation is very fast
on modern Intel CPUs, but a shift is still cheaper. To help the fast
path that decrements 'nr_free', the member 'nr_free' is moved to be the
first element, which saves one 'add' operation.

Looking up instruction latencies, this exchanges a 3-cycle imul for a
1-cycle shl, saving 2 cycles. It does trade some space to do this.

Used: gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
Signed-off-by: Jesper Dangaard Brouer
---
 include/linux/mmzone.h |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b593316bff3d..4d83201717e1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -93,10 +93,12 @@ extern int page_group_by_mobility_disabled;
 #define get_pageblock_migratetype(page)					\
 	get_pfnblock_flags_mask(page, page_to_pfn(page), MIGRATETYPE_MASK)
 
+/* Aligned struct to make zone->free_area[order] access faster */
 struct free_area {
-	struct list_head	free_list[MIGRATE_TYPES];
 	unsigned long		nr_free;
-};
+	unsigned long		__pad_to_align_free_list;
+	struct list_head	free_list[MIGRATE_TYPES];
+} ____cacheline_aligned_in_smp;
 
 static inline struct page *get_page_from_free_area(struct free_area *area,
 					    int migratetype)
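The size arithmetic can be checked outside the kernel. The standalone demo
below assumes MIGRATE_TYPES == 5 and 64-bit pointers, which yields the
88-byte (0x58) stride from the commit message; the rearranged, cache-line
aligned layout rounds the struct up to 128 bytes, a power-of-two stride the
compiler can address with a single shl instead of an imul:

	#include <stdio.h>

	#define MIGRATE_TYPES 5	/* assumption matching the 0x58 figure above */

	struct list_head { struct list_head *next, *prev; };

	struct free_area_old {		/* layout before this patch */
		struct list_head free_list[MIGRATE_TYPES];
		unsigned long nr_free;
	};	/* 5*16 + 8 = 88 bytes: free_area[order] needs an imul */

	struct free_area_new {		/* layout after this patch */
		unsigned long nr_free;
		unsigned long __pad_to_align_free_list;
		struct list_head free_list[MIGRATE_TYPES];
	} __attribute__((aligned(64)));	/* stand-in for ____cacheline_aligned_in_smp */

	int main(void)
	{
		/* 88-byte stride -> imul; 128 bytes (1 << 7) -> shl */
		printf("old stride: %zu bytes\n", sizeof(struct free_area_old));
		printf("new stride: %zu bytes\n", sizeof(struct free_area_new));
		return 0;
	}

Compiling this with gcc -O2 and inspecting the indexing code for an array
of each struct reproduces the imul-versus-shl difference described above.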