
[4/9] mm/page_alloc: optimize code layout for __alloc_pages_bulk

Message ID 20210325114228.27719-5-mgorman@techsingularity.net
State: New
Series: Introduce a bulk order-0 page allocator with two in-tree users

Commit Message

Mel Gorman March 25, 2021, 11:42 a.m. UTC
From: Jesper Dangaard Brouer <brouer@redhat.com>

Looking at perf-report and ASM-code for __alloc_pages_bulk() it is clear
that the generated code layout is suboptimal. The compiler guesses wrong
and places unlikely code at the beginning. Due to the use of the
WARN_ON_ONCE() macro, a UD2 asm instruction is added to the code, which
confuses the I-cache prefetcher in the CPU.

[mgorman: Minor changes and rebasing]
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
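
The layout concern above can be explored outside the kernel with a small,
purely illustrative experiment (not kernel code): the two functions below
differ only in the __builtin_expect()-based unlikely() hint, and building
with `gcc -O2 -S` lets you compare how each error path is placed. Exact
block ordering depends on the compiler and its version.

/* layout_demo.c: illustrative only, not kernel code.
 * Build with `gcc -O2 -S layout_demo.c` and compare how the two
 * functions lay out their error path relative to the fast path;
 * the unlikely() hint is the only difference between them.
 */
#include <stdio.h>

#define unlikely(x)	__builtin_expect(!!(x), 0)

static void report_bad_request(int nr_pages)
{
	fprintf(stderr, "bad nr_pages: %d\n", nr_pages);
}

int guard_with_hint(int nr_pages)
{
	if (unlikely(nr_pages <= 0)) {		/* hinted as the cold path */
		report_bad_request(nr_pages);
		return 0;
	}
	return nr_pages * 2;			/* pretend fast path */
}

int guard_without_hint(int nr_pages)
{
	if (nr_pages <= 0) {			/* no hint for the compiler */
		report_bad_request(nr_pages);
		return 0;
	}
	return nr_pages * 2;			/* pretend fast path */
}

int main(void)
{
	return guard_with_hint(2) + guard_without_hint(2) == 8 ? 0 : 1;
}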

Comments

Matthew Wilcox March 25, 2021, 12:12 p.m. UTC | #1
On Thu, Mar 25, 2021 at 11:42:23AM +0000, Mel Gorman wrote:
>  
> -	if (WARN_ON_ONCE(nr_pages <= 0))
> +	if (unlikely(nr_pages <= 0))
>  		return 0;

If we made nr_pages unsigned, we wouldn't need this check at all (ok,
we'd still need to figure out what to do with 0).  But then, if a user
inadvertently passes in -ENOMEM, we'll try to allocate 4 billion pages.
So maybe we should check it.  Gah, API design is hard.
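
The failure mode described above is easy to demonstrate in a standalone
program (ENOMEM is 12 on Linux, so -ENOMEM reinterpreted as a 32-bit
unsigned count becomes roughly 4 billion):

/* Standalone illustration: if nr_pages were unsigned and a caller
 * accidentally passed a negative errno, the request would silently
 * turn into an enormous page count.
 */
#include <errno.h>
#include <stdio.h>

int main(void)
{
	int ret = -ENOMEM;		/* e.g. propagated from a helper */
	unsigned int nr_pages = ret;	/* implicit signed-to-unsigned conversion */

	printf("requested pages: %u\n", nr_pages);	/* 4294967284 on Linux */
	return 0;
}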
Vlastimil Babka April 12, 2021, 10:41 a.m. UTC | #2
On 3/25/21 12:42 PM, Mel Gorman wrote:
> From: Jesper Dangaard Brouer <brouer@redhat.com>
> 
> Looking at perf-report and ASM-code for __alloc_pages_bulk() it is clear
> that the generated code layout is suboptimal. The compiler guesses wrong
> and places unlikely code at the beginning. Due to the use of the
> WARN_ON_ONCE() macro, a UD2 asm instruction is added to the code, which
> confuses the I-cache prefetcher in the CPU.

Hm that's weird, WARN_ON_ONCE() uses unlikely() too, so the UD2 should end up in
the out-of-fast-path part?
But anyway.
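
For context, a rough standalone sketch of the point being made (not the
real kernel macro, which differs across versions and architectures): the
check inside WARN_ON_ONCE() is indeed wrapped in unlikely(), but on x86
the warning itself boils down to inline asm emitting a UD2 plus bug-table
metadata, so the trap bytes still end up inside the enclosing function
body.

/* Rough, simplified sketch of the mechanism under discussion; the real
 * kernel macro differs in detail. The branch is unlikely(), yet the ud2
 * bytes are emitted inline (x86-only inline asm, shown for illustration).
 */
#define unlikely(x)	__builtin_expect(!!(x), 0)

#define WARN_ON_ONCE_SKETCH(cond) ({		\
	int __ret = !!(cond);			\
	if (unlikely(__ret))			\
		__asm__ __volatile__("ud2");	\
	unlikely(__ret);			\
})

int guarded(int nr_pages)
{
	if (WARN_ON_ONCE_SKETCH(nr_pages <= 0))
		return 0;
	return nr_pages;
}

int main(void)
{
	return guarded(4) == 4 ? 0 : 1;	/* does not take the trap path */
}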

> [mgorman: Minor changes and rebasing]
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-By: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index be1e33a4df39..1ec18121268b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5001,7 +5001,7 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
>  	unsigned int alloc_flags;
>  	int nr_populated = 0;
>  
> -	if (WARN_ON_ONCE(nr_pages <= 0))
> +	if (unlikely(nr_pages <= 0))
>  		return 0;
>  
>  	/*
> @@ -5048,7 +5048,7 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
>  	 * If there are no allowed local zones that meets the watermarks then
>  	 * try to allocate a single page and reclaim if necessary.
>  	 */
> -	if (!zone)
> +	if (unlikely(!zone))
>  		goto failed;
>  
>  	/* Attempt the batch allocation */
> @@ -5066,7 +5066,7 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
>  
>  		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
>  								pcp, pcp_list);
> -		if (!page) {
> +		if (unlikely(!page)) {
>  			/* Try and get at least one page */
>  			if (!nr_populated)
>  				goto failed_irq;
>

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index be1e33a4df39..1ec18121268b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5001,7 +5001,7 @@  int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	unsigned int alloc_flags;
 	int nr_populated = 0;
 
-	if (WARN_ON_ONCE(nr_pages <= 0))
+	if (unlikely(nr_pages <= 0))
 		return 0;
 
 	/*
@@ -5048,7 +5048,7 @@  int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	 * If there are no allowed local zones that meets the watermarks then
 	 * try to allocate a single page and reclaim if necessary.
 	 */
-	if (!zone)
+	if (unlikely(!zone))
 		goto failed;
 
 	/* Attempt the batch allocation */
@@ -5066,7 +5066,7 @@  int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 
 		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
 								pcp, pcp_list);
-		if (!page) {
+		if (unlikely(!page)) {
 			/* Try and get at least one page */
 			if (!nr_populated)
 				goto failed_irq;