[net-next,6/6] mvneta: recycle buffers

Message ID 20210322170301.26017-7-mcroce@linux.microsoft.com
State Superseded
Series page_pool: recycle buffers

Commit Message

Matteo Croce March 22, 2021, 5:03 p.m. UTC
From: Matteo Croce <mcroce@microsoft.com>

Use the new recycling API for page_pool.
In a drop rate test, the packet rate increased by 10%,
from 269 Kpps to 296 Kpps.

perf top on a stock system shows:

Overhead  Shared Object     Symbol
  21.78%  [kernel]          [k] __pi___inval_dcache_area
  21.66%  [mvneta]          [k] mvneta_rx_swbm
   7.00%  [kernel]          [k] kmem_cache_alloc
   6.05%  [kernel]          [k] eth_type_trans
   4.44%  [kernel]          [k] kmem_cache_free.part.0
   3.80%  [kernel]          [k] __netif_receive_skb_core
   3.68%  [kernel]          [k] dev_gro_receive
   3.65%  [kernel]          [k] get_page_from_freelist
   3.43%  [kernel]          [k] page_pool_release_page
   3.35%  [kernel]          [k] free_unref_page

And this is the same output with recycling enabled:

Overhead  Shared Object     Symbol
  24.10%  [kernel]          [k] __pi___inval_dcache_area
  23.02%  [mvneta]          [k] mvneta_rx_swbm
   7.19%  [kernel]          [k] kmem_cache_alloc
   6.50%  [kernel]          [k] eth_type_trans
   4.93%  [kernel]          [k] __netif_receive_skb_core
   4.77%  [kernel]          [k] kmem_cache_free.part.0
   3.93%  [kernel]          [k] dev_gro_receive
   3.03%  [kernel]          [k] build_skb
   2.91%  [kernel]          [k] page_pool_put_page
   2.85%  [kernel]          [k] __xdp_return

The test was done with mausezahn on the TX side with 64-byte raw
Ethernet frames.
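
A representative invocation (illustrative only; the exact options used
in the test were not recorded here):

    # mausezahn: loop forever (-c 0), raw frames padded to 64 bytes (-p 64)
    mz eth0 -c 0 -p 64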

Signed-off-by: Matteo Croce <mcroce@microsoft.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Jesper Dangaard Brouer March 23, 2021, 3:06 p.m. UTC | #1
On Mon, 22 Mar 2021 18:03:01 +0100
Matteo Croce <mcroce@linux.microsoft.com> wrote:

> From: Matteo Croce <mcroce@microsoft.com>
> 
> Use the new recycling API for page_pool.
> In a drop rate test, the packet rate increased by 10%,
> from 269 Kpps to 296 Kpps.
> 
> perf top on a stock system shows:
> 
> Overhead  Shared Object     Symbol
>   21.78%  [kernel]          [k] __pi___inval_dcache_area
>   21.66%  [mvneta]          [k] mvneta_rx_swbm
>    7.00%  [kernel]          [k] kmem_cache_alloc
>    6.05%  [kernel]          [k] eth_type_trans
>    4.44%  [kernel]          [k] kmem_cache_free.part.0
>    3.80%  [kernel]          [k] __netif_receive_skb_core
>    3.68%  [kernel]          [k] dev_gro_receive
>    3.65%  [kernel]          [k] get_page_from_freelist
>    3.43%  [kernel]          [k] page_pool_release_page
>    3.35%  [kernel]          [k] free_unref_page
> 
> And this is the same output with recycling enabled:
> 
> Overhead  Shared Object     Symbol
>   24.10%  [kernel]          [k] __pi___inval_dcache_area
>   23.02%  [mvneta]          [k] mvneta_rx_swbm
>    7.19%  [kernel]          [k] kmem_cache_alloc
>    6.50%  [kernel]          [k] eth_type_trans
>    4.93%  [kernel]          [k] __netif_receive_skb_core
>    4.77%  [kernel]          [k] kmem_cache_free.part.0
>    3.93%  [kernel]          [k] dev_gro_receive
>    3.03%  [kernel]          [k] build_skb
>    2.91%  [kernel]          [k] page_pool_put_page
>    2.85%  [kernel]          [k] __xdp_return
> 
> The test was done with mausezahn on the TX side with 64-byte raw
> Ethernet frames.
> 
> Signed-off-by: Matteo Croce <mcroce@microsoft.com>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index a635cf84608a..8b3250394703 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -2332,7 +2332,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  	if (!skb)
>  		return ERR_PTR(-ENOMEM);
>  
> -	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
> +	skb_mark_for_recycle(skb, virt_to_page(xdp->data), &xdp->rxq->mem);
>  
>  	skb_reserve(skb, xdp->data - xdp->data_hard_start);
>  	skb_put(skb, xdp->data_end - xdp->data);
> @@ -2344,7 +2344,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
>  				skb_frag_page(frag), skb_frag_off(frag),
>  				skb_frag_size(frag), PAGE_SIZE);
> -		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
> +		skb_mark_for_recycle(skb, skb_frag_page(frag), &xdp->rxq->mem);
>  	}
>  
>  	return skb;

This causes skb_mark_for_recycle() to set 'skb->pp_recycle=1' multiple
times for the same SKB.  (copy-pasted function below signature to help
reviewers).

This makes me question whether we need an API for setting this per page
fragment, or whether skb_mark_for_recycle() needs to walk the page
fragments in the SKB and set the info stored in the page for each.
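
For reference, the helper under discussion (introduced earlier in this
series) is roughly the following. This is a sketch reconstructed from the
thread (it sets the skb flag and stores the mem info in the page); see
the series' skbuff patch for the exact definition:

	static inline void skb_mark_for_recycle(struct sk_buff *skb,
						struct page *page,
						struct xdp_mem_info *mem)
	{
		/* Tell the skb free path this skb may carry page_pool pages */
		skb->pp_recycle = 1;
		/* Record in the page which pool/mem type it came from */
		page_pool_store_mem_info(page, mem);
	}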
Lorenzo Bianconi March 24, 2021, 9:28 a.m. UTC | #2
[...]
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index a635cf84608a..8b3250394703 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -2332,7 +2332,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> >  	if (!skb)
> >  		return ERR_PTR(-ENOMEM);
> >  
> > -	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
> > +	skb_mark_for_recycle(skb, virt_to_page(xdp->data), &xdp->rxq->mem);
> >  
> >  	skb_reserve(skb, xdp->data - xdp->data_hard_start);
> >  	skb_put(skb, xdp->data_end - xdp->data);
> > @@ -2344,7 +2344,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> >  		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
> >  				skb_frag_page(frag), skb_frag_off(frag),
> >  				skb_frag_size(frag), PAGE_SIZE);
> > -		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
> > +		skb_mark_for_recycle(skb, skb_frag_page(frag), &xdp->rxq->mem);
> >  	}
> >  
> >  	return skb;
> 
> This causes skb_mark_for_recycle() to set 'skb->pp_recycle=1' multiple
> times for the same SKB.  (copy-pasted function below signature to help
> reviewers).
> 
> This makes me question whether we need an API for setting this per page
> fragment, or whether skb_mark_for_recycle() needs to walk the page
> fragments in the SKB and set the info stored in the page for each.

Considering just performance, I guess it is better to open-code it here,
since the driver already performs a loop over the fragments to build the
skb, but this approach is quite risky and I would prefer to have a single
utility routine take care of the linear area + fragments. What do you
think?
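
Something like the following, perhaps (an untested sketch; the name
skb_mark_for_recycle_frags() is made up here for illustration):

	static inline void skb_mark_for_recycle_frags(struct sk_buff *skb,
						      struct xdp_buff *xdp)
	{
		int i;

		/* linear area */
		skb_mark_for_recycle(skb, virt_to_page(xdp->data),
				     &xdp->rxq->mem);

		/* every page fragment already attached to the skb */
		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
			skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

			skb_mark_for_recycle(skb, skb_frag_page(frag),
					     &xdp->rxq->mem);
		}
	}

The driver would then make one call after attaching all the fragments,
instead of marking each page inline.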

Regards,
Lorenzo

> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
Ilias Apalodimas March 24, 2021, 9:48 p.m. UTC | #3
On Wed, Mar 24, 2021 at 10:28:35AM +0100, Lorenzo Bianconi wrote:
> [...]
> > > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > > index a635cf84608a..8b3250394703 100644
> > > --- a/drivers/net/ethernet/marvell/mvneta.c
> > > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > > @@ -2332,7 +2332,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> > >  	if (!skb)
> > >  		return ERR_PTR(-ENOMEM);
> > >  
> > > -	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
> > > +	skb_mark_for_recycle(skb, virt_to_page(xdp->data), &xdp->rxq->mem);
> > >  
> > >  	skb_reserve(skb, xdp->data - xdp->data_hard_start);
> > >  	skb_put(skb, xdp->data_end - xdp->data);
> > > @@ -2344,7 +2344,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> > >  		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
> > >  				skb_frag_page(frag), skb_frag_off(frag),
> > >  				skb_frag_size(frag), PAGE_SIZE);
> > > -		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
> > > +		skb_mark_for_recycle(skb, skb_frag_page(frag), &xdp->rxq->mem);
> > >  	}
> > >  
> > >  	return skb;
> > 
> > This causes skb_mark_for_recycle() to set 'skb->pp_recycle=1' multiple
> > times for the same SKB.  (copy-pasted function below signature to help
> > reviewers).
> > 
> > This makes me question whether we need an API for setting this per page
> > fragment, or whether skb_mark_for_recycle() needs to walk the page
> > fragments in the SKB and set the info stored in the page for each.
> 
> Considering just performance, I guess it is better to open-code it here,
> since the driver already performs a loop over the fragments to build the
> skb, but this approach is quite risky and I would prefer to have a single
> utility routine take care of the linear area + fragments. What do you
> think?
> 

skb_mark_for_recycle() does two things, as you noticed: it sets the
pp_recycle bit on the skb head and updates the struct page information we
need to trigger the recycling.
We could split those and be more explicit, but isn't the current approach
a bit simpler for the driver writer to get right?
I don't think setting a single value to 1 will have any noticeable
performance impact, but we can always test it.
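
If we did split them, the driver side would look something like this
(an illustrative fragment only, not the posted API):

	/* once per skb */
	skb->pp_recycle = 1;

	/* once per page: the linear area ... */
	page_pool_store_mem_info(virt_to_page(xdp->data), &xdp->rxq->mem);

	/* ... and each fragment */
	page_pool_store_mem_info(skb_frag_page(frag), &xdp->rxq->mem);

That is two things for every driver writer to remember instead of one.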

> Regards,
> Lorenzo
> 
> > 
> > -- 
> > Best regards,
> >   Jesper Dangaard Brouer
> >   MSc.CS, Principal Kernel Engineer at Red Hat
> >   LinkedIn: http://www.linkedin.com/in/brouer
> >

Patch

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index a635cf84608a..8b3250394703 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2332,7 +2332,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
 
-	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
+	skb_mark_for_recycle(skb, virt_to_page(xdp->data), &xdp->rxq->mem);
 
 	skb_reserve(skb, xdp->data - xdp->data_hard_start);
 	skb_put(skb, xdp->data_end - xdp->data);
@@ -2344,7 +2344,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 				skb_frag_page(frag), skb_frag_off(frag),
 				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+		skb_mark_for_recycle(skb, skb_frag_page(frag), &xdp->rxq->mem);
 	}
 
 	return skb;