
[0/3,net-next,v2] net: netsec: Add XDP Support

Message ID 1561785805-21647-1-git-send-email-ilias.apalodimas@linaro.org

Message

Ilias Apalodimas June 29, 2019, 5:23 a.m. UTC
This is a respin of https://www.spinics.net/lists/netdev/msg526066.html
Since the page_pool API fixes are merged into net-next, we can now safely
use its DMA mapping capabilities.

The first patch changes the buffer allocation from napi/netdev_alloc_frag()
to the page_pool API. Although this leads to slightly reduced performance
(on raw packet drops only), it lets us use the API for XDP buffer recycling.
Another side effect is a slight increase in memory usage, due to using a
single page per packet.

The second patch adds XDP support to the driver.
A few interesting issues come up due to the single Tx queue.
Locking is needed (to avoid corrupting the Tx queue, since ndo_xdp_xmit
and the normal stack can co-exist). We also need to track the
'buffer type' on Tx and properly free or recycle the packet depending
on its nature.


Changes since RFC:
- Bug fixes from Jesper and Maciej
- Added page pool API to retrieve the DMA direction

Changes since v1:
- Use page_pool_free correctly if xdp_rxq_info_reg() failed

Ilias Apalodimas (3):
  net: netsec: Use page_pool API
  net: page_pool: add helper function for retrieving dma direction
  net: netsec: add XDP support

 drivers/net/ethernet/socionext/Kconfig  |   1 +
 drivers/net/ethernet/socionext/netsec.c | 473 ++++++++++++++++++++----
 include/net/page_pool.h                 |   9 +
 3 files changed, 416 insertions(+), 67 deletions(-)

-- 
2.20.1

Comments

David Miller July 2, 2019, 2:27 a.m. UTC | #1
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>

Date: Sat, 29 Jun 2019 08:23:22 +0300

> This is a respin of https://www.spinics.net/lists/netdev/msg526066.html
> Since page_pool API fixes are merged into net-next we can now safely use
> it's DMA mapping capabilities.
>
> First patch changes the buffer allocation from napi/netdev_alloc_frag()
> to page_pool API. Although this will lead to slightly reduced performance
> (on raw packet drops only) we can use the API for XDP buffer recycling.
> Another side effect is a slight increase in memory usage, due to using a
> single page per packet.
>
> The second patch adds XDP support on the driver.
> There's a bunch of interesting options that come up due to the single
> Tx queue.
> Locking is needed(to avoid messing up the Tx queues since ndo_xdp_xmit
> and the normal stack can co-exist). We also need to track down the
> 'buffer type' for TX and properly free or recycle the packet depending
> on it's nature.
>
> Changes since RFC:
> - Bug fixes from Jesper and Maciej
> - Added page pool API to retrieve the DMA direction
>
> Changes since v1:
> - Use page_pool_free correctly if xdp_rxq_info_reg() failed


Series applied, thanks.

I realize from the discussion on patch #3 there will be follow-ups to this.
Ilias Apalodimas July 2, 2019, 3:18 a.m. UTC | #2
Hi David

[...]
>
> Series applied, thanks.
>
> I realize from the discussion on patch #3 there will be follow-ups to this.

Yes, small cosmetic changes. I'll send them shortly.

Thanks
/Ilias