[v3,net-next,0/7] net: ethernet: ti: cpsw: Add XDP support

Message ID 20190605132009.10734-1-ivan.khoronzhuk@linaro.org

Message

Ivan Khoronzhuk June 5, 2019, 1:20 p.m. UTC
This patchset adds XDP support to the TI cpsw driver and bases it on the
page_pool allocator. It was verified with af_xdp socket drop,
af_xdp l2f, and eBPF XDP_DROP, XDP_REDIRECT, XDP_PASS and XDP_TX.

It was verified with the following configs enabled:
CONFIG_JIT=y
CONFIG_BPFILTER=y
CONFIG_BPF_SYSCALL=y
CONFIG_XDP_SOCKETS=y
CONFIG_BPF_EVENTS=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_BPF_JIT=y
CONFIG_CGROUP_BPF=y

Link to previous v2:
https://lkml.org/lkml/2019/5/30/1315

Regular tests with iperf2 were also run to verify the impact on
regular netstack performance, compared with the base commit:
https://pastebin.com/JSMT0iZ4

v2..v3:
- each rxq and ndev has its own page pool

v1..v2:
- combined xdp_xmit functions
- used page allocation w/o refcnt juggle
- unmapped page for skb netstack
- moved rxq/page pool allocation to open/close pair
- added several preliminary patches:
  net: page_pool: add helper function to retrieve dma addresses
  net: page_pool: add helper function to unmap dma addresses
  net: ethernet: ti: cpsw: use cpsw as drv data
  net: ethernet: ti: cpsw_ethtool: simplify slave loops


Based on net-next/master

Ilias Apalodimas (2):
  net: page_pool: add helper function to retrieve dma addresses
  net: page_pool: add helper function to unmap dma addresses
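For readers outside the thread, the two page_pool helpers being added can be modeled roughly as below. This is a hedged userspace sketch, not the kernel code: the function names follow the patch titles, while the struct layout and bodies are assumptions for illustration only (the real helpers operate on the kernel's struct page and the DMA API).

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for struct page as page_pool uses it: the pool
 * records the DMA mapping of each page it hands out. */
struct page {
	uint64_t dma_addr;	/* set when the pool DMA-maps the page */
};

/* Modeled after "add helper function to retrieve dma addresses":
 * lets a driver fetch the mapping the pool created, instead of
 * tracking it in a separate per-descriptor structure. */
static uint64_t page_pool_get_dma_addr(const struct page *page)
{
	return page->dma_addr;
}

/* Modeled after "add helper function to unmap dma addresses":
 * called before a page leaves the pool's control (e.g. is handed
 * to the regular netstack as an skb), so the mapping is released. */
static void page_pool_unmap_page(struct page *page)
{
	/* The real implementation would call dma_unmap_page() here. */
	page->dma_addr = 0;
}
```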

Ivan Khoronzhuk (5):
  net: ethernet: ti: cpsw: use cpsw as drv data
  net: ethernet: ti: cpsw_ethtool: simplify slave loops
  net: ethernet: ti: davinci_cpdma: add dma mapped submit
  net: ethernet: ti: davinci_cpdma: return handler status
  net: ethernet: ti: cpsw: add XDP support

 drivers/net/ethernet/ti/Kconfig         |   1 +
 drivers/net/ethernet/ti/cpsw.c          | 555 ++++++++++++++++++++----
 drivers/net/ethernet/ti/cpsw_ethtool.c  | 100 ++++-
 drivers/net/ethernet/ti/cpsw_priv.h     |   9 +-
 drivers/net/ethernet/ti/davinci_cpdma.c | 122 ++++--
 drivers/net/ethernet/ti/davinci_cpdma.h |   6 +-
 drivers/net/ethernet/ti/davinci_emac.c  |  18 +-
 include/net/page_pool.h                 |   6 +
 net/core/page_pool.c                    |   7 +
 9 files changed, 685 insertions(+), 139 deletions(-)

-- 
2.17.1

Comments

Jesper Dangaard Brouer June 6, 2019, 8:08 a.m. UTC | #1
On Wed, 05 Jun 2019 12:14:50 -0700 (PDT)
David Miller <davem@davemloft.net> wrote:

> From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
> Date: Wed,  5 Jun 2019 16:20:02 +0300
>
> > This patchset adds XDP support for TI cpsw driver and base it on
> > page_pool allocator. It was verified on af_xdp socket drop,
> > af_xdp l2f, ebpf XDP_DROP, XDP_REDIRECT, XDP_PASS, XDP_TX.
>
> Jesper et al., please give this a good once over.

The issue with merging this is that I recently discovered two bugs in
the page_pool API, when using DMA mappings, which result in missing
DMA-unmaps.  These bugs are not "exposed" yet, but will get exposed
now with this driver.

The two bugs are:

#1: in-flight packet pages can still be on a remote driver's TX queue
while the XDP RX driver manages to unregister the page_pool (waiting one
RCU period is not enough).
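Bug #1 can be illustrated with a toy model: an RCU grace period only guarantees no *new* pages are taken from the pool, not that pages already out on another device's TX ring have returned. One possible shape of a fix, sketched below, is to count in-flight pages and defer teardown until the count reaches zero. This is purely an assumption for illustration; the actual prototype mentioned later in the thread may work differently.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a page_pool whose destruction must wait for
 * in-flight pages, not just for an RCU grace period. */
struct pool {
	int inflight;		/* pages currently out on some TX queue */
	bool unregistered;	/* driver has called unregister */
	bool destroyed;		/* resources actually freed */
};

static void pool_try_destroy(struct pool *p)
{
	/* Safe only once nothing can still return a mapped page. */
	if (p->unregistered && p->inflight == 0)
		p->destroyed = true;
}

static void pool_unregister(struct pool *p)
{
	p->unregistered = true;
	pool_try_destroy(p);	/* deferred if pages are in flight */
}

static void pool_page_returned(struct pool *p)
{
	p->inflight--;
	pool_try_destroy(p);	/* last page back triggers teardown */
}
```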

#2: this patchset also introduces page_pool_unmap_page(), which is
called before an XDP frame travels into the network stack (as no callback
exists, yet).  But the CPUMAP redirect *also* needs to call this, else we
"leak"/miss the DMA-unmap.

I do have a working prototype that fixes these two bugs.  I guess I'm
under pressure to send it to the list soon...

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
David Miller June 6, 2019, 8:56 p.m. UTC | #2
From: Jesper Dangaard Brouer <brouer@redhat.com>
Date: Thu, 6 Jun 2019 10:08:50 +0200

> I do have a working prototype, that fixes these two bugs.  I guess, I'm
> under pressure to send this to the list soon...
So I'm going to mark this CPSW patchset as "deferred" while these bugs are
worked out.