[v4,bpf-next,00/13] mvneta: introduce XDP multi-buffer support

Message ID cover.1601648734.git.lorenzo@kernel.org
Series mvneta: introduce XDP multi-buffer support

Message

Lorenzo Bianconi Oct. 2, 2020, 2:41 p.m. UTC
This series introduces XDP multi-buffer support. The mvneta driver is
the first to support these new "non-linear" xdp_{buff,frame}. Reviewers,
please focus on how these new types of xdp_{buff,frame} packets
traverse the different layers and on the layout design. The BPF-helpers
are deliberately kept simple, as we don't want to expose the internal
layout, so that later changes remain possible.

For now, to keep the design simple and to maintain performance, the XDP
BPF-prog (still) only has access to the first buffer. Adding payload
access across multiple buffers is left for later (another patchset).
This patchset should still allow for these future extensions. The goal
is to lift the MTU restriction that comes with XDP while maintaining
the same performance as before.

The main idea for the new multi-buffer layout is to reuse the same
layout used for non-linear SKBs. This relies on the "skb_shared_info"
struct at the end of the first buffer to link together subsequent
buffers. Keeping the layout compatible with SKBs is also done to ease
and speed up creating an SKB from an xdp_{buff,frame}. Converting an
xdp_frame to an SKB and delivering it to the network stack is shown in
the cpumap code (patch 13/13).

A multi-buffer bit (mb) has been introduced in the xdp_{buff,frame} structure
to notify the bpf/network layer whether this is an xdp multi-buffer frame
(mb = 1) or not (mb = 0).
The mb bit will be set by an xdp multi-buffer capable driver only for
non-linear frames, maintaining the capability to receive linear frames
without any extra cost, since the skb_shared_info structure at the end
of the first buffer will be initialized only if mb is set.
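
As a rough illustration (not the actual mvneta code), the RX side of a
multi-buffer capable driver does something along these lines for every
additional descriptor belonging to the same frame; rx_desc_page(),
rx_desc_off() and rx_desc_len() are made-up placeholders for the
driver-specific descriptor accessors, while xdp_data_hard_end() is the
existing helper that points at the tailroom of the first buffer:

    /* Illustrative sketch only: the first descriptor fills xdp->data as
     * usual, every additional descriptor of the frame is appended as a
     * frag in the skb_shared_info area at the end of the first buffer.
     */
    struct skb_shared_info *sinfo;
    skb_frag_t *frag;

    sinfo = (struct skb_shared_info *)xdp_data_hard_end(xdp);
    if (!xdp->mb) {
        sinfo->nr_frags = 0;    /* first extra buffer of this frame */
        xdp->mb = 1;            /* mark the xdp_buff as multi-buffer */
    }
    frag = &sinfo->frags[sinfo->nr_frags++];
    __skb_frag_set_page(frag, rx_desc_page(rx_desc));
    skb_frag_off_set(frag, rx_desc_off(rx_desc));
    skb_frag_size_set(frag, rx_desc_len(rx_desc));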

In order to provide the eBPF program with some metadata about the non-linear
xdp_{buff,frame}, we introduced 2 bpf helpers (a short usage sketch follows
the list):
- bpf_xdp_get_frags_count:
  get the number of fragments for a given xdp multi-buffer.
- bpf_xdp_get_frags_total_size:
  get the total size of fragments for a given xdp multi-buffer.
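
For illustration, a minimal usage sketch of the two helpers (assuming the
prototypes added by this series to the bpf UAPI header; the 3-page limit
below is an arbitrary example, not something the series enforces):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_mb_example(struct xdp_md *ctx)
    {
        int frags = bpf_xdp_get_frags_count(ctx);
        int frags_size = bpf_xdp_get_frags_total_size(ctx);

        /* e.g. drop multi-buffer frames carrying too much paged data */
        if (frags > 0 && frags_size > 3 * 4096)
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";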

Typical use cases for this series are:
- Jumbo-frames
- Packet header split (please see Google’s use-case @ NetDevConf 0x14, [0])
- TSO

More info about the main idea behind this approach can be found here [1][2].

We carried out some throughput tests in a standard linear frame scenario in order
to verify we did not introduce any performance regression adding xdp multi-buff
support to mvneta:

offered load is ~ 1000Kpps, packet size is 64B, mvneta descriptor size is one PAGE

commit: 879456bedbe5 ("net: mvneta: avoid possible cache misses in mvneta_rx_swbm")
- xdp-pass:      ~162Kpps
- xdp-drop:      ~701Kpps
- xdp-tx:        ~185Kpps
- xdp-redirect:  ~202Kpps

mvneta xdp multi-buff:
- xdp-pass:      ~163Kpps
- xdp-drop:      ~739Kpps
- xdp-tx:        ~182Kpps
- xdp-redirect:  ~202Kpps

Changes since v3:
- rebase ontop of bpf-next
- add patch 10/13 to copy back paged data from a xdp multi-buff frame to
  userspace buffer for xdp multi-buff selftests

Changes since v2:
- add throughput measurements
- drop bpf_xdp_adjust_mb_header bpf helper
- introduce selftest for xdp multibuffer
- addressed comments on bpf_xdp_get_frags_count
- introduce xdp multi-buff support to cpumaps

Changes since v1:
- Fix use-after-free in xdp_return_{buff/frame}
- Introduce bpf helpers
- Introduce xdp_mb sample program
- access skb_shared_info->nr_frags only on the last fragment

Changes since RFC:
- squash multi-buffer bit initialization in a single patch
- add mvneta non-linear XDP buff support for tx side

[0] https://netdevconf.info/0x14/session.html?talk-the-path-to-tcp-4k-mtu-and-rx-zerocopy
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
[2] https://netdevconf.info/0x14/session.html?tutorial-add-XDP-support-to-a-NIC-driver (XDP multi-buffers section)

Lorenzo Bianconi (11):
  xdp: introduce mb in xdp_buff/xdp_frame
  xdp: initialize xdp_buff mb bit to 0 in all XDP drivers
  net: mvneta: update mb bit before passing the xdp buffer to eBPF layer
  xdp: add multi-buff support to xdp_return_{buff/frame}
  net: mvneta: add multi buffer support to XDP_TX
  bpf: move user_size out of bpf_test_init
  bpf: introduce multibuff support to bpf_prog_test_run_xdp()
  bpf: test_run: add skb_shared_info pointer in bpf_test_finish
    signature
  bpf: add xdp multi-buffer selftest
  net: mvneta: enable jumbo frames for XDP
  bpf: cpumap: introduce xdp multi-buff support

Sameeh Jubran (2):
  bpf: introduce bpf_xdp_get_frags_{count, total_size} helpers
  samples/bpf: add bpf program that uses xdp mb helpers

 drivers/net/ethernet/amazon/ena/ena_netdev.c  |   1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |   1 +
 .../net/ethernet/cavium/thunder/nicvf_main.c  |   1 +
 .../net/ethernet/freescale/dpaa2/dpaa2-eth.c  |   1 +
 drivers/net/ethernet/intel/i40e/i40e_txrx.c   |   1 +
 drivers/net/ethernet/intel/ice/ice_txrx.c     |   1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   1 +
 .../net/ethernet/intel/ixgbevf/ixgbevf_main.c |   1 +
 drivers/net/ethernet/marvell/mvneta.c         | 131 +++++++------
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   |   1 +
 drivers/net/ethernet/mellanox/mlx4/en_rx.c    |   1 +
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |   1 +
 .../ethernet/netronome/nfp/nfp_net_common.c   |   1 +
 drivers/net/ethernet/qlogic/qede/qede_fp.c    |   1 +
 drivers/net/ethernet/sfc/rx.c                 |   1 +
 drivers/net/ethernet/socionext/netsec.c       |   1 +
 drivers/net/ethernet/ti/cpsw.c                |   1 +
 drivers/net/ethernet/ti/cpsw_new.c            |   1 +
 drivers/net/hyperv/netvsc_bpf.c               |   1 +
 drivers/net/tun.c                             |   2 +
 drivers/net/veth.c                            |   1 +
 drivers/net/virtio_net.c                      |   2 +
 drivers/net/xen-netfront.c                    |   1 +
 include/net/xdp.h                             |  31 ++-
 include/uapi/linux/bpf.h                      |  14 ++
 kernel/bpf/cpumap.c                           |  45 +----
 net/bpf/test_run.c                            | 118 ++++++++++--
 net/core/dev.c                                |   1 +
 net/core/filter.c                             |  42 ++++
 net/core/xdp.c                                | 104 ++++++++++
 samples/bpf/Makefile                          |   3 +
 samples/bpf/xdp_mb_kern.c                     |  68 +++++++
 samples/bpf/xdp_mb_user.c                     | 182 ++++++++++++++++++
 tools/include/uapi/linux/bpf.h                |  14 ++
 .../testing/selftests/bpf/prog_tests/xdp_mb.c |  79 ++++++++
 .../selftests/bpf/progs/test_xdp_multi_buff.c |  24 +++
 36 files changed, 757 insertions(+), 123 deletions(-)
 create mode 100644 samples/bpf/xdp_mb_kern.c
 create mode 100644 samples/bpf/xdp_mb_user.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/xdp_mb.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_multi_buff.c

Comments

John Fastabend Oct. 2, 2020, 3:25 p.m. UTC | #1
Lorenzo Bianconi wrote:
> This series introduce XDP multi-buffer support. The mvneta driver is
> the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
> please focus on how these new types of xdp_{buff,frame} packets
> traverse the different layers and the layout design. It is on purpose
> that BPF-helpers are kept simple, as we don't want to expose the
> internal layout to allow later changes.
> 
> For now, to keep the design simple and to maintain performance, the XDP
> BPF-prog (still) only have access to the first-buffer. It is left for
> later (another patchset) to add payload access across multiple buffers.
> This patchset should still allow for these future extensions. The goal
> is to lift the XDP MTU restriction that comes with XDP, but maintain
> same performance as before.
> 
> The main idea for the new multi-buffer layout is to reuse the same
> layout used for non-linear SKB. This rely on the "skb_shared_info"
> struct at the end of the first buffer to link together subsequent
> buffers. Keeping the layout compatible with SKBs is also done to ease
> and speedup creating an SKB from an xdp_{buff,frame}. Converting
> xdp_frame to SKB and deliver it to the network stack is shown in cpumap
> code (patch 13/13).

Using the end of the buffer for the skb_shared_info struct is going to
become driver API so unwinding it if it proves to be a performance issue
is going to be ugly. So same question as before, for the use case where
we receive packet and do XDP_TX with it how do we avoid cache miss
overhead? This is not just a hypothetical use case, the Facebook
load balancer is doing this as well as Cilium and allowing this with
multi-buffer packets >1500B would be useful.

Can we write the skb_shared_info lazily? It should only be needed once
we know the packet is going up the stack to some place that needs the
info. Which we could learn from the return code of the XDP program.

> 
> A multi-buffer bit (mb) has been introduced in xdp_{buff,frame} structure
> to notify the bpf/network layer if this is a xdp multi-buffer frame (mb = 1)
> or not (mb = 0).
> The mb bit will be set by a xdp multi-buffer capable driver only for
> non-linear frames maintaining the capability to receive linear frames
> without any extra cost since the skb_shared_info structure at the end
> of the first buffer will be initialized only if mb is set.

Thanks above is clearer.

> 
> In order to provide to userspace some metadata about the non-linear
> xdp_{buff,frame}, we introduced 2 bpf helpers:
> - bpf_xdp_get_frags_count:
>   get the number of fragments for a given xdp multi-buffer.
> - bpf_xdp_get_frags_total_size:
>   get the total size of fragments for a given xdp multi-buffer.

Whats the use case for these? Do you have an example where knowing
the frags count is going to be something a BPF program will use?
Having total size seems interesting but perhaps we should push that
into the metadata so its pulled into the cache if users are going to
be reading it on every packet or something.

> [...]
Lorenzo Bianconi Oct. 2, 2020, 4:06 p.m. UTC | #2
> Lorenzo Bianconi wrote:
> > This series introduce XDP multi-buffer support. The mvneta driver is
> > the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
> > please focus on how these new types of xdp_{buff,frame} packets
> > traverse the different layers and the layout design. It is on purpose
> > that BPF-helpers are kept simple, as we don't want to expose the
> > internal layout to allow later changes.
> > 
> > For now, to keep the design simple and to maintain performance, the XDP
> > BPF-prog (still) only have access to the first-buffer. It is left for
> > later (another patchset) to add payload access across multiple buffers.
> > This patchset should still allow for these future extensions. The goal
> > is to lift the XDP MTU restriction that comes with XDP, but maintain
> > same performance as before.
> > 
> > The main idea for the new multi-buffer layout is to reuse the same
> > layout used for non-linear SKB. This rely on the "skb_shared_info"
> > struct at the end of the first buffer to link together subsequent
> > buffers. Keeping the layout compatible with SKBs is also done to ease
> > and speedup creating an SKB from an xdp_{buff,frame}. Converting
> > xdp_frame to SKB and deliver it to the network stack is shown in cpumap
> > code (patch 13/13).
> 
> Using the end of the buffer for the skb_shared_info struct is going to
> become driver API so unwinding it if it proves to be a performance issue
> is going to be ugly. So same question as before, for the use case where
> we receive packet and do XDP_TX with it how do we avoid cache miss
> overhead? This is not just a hypothetical use case, the Facebook
> load balancer is doing this as well as Cilium and allowing this with
> multi-buffer packets >1500B would be useful.
> 
> Can we write the skb_shared_info lazily? It should only be needed once
> we know the packet is going up the stack to some place that needs the
> info. Which we could learn from the return code of the XDP program.

Hi John,

I agree, I think for XDP_TX use-case it is not strictly necessary to fill the
skb_shared_info. The driver can just keep this info on the stack and use it
inserting the packet back to the DMA ring.
For mvneta I implemented it in this way to keep the code aligned with ndo_xdp_xmit
path since it is a low-end device. I guess we are not introducing any API constraint
for XDP_TX. A high-end device can implement multi-buff for XDP_TX in a different way
in order to avoid the cache miss.

We need to fill the skb_shared_info only when we want to pass the frame to the
network stack (build_skb() can directly reuse skb_shared_info->frags[]) or for
XDP_REDIRECT use-case.

> 
> > 
> > A multi-buffer bit (mb) has been introduced in xdp_{buff,frame} structure
> > to notify the bpf/network layer if this is a xdp multi-buffer frame (mb = 1)
> > or not (mb = 0).
> > The mb bit will be set by a xdp multi-buffer capable driver only for
> > non-linear frames maintaining the capability to receive linear frames
> > without any extra cost since the skb_shared_info structure at the end
> > of the first buffer will be initialized only if mb is set.
> 
> Thanks above is clearer.
> 
> > 
> > In order to provide to userspace some metadata about the non-linear
> > xdp_{buff,frame}, we introduced 2 bpf helpers:
> > - bpf_xdp_get_frags_count:
> >   get the number of fragments for a given xdp multi-buffer.
> > - bpf_xdp_get_frags_total_size:
> >   get the total size of fragments for a given xdp multi-buffer.
> 
> Whats the use case for these? Do you have an example where knowing
> the frags count is going to be something a BPF program will use?
> Having total size seems interesting but perhaps we should push that
> into the metadata so its pulled into the cache if users are going to
> be reading it on every packet or something.

At the moment we do not have any use-case for these helpers (not considering
the sample in the series :)). We introduced them to provide some basic metadata
about the non-linear xdp_frame.
IIRC we decided to introduce some helpers instead of adding this info in xdp_frame
in order to save space on it (for xdp it is essential xdp_frame to fit in a single
cache-line).

Regards,
Lorenzo

> [...]
John Fastabend Oct. 2, 2020, 6:06 p.m. UTC | #3
Lorenzo Bianconi wrote:
> > Lorenzo Bianconi wrote:
> > > This series introduce XDP multi-buffer support. The mvneta driver is
> > > the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
> > > please focus on how these new types of xdp_{buff,frame} packets
> > > traverse the different layers and the layout design. It is on purpose
> > > that BPF-helpers are kept simple, as we don't want to expose the
> > > internal layout to allow later changes.
> > > 
> > > For now, to keep the design simple and to maintain performance, the XDP
> > > BPF-prog (still) only have access to the first-buffer. It is left for
> > > later (another patchset) to add payload access across multiple buffers.
> > > This patchset should still allow for these future extensions. The goal
> > > is to lift the XDP MTU restriction that comes with XDP, but maintain
> > > same performance as before.
> > > 
> > > The main idea for the new multi-buffer layout is to reuse the same
> > > layout used for non-linear SKB. This rely on the "skb_shared_info"
> > > struct at the end of the first buffer to link together subsequent
> > > buffers. Keeping the layout compatible with SKBs is also done to ease
> > > and speedup creating an SKB from an xdp_{buff,frame}. Converting
> > > xdp_frame to SKB and deliver it to the network stack is shown in cpumap
> > > code (patch 13/13).
> > 
> > Using the end of the buffer for the skb_shared_info struct is going to
> > become driver API so unwinding it if it proves to be a performance issue
> > is going to be ugly. So same question as before, for the use case where
> > we receive packet and do XDP_TX with it how do we avoid cache miss
> > overhead? This is not just a hypothetical use case, the Facebook
> > load balancer is doing this as well as Cilium and allowing this with
> > multi-buffer packets >1500B would be useful.
> > 
> > Can we write the skb_shared_info lazily? It should only be needed once
> > we know the packet is going up the stack to some place that needs the
> > info. Which we could learn from the return code of the XDP program.
> 
> Hi John,

Hi, I'll try to join the two threads this one and the one on helpers here
so we don't get too fragmented.

> 
> I agree, I think for XDP_TX use-case it is not strictly necessary to fill the
> > skb_shared_info. The driver can just keep this info on the stack and use it
> inserting the packet back to the DMA ring.
> For mvneta I implemented it in this way to keep the code aligned with ndo_xdp_xmit
> path since it is a low-end device. I guess we are not introducing any API constraint
> for XDP_TX. A high-end device can implement multi-buff for XDP_TX in a different way
> in order to avoid the cache miss.

Agree it would be an implementation detail for XDP_TX except the two helpers added
in this series currently require it to be there.

> 
> > We need to fill the skb_shared_info only when we want to pass the frame to the
> network stack (build_skb() can directly reuse skb_shared_info->frags[]) or for
> XDP_REDIRECT use-case.

It might be good to think about the XDP_REDIRECT case as well then. If the
frags list fit in the metadata/xdp_frame would we expect better
performance?

Looking at skb_shared_info{} that is a rather large structure with many
fields that look unnecessary for XDP_REDIRECT case and only needed when
passing to the stack. Fundamentally, a frag just needs

 struct bio_vec {
     struct page *bv_page;     // 8B
     unsigned int bv_len;      // 4B
     unsigned int bv_offset;   // 4B
 } // 16B

With header split + data we only need a single frag so we could use just
16B. And worse case jumbo frame + header split seems 3 entries would be
enough giving 48B (header plus 3 4k pages). Could we just stick this in
the metadata and make it read only? Then programs that care can read it
and get all the info they need without helpers. I would expect performance
to be better in the XDP_TX and XDP_REDIRECT cases. And copying an extra
worse case 48B in passing to the stack I guess is not measurable given
all the work needed in that path.
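
Purely to sketch that idea (nothing like this exists in the series; the
struct name and layout are invented), the driver could write something
like the following into the metadata area, and programs would find it
via ctx->data_meta just like any other metadata:

 struct xdp_frag_meta {          /* hypothetical, read-only for programs */
     unsigned int nr_frags;      // 4B
     unsigned int total_len;     // 4B
     struct bio_vec frags[3];    // 48B, the worst case discussed above
 };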

> 
> > 
> > > 
> > > A multi-buffer bit (mb) has been introduced in xdp_{buff,frame} structure
> > > to notify the bpf/network layer if this is a xdp multi-buffer frame (mb = 1)
> > > or not (mb = 0).
> > > The mb bit will be set by a xdp multi-buffer capable driver only for
> > > non-linear frames maintaining the capability to receive linear frames
> > > without any extra cost since the skb_shared_info structure at the end
> > > of the first buffer will be initialized only if mb is set.
> > 
> > Thanks above is clearer.
> > 
> > > 
> > > In order to provide to userspace some metadata about the non-linear
> > > xdp_{buff,frame}, we introduced 2 bpf helpers:
> > > - bpf_xdp_get_frags_count:
> > >   get the number of fragments for a given xdp multi-buffer.
> > > - bpf_xdp_get_frags_total_size:
> > >   get the total size of fragments for a given xdp multi-buffer.
> > 
> > Whats the use case for these? Do you have an example where knowing
> > the frags count is going to be something a BPF program will use?
> > Having total size seems interesting but perhaps we should push that
> > into the metadata so its pulled into the cache if users are going to
> > be reading it on every packet or something.
> 
> At the moment we do not have any use-case for these helpers (not considering
> the sample in the series :)). We introduced them to provide some basic metadata
> about the non-linear xdp_frame.
> IIRC we decided to introduce some helpers instead of adding this info in xdp_frame
> in order to save space on it (for xdp it is essential xdp_frame to fit in a single
> cache-line).

Sure, how about in the metadata then? (From other thread I was suggesting putting
the total length in metadata) We could even allow programs to overwrite it if
they wanted if its not used by the stack for anything other than packet length
visibility. Of course users would then need to be a bit careful not to overwrite
it and then read it again expecting the length to be correct. I think from a
users perspective though that would be expected.

> 
> Regards,
> Lorenzo
>
Daniel Borkmann Oct. 2, 2020, 7:53 p.m. UTC | #4
On 10/2/20 5:25 PM, John Fastabend wrote:
> Lorenzo Bianconi wrote:
>> This series introduce XDP multi-buffer support. The mvneta driver is
>> the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
>> please focus on how these new types of xdp_{buff,frame} packets
>> traverse the different layers and the layout design. It is on purpose
>> that BPF-helpers are kept simple, as we don't want to expose the
>> internal layout to allow later changes.
>>
>> For now, to keep the design simple and to maintain performance, the XDP
>> BPF-prog (still) only have access to the first-buffer. It is left for
>> later (another patchset) to add payload access across multiple buffers.
>> This patchset should still allow for these future extensions. The goal
>> is to lift the XDP MTU restriction that comes with XDP, but maintain
>> same performance as before.
>>
>> The main idea for the new multi-buffer layout is to reuse the same
>> layout used for non-linear SKB. This rely on the "skb_shared_info"
>> struct at the end of the first buffer to link together subsequent
>> buffers. Keeping the layout compatible with SKBs is also done to ease
>> and speedup creating an SKB from an xdp_{buff,frame}. Converting
>> xdp_frame to SKB and deliver it to the network stack is shown in cpumap
>> code (patch 13/13).
> 
> Using the end of the buffer for the skb_shared_info struct is going to
> become driver API so unwinding it if it proves to be a performance issue
> is going to be ugly. So same question as before, for the use case where
> we receive packet and do XDP_TX with it how do we avoid cache miss
> overhead? This is not just a hypothetical use case, the Facebook
> load balancer is doing this as well as Cilium and allowing this with
> multi-buffer packets >1500B would be useful.
[...]

Fully agree. My other question would be if someone else right now is in the process
of implementing this scheme for a 40G+ NIC? My concern is the numbers below are rather
on the lower end of the spectrum, so I would like to see a comparison of XDP as-is
today vs XDP multi-buff on a higher end NIC so that we have a picture of how well the
currently designed scheme works there and into which performance issues we'll run e.g.
under a typical XDP L4 load balancer scenario with XDP_TX. I think this would be crucial
before the driver API becomes 'sort of' set in stone where others start adapting
it and changing design becomes painful. Do ena folks have an implementation ready as
well? And what about virtio_net, for example, anyone committing there too? Typically
for such features to land is to require at least 2 drivers implementing it.

>> Typical use cases for this series are:
>> - Jumbo-frames
>> - Packet header split (please see Google's use-case @ NetDevConf 0x14, [0])
>> - TSO
>>
>> More info about the main idea behind this approach can be found here [1][2].
>>
>> We carried out some throughput tests in a standard linear frame scenario in order
>> to verify we did not introduced any performance regression adding xdp multi-buff
>> support to mvneta:
>>
>> offered load is ~ 1000Kpps, packet size is 64B, mvneta descriptor size is one PAGE
>>
>> commit: 879456bedbe5 ("net: mvneta: avoid possible cache misses in mvneta_rx_swbm")
>> - xdp-pass:      ~162Kpps
>> - xdp-drop:      ~701Kpps
>> - xdp-tx:        ~185Kpps
>> - xdp-redirect:  ~202Kpps
>>
>> mvneta xdp multi-buff:
>> - xdp-pass:      ~163Kpps
>> - xdp-drop:      ~739Kpps
>> - xdp-tx:        ~182Kpps
>> - xdp-redirect:  ~202Kpps
[...]
Jesper Dangaard Brouer Oct. 5, 2020, 9:52 a.m. UTC | #5
On Fri, 02 Oct 2020 11:06:12 -0700
John Fastabend <john.fastabend@gmail.com> wrote:

> Lorenzo Bianconi wrote:
> > > Lorenzo Bianconi wrote:
> > > > [...]
> > > 
> > > Using the end of the buffer for the skb_shared_info struct is going to
> > > become driver API so unwinding it if it proves to be a performance issue
> > > is going to be ugly. So same question as before, for the use case where
> > > we receive packet and do XDP_TX with it how do we avoid cache miss
> > > overhead? This is not just a hypothetical use case, the Facebook
> > > load balancer is doing this as well as Cilium and allowing this with
> > > multi-buffer packets >1500B would be useful.
> > > 
> > > Can we write the skb_shared_info lazily? It should only be needed once
> > > we know the packet is going up the stack to some place that needs the
> > > info. Which we could learn from the return code of the XDP program.
> > 
> > Hi John,
> 
> Hi, I'll try to join the two threads this one and the one on helpers here
> so we don't get too fragmented.
> 
> > 
> > I agree, I think for XDP_TX use-case it is not strictly necessary to fill the
> > skb_shared_info. The driver can just keep this info on the stack and use it
> > inserting the packet back to the DMA ring.
> > For mvneta I implemented it in this way to keep the code aligned with ndo_xdp_xmit
> > path since it is a low-end device. I guess we are not introducing any API constraint
> > for XDP_TX. A high-end device can implement multi-buff for XDP_TX in a different way
> > in order to avoid the cache miss.
> 
> Agree it would be an implementation detail for XDP_TX except the two
> helpers added in this series currently require it to be there.

That is a good point.  If you look at the details, the helpers use
xdp_buff->mb bit to guard against accessing the "shared_info"
cacheline. Thus, for the normal single frame case XDP_TX should not see
a slowdown.  Do we really need to optimize XDP_TX multi-frame case(?)
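
For reference, the guard in question looks roughly like this (a sketch in
the style of the series' helpers, not the exact patch code):

    BPF_CALL_1(bpf_xdp_get_frags_count, struct xdp_buff *, xdp)
    {
        struct skb_shared_info *sinfo;

        /* Linear frame: never touch the shared_info cache-line. */
        if (!xdp->mb)
            return 0;

        sinfo = (struct skb_shared_info *)xdp_data_hard_end(xdp);
        return sinfo->nr_frags;
    }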


> > 
> > We need to fill the skb_shared_info only when we want to pass the frame to the
> > network stack (build_skb() can directly reuse skb_shared_info->frags[]) or for
> > XDP_REDIRECT use-case.
> 
> It might be good to think about the XDP_REDIRECT case as well then. If the
> frags list fit in the metadata/xdp_frame would we expect better
> performance?

I don't like to use space in xdp_frame for this. (1) We (Ahern and I)
are planning to use the space in xdp_frame for RX-csum + RX-hash +vlan,
which will be more common (e.g. all packets will have HW RX+csum).  (2)
I consider XDP multi-buffer the exception case, that will not be used
in most cases, so why reserve space for that in this cache-line.

IMHO we CANNOT allow any slowdown for existing XDP use-cases, but IMHO
XDP multi-buffer use-cases are allowed to run "slower".


> Looking at skb_shared_info{} that is a rather large structure with many


A cache-line detail about skb_shared_info: The first frags[0] member is
in the first cache-line.  Meaning that it is still fast to have xdp
frames with 1 extra buffer.

> fields that look unnecessary for XDP_REDIRECT case and only needed when
> passing to the stack.

Yes, I think we can use first cache-line of skb_shared_info more
optimally (via defining a xdp_shared_info struct). But I still want us
to use this specific cache-line.  Let me explain why below. (Avoiding
cache-line misses is all about the details, so I hope you can follow).

Hopefully most driver developers understand/knows this.  In the RX-loop
the current RX-descriptor have a status that indicate there are more
frame, usually expressed as non-EOP (End-Of-Packet).  Thus, a driver
can start a prefetchw of this shared_info cache-line, prior to
processing the RX-desc that describe the multi-buffer.
 (Remember this shared_info is constructed prior to calling XDP and any
XDP_TX action, thus the XDP prog should not see a cache-line miss when
using the BPF-helper to read shared_info area).
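
Concretely, the idea is something like this in the driver RX loop
(illustrative pseudo-code, the descriptor helper name is made up):

    /* Warm up the shared_info cache-line as soon as the descriptor says
     * more buffers follow (non-EOP), well before XDP or XDP_TX touch it.
     */
    if (!rx_desc_is_eop(rx_desc))
        prefetchw(xdp_data_hard_end(xdp));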


> Fundamentally, a frag just needs
> 
>  struct bio_vec {
>      struct page *bv_page;     // 8B
>      unsigned int bv_len;      // 4B
>      unsigned int bv_offset;   // 4B
>  } // 16B
> 
> With header split + data we only need a single frag so we could use just
> 16B. And worse case jumbo frame + header split seems 3 entries would be
> enough giving 48B (header plus 3 4k pages).

For jumbo-frame 9000 MTU 2 entries might be enough, as we also have
room in the first buffer (((9000-(4096-256-320))/4096 = 1.33789).

The problem is that we need to support TSO (TCP Segmentation Offload)
use-case, which can have more frames. Thus, 3 entries will not be
enough.

> Could we just stick this in the metadata and make it read only? Then
> programs that care can read it and get all the info they need without
> helpers.

I don't see how that is possible. (1) the metadata area is only 32
bytes, (2) when freeing an xdp_frame the kernel need to know the layout
as these points will be free'ed.

> I would expect performance to be better in the XDP_TX and
> XDP_REDIRECT cases. And copying an extra worse case 48B in passing to
> the stack I guess is not measurable given all the work needed in that
> path.

I do agree, that when passing to netstack we can do a transformation
from xdp_shared_info to skb_shared_info with a fairly small cost.  (The
TSO case would require more copying).

Notice that allocating an SKB, will always clear the first 32 bytes of
skb_shared_info.  If the XDP driver-code path have done the prefetch
as described above, then we should see a speedup for netstack delivery.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
Tirthendu Sarkar Oct. 5, 2020, 3:50 p.m. UTC | #6
On 10/2/20 5:25 PM, John Fastabend wrote:
>>[..] Typically for such features to land is to require at least 2 drivers
>>implementing it.


I am working on making changes to Intel NIC drivers for XDP multi buffer based
on these patches. Respective patches will be posted once ready.
John Fastabend Oct. 5, 2020, 9:22 p.m. UTC | #7
Jesper Dangaard Brouer wrote:
> On Fri, 02 Oct 2020 11:06:12 -0700
> John Fastabend <john.fastabend@gmail.com> wrote:
> 
> > Lorenzo Bianconi wrote:
> > > > Lorenzo Bianconi wrote:  
> > > > > This series introduce XDP multi-buffer support. The mvneta driver is
> > > > > the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
> > > > > please focus on how these new types of xdp_{buff,frame} packets
> > > > > traverse the different layers and the layout design. It is on purpose
> > > > > that BPF-helpers are kept simple, as we don't want to expose the
> > > > > internal layout to allow later changes.
> > > > > 
> > > > > For now, to keep the design simple and to maintain performance, the XDP
> > > > > BPF-prog (still) only have access to the first-buffer. It is left for
> > > > > later (another patchset) to add payload access across multiple buffers.
> > > > > This patchset should still allow for these future extensions. The goal
> > > > > is to lift the XDP MTU restriction that comes with XDP, but maintain
> > > > > same performance as before.
> > > > > 
> > > > > The main idea for the new multi-buffer layout is to reuse the same
> > > > > layout used for non-linear SKB. This rely on the "skb_shared_info"
> > > > > struct at the end of the first buffer to link together subsequent
> > > > > buffers. Keeping the layout compatible with SKBs is also done to ease
> > > > > and speedup creating an SKB from an xdp_{buff,frame}. Converting
> > > > > xdp_frame to SKB and deliver it to the network stack is shown in cpumap
> > > > > code (patch 13/13).  
> > > > 
> > > > Using the end of the buffer for the skb_shared_info struct is going to
> > > > become driver API so unwinding it if it proves to be a performance issue
> > > > is going to be ugly. So same question as before, for the use case where
> > > > we receive packet and do XDP_TX with it how do we avoid cache miss
> > > > overhead? This is not just a hypothetical use case, the Facebook
> > > > load balancer is doing this as well as Cilium and allowing this with
> > > > multi-buffer packets >1500B would be useful.
> > > > 
> > > > Can we write the skb_shared_info lazily? It should only be needed once
> > > > we know the packet is going up the stack to some place that needs the
> > > > info. Which we could learn from the return code of the XDP program.  
> > > 
> > > Hi John,  
> > 
> > Hi, I'll try to join the two threads this one and the one on helpers here
> > so we don't get too fragmented.
> > 
> > > 
> > > I agree, I think for XDP_TX use-case it is not strictly necessary to fill the
> > > skb_shared_info. The driver can just keep this info on the stack and use it
> > > inserting the packet back to the DMA ring.
> > > For mvneta I implemented it in this way to keep the code aligned with ndo_xdp_xmit
> > > path since it is a low-end device. I guess we are not introducing any API constraint
> > > for XDP_TX. A high-end device can implement multi-buff for XDP_TX in a different way
> > > in order to avoid the cache miss.  
> > 
> > Agree it would be an implementation detail for XDP_TX except the two
> > helpers added in this series currently require it to be there.
> 
> That is a good point.  If you look at the details, the helpers use
> xdp_buff->mb bit to guard against accessing the "shared_info"
> cacheline. Thus, for the normal single frame case XDP_TX should not see
> a slowdown.  Do we really need to optimize XDP_TX multi-frame case(?)

Agree it is guarded by xdp_buff->mb which is why I asked for that detail
to be posted in the cover letter so it was easy to understand that bit
of info.

Do we really need to optimize XDP_TX multi-frame case? Yes I think so.
The use case is jumbo frames (or 4kB) LB. XDP_TX is the common case in
many configurations. For our use case these include cloud providers
and bare-metal data centers.

Keeping the implementation out of the helpers allows drivers to optimize
for this case. Also it doesn't seem like the helpers in this series
have a strong use case. Happy to hear what it is, but I can't see how
to use them myself.

> 
> 
> > > 
> > > We need to fill the skb_shared_info only when we want to pass the frame to the
> > > network stack (build_skb() can directly reuse skb_shared_info->frags[]) or for
> > > XDP_REDIRECT use-case.  
> > 
> > It might be good to think about the XDP_REDIRECT case as well then. If the
> > frags list fit in the metadata/xdp_frame would we expect better
> > performance?
> 
> I don't like to use space in xdp_frame for this. (1) We (Ahern and I)
> are planning to use the space in xdp_frame for RX-csum + RX-hash +vlan,
> which will be more common (e.g. all packets will have HW RX+csum).  (2)
> I consider XDP multi-buffer the exception case, that will not be used
> in most cases, so why reserve space for that in this cache-line.

Sure.

> 
> IMHO we CANNOT allow any slowdown for existing XDP use-cases, but IMHO
> XDP multi-buffer use-cases are allowed to run "slower".

I agree we cannot slowdown existing use cases. But, disagree that multi
buffer use cases can be slower. If folks enable jumbo-frames and things
slow down thats a problem.

> 
> 
> > Looking at skb_shared_info{} that is a rather large structure with many
> 
> A cache-line detail about skb_shared_info: The first frags[0] member is
> in the first cache-line.  Meaning that it is still fast to have xdp
> frames with 1 extra buffer.

Thats nice in-theory.

> 
> > fields that look unnecessary for XDP_REDIRECT case and only needed when
> > passing to the stack. 
> 
> Yes, I think we can use first cache-line of skb_shared_info more
> optimally (via defining a xdp_shared_info struct). But I still want us
> to use this specific cache-line.  Let me explain why below. (Avoiding
> cache-line misses is all about the details, so I hope you can follow).
> 
> Hopefully most driver developers understand/knows this.  In the RX-loop
> the current RX-descriptor have a status that indicate there are more
> frame, usually expressed as non-EOP (End-Of-Packet).  Thus, a driver
> can start a prefetchw of this shared_info cache-line, prior to
> processing the RX-desc that describe the multi-buffer.
>  (Remember this shared_info is constructed prior to calling XDP and any
> XDP_TX action, thus the XDP prog should not see a cache-line miss when
> using the BPF-helper to read shared_info area).

In general I see no reason to populate these fields before the XDP
program runs. Someone needs to convince me why having frags info before
program runs is useful. In general headers should be preserved and first
frag already included in the data pointers. If users start parsing further
they might need it, but this series doesn't provide a way to do that
so IMO without those helpers its a bit difficult to debate.

Specifically for XDP_TX case we can just flip the descriptors from RX
ring to TX ring and keep moving along. This is going to be ideal on
40/100Gbps nics.

I'm not arguing that its likely possible to put some prefetch logic
in there and keep the pipe full, but I would need to see that on
a 100gbps nic to be convinced the details here are going to work. Or
at minimum a 40gbps nic.

> 
> 
> > Fundamentally, a frag just needs
> > 
> >  struct bio_vec {
> >      struct page *bv_page;     // 8B
> >      unsigned int bv_len;      // 4B
> >      unsigned int bv_offset;   // 4B
> >  } // 16B
> > 
> > With header split + data we only need a single frag so we could use just
> > 16B. And worse case jumbo frame + header split seems 3 entries would be
> > enough giving 48B (header plus 3 4k pages). 
> 
> For jumbo-frame 9000 MTU 2 entries might be enough, as we also have
> room in the first buffer (((9000-(4096-256-320))/4096 = 1.33789).

Sure. I was just counting the first buffer as a frag, understanding it
wouldn't actually be in the frag list.

> 
> The problem is that we need to support TSO (TCP Segmentation Offload)
> use-case, which can have more frames. Thus, 3 entries will not be
> enough.

Sorry not following, TSO? Explain how TSO is going to work for XDP_TX
and XDP_REDIRECT? I guess in theory you can header split and coalesce,
but we are a ways off from that and this series certainly doesn't
talk about TSO unless I missed something.

> 
> > Could we just stick this in the metadata and make it read only? Then
> > programs that care can read it and get all the info they need without
> > helpers.
> 
> I don't see how that is possible. (1) the metadata area is only 32
> bytes, (2) when freeing an xdp_frame the kernel need to know the layout
> as these points will be free'ed.

Agree its tight, probably too tight to be useful.

> 
> > I would expect performance to be better in the XDP_TX and
> > XDP_REDIRECT cases. And copying an extra worse case 48B in passing to
> > the stack I guess is not measurable given all the work needed in that
> > path.
> 
> I do agree, that when passing to netstack we can do a transformation
> from xdp_shared_info to skb_shared_info with a fairly small cost.  (The
> TSO case would require more copying).

I'm lost on the TSO case. Explain how TSO is related here? 

> 
> Notice that allocating an SKB, will always clear the first 32 bytes of
> skb_shared_info.  If the XDP driver-code path have done the prefetch
> as described above, then we should see a speedup for netstack delivery.

Not against it, but these things are a bit tricky. Couple things I still
want to see/understand

 - Lets see a 40gbps use a prefetch and verify it works in practice
 - Explain why we can't just do this after XDP program runs
 - How will we read data in the frag list if we need to parse headers
   inside the frags[].

The above would be best to answer now rather than later IMO.

Thanks,
John
Lorenzo Bianconi Oct. 5, 2020, 10:24 p.m. UTC | #8
[...]

> 
> In general I see no reason to populate these fields before the XDP
> program runs. Someone needs to convince me why having frags info before
> program runs is useful. In general headers should be preserved and first
> frag already included in the data pointers. If users start parsing further
> they might need it, but this series doesn't provide a way to do that
> so IMO without those helpers its a bit difficult to debate.

We need to populate the skb_shared_info before running the xdp program in order to
allow the ebpf sandbox to access this data. If we restrict the access to the first
buffer only I guess we can avoid to do that but I think there is a value allowing
the xdp program to access this data.
A possible optimization can be to access the shared_info only once before running
the ebpf program constructing the shared_info using a struct allocated on the
stack.
Moreover we can define a "xdp_shared_info" struct to alias the skb_shared_info
one in order to have most of the frags elements in the first "shared_info" cache line.
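
Something along these lines (purely illustrative, the field names are
invented and nothing like this is implemented yet):

 struct xdp_shared_info {
     u16 nr_frags;
     u16 data_length;             /* total length of the paged area */
     skb_frag_t frags[MAX_SKB_FRAGS];
 };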

> 
> Specifically for XDP_TX case we can just flip the descriptors from RX
> ring to TX ring and keep moving along. This is going to be ideal on
> 40/100Gbps nics.
> 
> I'm not arguing that its likely possible to put some prefetch logic
> in there and keep the pipe full, but I would need to see that on
> a 100gbps nic to be convinced the details here are going to work. Or
> at minimum a 40gbps nic.

[...]

> Not against it, but these things are a bit tricky. Couple things I still
> want to see/understand
> 
>  - Lets see a 40gbps use a prefetch and verify it works in practice
>  - Explain why we can't just do this after XDP program runs


how can we allow the ebpf program to access paged data if we do not do that?

>  - How will we read data in the frag list if we need to parse headers
>    inside the frags[].
> 
> The above would be best to answer now rather than later IMO.
> 
> Thanks,
> John


Regards,
Lorenzo
John Fastabend Oct. 6, 2020, 4:29 a.m. UTC | #9
Lorenzo Bianconi wrote:
> [...]
> 
> > 
> > In general I see no reason to populate these fields before the XDP
> > program runs. Someone needs to convince me why having frags info before
> > program runs is useful. In general headers should be preserved and first
> > frag already included in the data pointers. If users start parsing further
> > they might need it, but this series doesn't provide a way to do that
> > so IMO without those helpers its a bit difficult to debate.
> 
> We need to populate the skb_shared_info before running the xdp program in order to
> allow the ebpf sandbox to access this data. If we restrict the access to the first
> buffer only I guess we can avoid to do that but I think there is a value allowing
> the xdp program to access this data.


I agree. We could also only populate the fields if the program accesses
the fields.

> A possible optimization can be to access the shared_info only once before running
> the ebpf program constructing the shared_info using a struct allocated on the
> stack.


Seems interesting, might be a good idea.

> Moreover we can define a "xdp_shared_info" struct to alias the skb_shared_info
> one in order to have most of the frags elements in the first "shared_info" cache line.
> 
> > 
> > Specifically for XDP_TX case we can just flip the descriptors from RX
> > ring to TX ring and keep moving along. This is going to be ideal on
> > 40/100Gbps nics.
> > 
> > I'm not arguing that its likely possible to put some prefetch logic
> > in there and keep the pipe full, but I would need to see that on
> > a 100gbps nic to be convinced the details here are going to work. Or
> > at minimum a 40gbps nic.

> [...]
> 
> > Not against it, but these things are a bit tricky. Couple things I still
> > want to see/understand
> > 
> >  - Lets see a 40gbps use a prefetch and verify it works in practice
> >  - Explain why we can't just do this after XDP program runs
> 
> how can we allow the ebpf program to access paged data if we do not do that?

I don't see an easy way, but also this series doesn't have the data
access support.

It's hard to tell until we get at least a 40gbps nic if my concern about
performance is real or not. Prefetching smartly could resolve some of the
issue I guess.

If the Intel folks are working on it I think waiting would be great. Otherwise
at minimum drop the helpers and be prepared to revert things if needed.

> 
> >  - How will we read data in the frag list if we need to parse headers
> >    inside the frags[].
> > 
> > The above would be best to answer now rather than later IMO.
> > 
> > Thanks,
> > John
> 
> Regards,
> Lorenzo
Jesper Dangaard Brouer Oct. 6, 2020, 7:30 a.m. UTC | #10
On Mon, 05 Oct 2020 21:29:36 -0700
John Fastabend <john.fastabend@gmail.com> wrote:

> Lorenzo Bianconi wrote:

> > [...]

> >   

> > > 

> > > In general I see no reason to populate these fields before the XDP

> > > program runs. Someone needs to convince me why having frags info before

> > > program runs is useful. In general headers should be preserved and first

> > > frag already included in the data pointers. If users start parsing further

> > > they might need it, but this series doesn't provide a way to do that

> > > so IMO without those helpers its a bit difficult to debate.  

> > 

> > We need to populate the skb_shared_info before running the xdp program in order to

> > allow the ebpf sanbox to access this data. If we restrict the access to the first

> > buffer only I guess we can avoid to do that but I think there is a value allowing

> > the xdp program to access this data.  

> 

> I agree. We could also only populate the fields if the program accesses

> the fields.


Notice, a driver will not initialize/use the shared_info area unless
there are more segments.  And (as we have already established) the xdp->mb
bit guards the BPF-prog from accessing the shared_info area.
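I.e. on the helper side the guard is basically (rough sketch, not the
exact code from the patches; the shared_info accessor name is assumed
here):

/* Sketch: a frags helper bails out early when the mb bit is not set,
 * so a linear frame never reads the (uninitialized) shared_info area.
 */
BPF_CALL_1(bpf_xdp_get_frags_count, struct xdp_buff *, xdp)
{
	struct skb_shared_info *sinfo;

	if (!xdp->mb)		/* linear frame: shared_info is not valid */
		return 0;

	sinfo = xdp_get_shared_info_from_buff(xdp);
	return sinfo->nr_frags;
}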

> > A possible optimization can be access the shared_info only once before running

> > the ebpf program constructing the shared_info using a struct allocated on the

> > stack.  

> 

> Seems interesting, might be a good idea.


It *might* be a good idea ("alloc" shared_info on stack), but we should
benchmark this.  The prefetch trick might be fast enough.  But also
keep in mind the performance target: with large size frames the
packets-per-sec rate we need to handle drops dramatically.


Regarding the TSO statement: I meant LRO (Large Receive Offload), but I want
the ability to XDP-redirect this frame out another netdev as TSO.  This
does mean that we need more than 3 pages (2 frag slots) to store LRO
frames.  Thus, if we store this shared_info on the stack it might need
to be larger than we would like.
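Just to put rough numbers on it (back-of-envelope, assuming 4K pages and a
full 64K LRO aggregate; skb_frag_t is 16B on 64-bit):

  pages for 64K   : 64K / 4K                 = 16  -> 1 head buffer + 15 frags
  frag array alone: 15 * sizeof(skb_frag_t)  = 240B

So an on-stack copy sized for LRO gets noticeably bigger than the 2-frag
(3 page) case above.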



> > Moreover we can define a "xdp_shared_info" struct to alias the skb_shared_info

> > one in order to have most on frags elements in the first "shared_info" cache line.

> >   

> > > 

> > > Specifically for XDP_TX case we can just flip the descriptors from RX

> > > ring to TX ring and keep moving along. This is going to be ideal on

> > > 40/100Gbps nics.


I think both approaches will still allow doing these page-flips.

> > > I'm not arguing that its likely possible to put some prefetch logic

> > > in there and keep the pipe full, but I would need to see that on

> > > a 100gbps nic to be convinced the details here are going to work. Or

> > > at minimum a 40gbps nic.


I'm looking forward to seeing how this performs on faster NICs.  Once we
have a high-speed NIC driver with this, I can also start doing testing
in my testlab.


> > [...]

> >   

> > > Not against it, but these things are a bit tricky. Couple things I still

> > > want to see/understand

> > > 

> > >  - Lets see a 40gbps use a prefetch and verify it works in practice

> > >  - Explain why we can't just do this after XDP program runs  

> > 

> > how can we allow the ebpf program to access paged data if we do not do that?  

> 

> I don't see an easy way, but also this series doesn't have the data

> access support.


Eelco (Cc'ed) is working on patches that allow access to data in these
fragments; so far they are internal patches, which (sorry to mention) got
shut down in internal review.


> Its hard to tell until we get at least a 40gbps nic if my concern about

> performance is real or not. Prefetching smartly could resolve some of the

> issue I guess.

> 

> If the Intel folks are working on it I think waiting would be great. Otherwise

> at minimum drop the helpers and be prepared to revert things if needed.


I do think it makes sense to drop the helpers for now, and focus on how
this new multi-buffer frame type is handled in the existing code, and do
some benchmarking on a higher-speed NIC, before the BPF-helpers start to
lock down/restrict what we can change/revert, as they define UAPI.

E.g. existing code that needs to handle this is the existing helper
bpf_xdp_adjust_tail, which is something I have brought up before and even
described in [1].  Let's make sure existing code works with the proposed
design before introducing new helpers (and this also makes it easier to
revert).
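For the multi-buffer case that helper has to operate on the last fragment
instead of xdp->data_end.  Something along these lines (untested sketch,
only handles shrinking within the last frag, and it assumes a shared_info
accessor like the one this series adds):

/* Sketch only: shrink the tail of a multi-buffer frame by adjusting the
 * size of the last fragment (xdp->mb assumed to be set here).
 */
static int xdp_mb_shrink_tail(struct xdp_buff *xdp, int offset)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags - 1];

	if (offset >= 0 || -offset > skb_frag_size(frag))
		return -EINVAL;	/* growing / crossing frags not handled here */

	skb_frag_size_add(frag, offset);	/* offset is negative */
	return 0;
}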

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org#xdp-tail-adjust
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
Jubran, Samih Oct. 6, 2020, 12:39 p.m. UTC | #11
> -----Original Message-----

> From: Daniel Borkmann <daniel@iogearbox.net>

> Sent: Friday, October 2, 2020 10:53 PM

> To: John Fastabend <john.fastabend@gmail.com>; Lorenzo Bianconi

> <lorenzo@kernel.org>; bpf@vger.kernel.org; netdev@vger.kernel.org

> Cc: davem@davemloft.net; kuba@kernel.org; ast@kernel.org; Agroskin,

> Shay <shayagr@amazon.com>; Jubran, Samih <sameehj@amazon.com>;

> dsahern@kernel.org; brouer@redhat.com; lorenzo.bianconi@redhat.com;

> echaudro@redhat.com

> Subject: RE: [EXTERNAL] [PATCH v4 bpf-next 00/13] mvneta: introduce XDP

> multi-buffer support

> 


> 

> 

> 

> On 10/2/20 5:25 PM, John Fastabend wrote:

> > Lorenzo Bianconi wrote:

> >> This series introduce XDP multi-buffer support. The mvneta driver is

> >> the first to support these new "non-linear" xdp_{buff,frame}.

> >> Reviewers please focus on how these new types of xdp_{buff,frame}

> >> packets traverse the different layers and the layout design. It is on

> >> purpose that BPF-helpers are kept simple, as we don't want to expose

> >> the internal layout to allow later changes.

> >>

> >> For now, to keep the design simple and to maintain performance, the

> >> XDP BPF-prog (still) only have access to the first-buffer. It is left

> >> for later (another patchset) to add payload access across multiple buffers.

> >> This patchset should still allow for these future extensions. The

> >> goal is to lift the XDP MTU restriction that comes with XDP, but

> >> maintain same performance as before.

> >>

> >> The main idea for the new multi-buffer layout is to reuse the same

> >> layout used for non-linear SKB. This rely on the "skb_shared_info"

> >> struct at the end of the first buffer to link together subsequent

> >> buffers. Keeping the layout compatible with SKBs is also done to ease

> >> and speedup creating an SKB from an xdp_{buff,frame}. Converting

> >> xdp_frame to SKB and deliver it to the network stack is shown in

> >> cpumap code (patch 13/13).

> >

> > Using the end of the buffer for the skb_shared_info struct is going to

> > become driver API so unwinding it if it proves to be a performance

> > issue is going to be ugly. So same question as before, for the use

> > case where we receive packet and do XDP_TX with it how do we avoid

> > cache miss overhead? This is not just a hypothetical use case, the

> > Facebook load balancer is doing this as well as Cilium and allowing

> > this with multi-buffer packets >1500B would be useful.

> [...]

> 

> Fully agree. My other question would be if someone else right now is in the

> process of implementing this scheme for a 40G+ NIC? My concern is the

> numbers below are rather on the lower end of the spectrum, so I would like

> to see a comparison of XDP as-is today vs XDP multi-buff on a higher end NIC

> so that we have a picture how well the current designed scheme works there

> and into which performance issue we'll run e.g.

> under typical XDP L4 load balancer scenario with XDP_TX. I think this would

> be crucial before the driver API becomes 'sort of' set in stone where others

> start to adapting it and changing design becomes painful. Do ena folks have

> an implementation ready as well? And what about virtio_net, for example,

> anyone committing there too? Typically for such features to land is to require

> at least 2 drivers implementing it.

>


We (ENA) expect to have an XDP MB implementation with performance results in around 4-6 weeks.

> >> Typical use cases for this series are:

> >> - Jumbo-frames

> >> - Packet header split (please see Google's use-case @ NetDevConf

> >> 0x14, [0])

> >> - TSO

> >>

> >> More info about the main idea behind this approach can be found here

> [1][2].

> >>

> >> We carried out some throughput tests in a standard linear frame

> >> scenario in order to verify we did not introduced any performance

> >> regression adding xdp multi-buff support to mvneta:

> >>

> >> offered load is ~ 1000Kpps, packet size is 64B, mvneta descriptor

> >> size is one PAGE

> >>

> >> commit: 879456bedbe5 ("net: mvneta: avoid possible cache misses in

> mvneta_rx_swbm")

> >> - xdp-pass:      ~162Kpps

> >> - xdp-drop:      ~701Kpps

> >> - xdp-tx:        ~185Kpps

> >> - xdp-redirect:  ~202Kpps

> >>

> >> mvneta xdp multi-buff:

> >> - xdp-pass:      ~163Kpps

> >> - xdp-drop:      ~739Kpps

> >> - xdp-tx:        ~182Kpps

> >> - xdp-redirect:  ~202Kpps

> [...]
Lorenzo Bianconi Oct. 6, 2020, 3:28 p.m. UTC | #12
> On Mon, 05 Oct 2020 21:29:36 -0700
> John Fastabend <john.fastabend@gmail.com> wrote:
> 
> > Lorenzo Bianconi wrote:
> > > [...]
> > >   
> > > > 
> > > > In general I see no reason to populate these fields before the XDP
> > > > program runs. Someone needs to convince me why having frags info before
> > > > program runs is useful. In general headers should be preserved and first
> > > > frag already included in the data pointers. If users start parsing further
> > > > they might need it, but this series doesn't provide a way to do that
> > > > so IMO without those helpers its a bit difficult to debate.  
> > > 
> > > We need to populate the skb_shared_info before running the xdp program in order to
> > > allow the ebpf sanbox to access this data. If we restrict the access to the first
> > > buffer only I guess we can avoid to do that but I think there is a value allowing
> > > the xdp program to access this data.  
> > 
> > I agree. We could also only populate the fields if the program accesses
> > the fields.
> 
> Notice, a driver will not initialize/use the shared_info area unless
> there are more segments.  And (we have already established) the xdp->mb
> bit is guarding BPF-prog from accessing shared_info area. 
> 
> > > A possible optimization can be access the shared_info only once before running
> > > the ebpf program constructing the shared_info using a struct allocated on the
> > > stack.  
> > 
> > Seems interesting, might be a good idea.
> 
> It *might* be a good idea ("alloc" shared_info on stack), but we should
> benchmark this.  The prefetch trick might be fast enough.  But also
> keep in mind the performance target, as with large size frames the
> packet-per-sec we need to handle dramatically drop.

Right. I guess we need to define a workload we want to run for the
xdp multi-buff use-case (e.g. if the MTU is 9K we will have ~3 frames
for each packet and the # of pps will be much lower)
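Back-of-envelope for the 9K case (assuming 4K pages, 256B of XDP headroom
and ~320B reserved for the skb_shared_info at the end of the first buffer):

  first buffer payload: 4096 - 256 - 320  ~= 3.5KB
  remaining payload   : 9000 - 3520       ~= 5.5KB -> 2 extra page frags

i.e. ~3 buffers per 9K frame, so the pps we can expect at line rate drops
accordingly.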

> 
> 

[...]

> 
> I do think it makes sense to drop the helpers for now, and focus on how
> this new multi-buffer frame type is handled in the existing code, and do
> some benchmarking on higher speed NIC, before the BPF-helper start to
> lockdown/restrict what we can change/revert as they define UAPI.

ack, I will drop them in v5.

Regards,
Lorenzo

> 
> E.g. existing code that need to handle this is existing helper
> bpf_xdp_adjust_tail, which is something I have broad up before and even
> described in[1].  Lets make sure existing code works with proposed
> design, before introducing new helpers (and this makes it easier to
> revert).
> 
> [1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org#xdp-tail-adjust
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
>
John Fastabend Oct. 8, 2020, 2:38 p.m. UTC | #13
Lorenzo Bianconi wrote:
> > On Mon, 05 Oct 2020 21:29:36 -0700
> > John Fastabend <john.fastabend@gmail.com> wrote:
> > 
> > > Lorenzo Bianconi wrote:
> > > > [...]
> > > >   
> > > > > 
> > > > > In general I see no reason to populate these fields before the XDP
> > > > > program runs. Someone needs to convince me why having frags info before
> > > > > program runs is useful. In general headers should be preserved and first
> > > > > frag already included in the data pointers. If users start parsing further
> > > > > they might need it, but this series doesn't provide a way to do that
> > > > > so IMO without those helpers its a bit difficult to debate.  
> > > > 
> > > > We need to populate the skb_shared_info before running the xdp program in order to
> > > > allow the ebpf sanbox to access this data. If we restrict the access to the first
> > > > buffer only I guess we can avoid to do that but I think there is a value allowing
> > > > the xdp program to access this data.  
> > > 
> > > I agree. We could also only populate the fields if the program accesses
> > > the fields.
> > 
> > Notice, a driver will not initialize/use the shared_info area unless
> > there are more segments.  And (we have already established) the xdp->mb
> > bit is guarding BPF-prog from accessing shared_info area. 
> > 
> > > > A possible optimization can be access the shared_info only once before running
> > > > the ebpf program constructing the shared_info using a struct allocated on the
> > > > stack.  
> > > 
> > > Seems interesting, might be a good idea.
> > 
> > It *might* be a good idea ("alloc" shared_info on stack), but we should
> > benchmark this.  The prefetch trick might be fast enough.  But also
> > keep in mind the performance target, as with large size frames the
> > packet-per-sec we need to handle dramatically drop.
> 
> right. I guess we need to define a workload we want to run for the
> xdp multi-buff use-case (e.g. if MTU is 9K we will have ~3 frames
> for each packets and # of pps will be much slower)

Right. Or configuring header split, which would give 2 buffers with a much
smaller packet size. This would give some indication of the overhead. Then
we would likely want to look at the XDP_TX and XDP_REDIRECT cases. At least
those would be my use cases.

> 
> > 
> > 
> 
> [...]
> 
> > 
> > I do think it makes sense to drop the helpers for now, and focus on how
> > this new multi-buffer frame type is handled in the existing code, and do
> > some benchmarking on higher speed NIC, before the BPF-helper start to
> > lockdown/restrict what we can change/revert as they define UAPI.
> 
> ack, I will drop them in v5.
> 
> Regards,
> Lorenzo
> 
> > 
> > E.g. existing code that need to handle this is existing helper
> > bpf_xdp_adjust_tail, which is something I have broad up before and even
> > described in[1].  Lets make sure existing code works with proposed
> > design, before introducing new helpers (and this makes it easier to
> > revert).
> > 
> > [1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org#xdp-tail-adjust
> > -- 
> > Best regards,
> >   Jesper Dangaard Brouer
> >   MSc.CS, Principal Kernel Engineer at Red Hat
> >   LinkedIn: http://www.linkedin.com/in/brouer
> >