[RFC,00/16] GTP: flow based

Message ID 20210123195916.2765481-1-jonas@norrbonn.se

Message

Jonas Bonn Jan. 23, 2021, 7:59 p.m. UTC
This series begins by reverting the recently added patch adding support
for GTP with lightweight tunnels.  That patch was added without getting
any ACK from the maintainers and has several issues, as discussed on the
mailing list.

In order to try to make this as painless as possible, I have reworked
Pravin's patch into a series that is, hopefully, a bit more reviewable.
That series is rebased onto a series of other changes that constitute
cleanup work necessary for the ongoing work to add IPv6 support to the
driver.  The IPv6 work should be rebaseable on top of this series
later on.

I did try to do this the other way around:  rebasing the IPv6 series on top
of Pravin's patch.  Given that Pravin's patch contained about 200 lines
of superfluous changes that would have had to be reverted in the process
of realigning the patch series, things got ugly pretty quickly.  The end
result would not have been pretty.

So the result of this is that Pravin's patch is now mostly still in
place.  I've reworked some small bits in order to simplify things.  My
expectation is that Pravin will review and test his bits here.  In
particular, the patch adding GTP control headers needs a bit of
explanation.

This is still an RFC only because I'm not quite convinced that I'm done
with this.  I do want to get this onto the list quickly, though, since
this has implications for the next merge window.  So let's see if we can
sort this out to everyone's satisfaction.

/Jonas

Jonas Bonn (13):
  Revert "GTP: add support for flow based tunneling API"
  gtp: set initial MTU
  gtp: include role in link info
  gtp: really check namespaces before xmit
  gtp: drop unnecessary call to skb_dst_drop
  gtp: set device type
  gtp: rework IPv4 functionality
  gtp: set dev features to enable GSO
  gtp: support GRO
  gtp: refactor check_ms back into version specific handlers
  gtp: drop duplicated assignment
  gtp: update rx_length_errors for abnormally short packets
  gtp: set skb protocol after pulling headers

Pravin B Shelar (3):
  gtp: add support for flow based tunneling
  gtp: add ability to send GTP controls headers
  gtp: add netlink support for setting up flow based tunnels

 drivers/net/gtp.c | 609 +++++++++++++++++++++++++++++-----------------
 1 file changed, 380 insertions(+), 229 deletions(-)

Comments

Jonas Bonn Jan. 24, 2021, 2:21 p.m. UTC | #1
Hi Pravin,

So, this whole GTP metadata thing has me a bit confused.

You've defined a metadata structure like this:

struct gtpu_metadata {
         __u8    ver;
         __u8    flags;
         __u8    type;
};

Here ver is the version of the metadata structure itself, which is fine.
'flags' corresponds to the 3 flag bits of the GTP header's first byte:  E, 
S, and PN.
'type' corresponds to the 'message type' field of the GTP header.

The 'control header' (strange name) example below allows the flags to be 
set; however, setting these flags alone is insufficient because each one 
indicates the presence of additional fields in the header and there's 
nothing in the code to account for that.

If E is set, extension headers would need to be added.
If S is set, a sequence number field would need to be added.
If PN is set, a PDU-number header would need to be added.
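
For reference, once any one of those bits is set the wire format grows by 
a fixed 4-byte block (sketch only, layout per TS 29.281; the driver's 
struct gtp1_header covers just the first 8 bytes):

	struct gtp1_header_full {	/* illustrative, not in the driver */
		__u8	flags;		/* ver:3, PT:1, spare:1, E:1, S:1, PN:1 */
		__u8	type;
		__be16	length;
		__be32	tid;
		/* present whenever any of E, S, PN is set: */
		__be16	seq;		/* meaningful only if S is set */
		__u8	npdu;		/* meaningful only if PN is set */
		__u8	next_ext;	/* meaningful only if E is set; 0 = none */
		/* if E is set: a chain of extension headers follows */
	};

So a function that sets E, S or PN would also have to emit (at least 
zeroed) seq/npdu/next_ext fields, which the code below does not do.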

It's not clear to me who sets up this metadata in the first place.  Is 
that where OVS or eBPF come in?  Can you give some pointers to how this 
works?

Couple of comments below....

On 23/01/2021 20:59, Jonas Bonn wrote:
> From: Pravin B Shelar <pbshelar@fb.com>

> 

> Please explain how this patch actually works... creation of the control

> header makes sense, but I don't understand how sending of a

> control header is actually triggered.

> 

> Signed-off-by: Jonas Bonn <jonas@norrbonn.se>

> ---

>   drivers/net/gtp.c | 43 ++++++++++++++++++++++++++++++++++++++++++-

>   1 file changed, 42 insertions(+), 1 deletion(-)

> 

> diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c

> index 668ed8a4836e..bbce2671de2d 100644

> --- a/drivers/net/gtp.c

> +++ b/drivers/net/gtp.c

> @@ -683,6 +683,38 @@ static void gtp_push_header(struct sk_buff *skb, struct pdp_ctx *pctx,

>   	}

>   }

>   

> +static inline int gtp1_push_control_header(struct sk_buff *skb,


I'm not enamored with the name 'control header' because it makes it 
sound like this is some GTP-C thing.  The GTP module is really only about 
GTP-U and the function itself just sets up a GTP-U header.


> +					   struct pdp_ctx *pctx,

> +					   struct gtpu_metadata *opts,

> +					   struct net_device *dev)

> +{

> +	struct gtp1_header *gtp1c;

> +	int payload_len;

> +

> +	if (opts->ver != GTP_METADATA_V1)

> +		return -ENOENT;

> +

> +	if (opts->type == 0xFE) {

> +		/* for end marker ignore skb data. */

> +		netdev_dbg(dev, "xmit pkt with null data");

> +		pskb_trim(skb, 0);

> +	}

> +	if (skb_cow_head(skb, sizeof(*gtp1c)) < 0)

> +		return -ENOMEM;

> +

> +	payload_len = skb->len;

> +

> +	gtp1c = skb_push(skb, sizeof(*gtp1c));

> +

> +	gtp1c->flags	= opts->flags;

> +	gtp1c->type	= opts->type;

> +	gtp1c->length	= htons(payload_len);

> +	gtp1c->tid	= htonl(pctx->u.v1.o_tei);

> +	netdev_dbg(dev, "xmit control pkt: ver %d flags %x type %x pkt len %d tid %x",

> +		   opts->ver, opts->flags, opts->type, skb->len, pctx->u.v1.o_tei);

> +	return 0;

> +}


There's nothing really special about the above function aside from the 
fact that it takes 'opts' as an argument.  Can't we just merge this 
with the regular 'gtp_push_header' function?  Do you have plans for this 
beyond what's here that would be complicated by doing so?

/Jonas


> +

>   static int gtp_xmit_ip4(struct sk_buff *skb, struct net_device *dev)

>   {

>   	struct gtp_dev *gtp = netdev_priv(dev);

> @@ -807,7 +839,16 @@ static int gtp_xmit_ip4(struct sk_buff *skb, struct net_device *dev)

>   

>   	skb_set_inner_protocol(skb, skb->protocol);

>   

> -	gtp_push_header(skb, pctx, &port);

> +	if (unlikely(opts)) {

> +		port = htons(GTP1U_PORT);

> +		r = gtp1_push_control_header(skb, pctx, opts, dev);

> +		if (r) {

> +			netdev_info(dev, "cntr pkt error %d", r);

> +			goto err_rt;

> +		}

> +	} else {

> +		gtp_push_header(skb, pctx, &port);

> +	}

>   

>   	iph = ip_hdr(skb);

>   	netdev_dbg(dev, "gtp -> IP src: %pI4 dst: %pI4\n",

>
Jonas Bonn Jan. 24, 2021, 2:27 p.m. UTC | #2
Hi Pravin,

A couple more comments around the GTP_METADATA bits:

On 23/01/2021 20:59, Jonas Bonn wrote:
> From: Pravin B Shelar <pbshelar@fb.com>

> 

> This patch adds support for flow based tunneling, allowing to send and

> receive GTP tunneled packets via the (lightweight) tunnel metadata

> mechanism.  This would allow integration with OVS and eBPF using flow

> based tunneling APIs.

> 

> The mechanism used here is to get the required GTP tunnel parameters

> from the tunnel metadata instead of looking up a pre-configured PDP

> context.  The tunnel metadata contains the necessary information for

> creating the GTP header.

> 

> Signed-off-by: Jonas Bonn <jonas@norrbonn.se>

> ---

>   drivers/net/gtp.c                  | 160 +++++++++++++++++++++++++----

>   include/uapi/linux/gtp.h           |  12 +++

>   include/uapi/linux/if_tunnel.h     |   1 +

>   tools/include/uapi/linux/if_link.h |   1 +

>   4 files changed, 156 insertions(+), 18 deletions(-)

> 


<...>

>   

> +static int gtp_set_tun_dst(struct pdp_ctx *pctx, struct sk_buff *skb,

> +			   unsigned int hdrlen)

> +{

> +	struct metadata_dst *tun_dst;

> +	struct gtp1_header *gtp1;

> +	int opts_len = 0;

> +	__be64 tid;

> +

> +	gtp1 = (struct gtp1_header *)(skb->data + sizeof(struct udphdr));

> +

> +	tid = key32_to_tunnel_id(gtp1->tid);

> +

> +	if (unlikely(gtp1->flags & GTP1_F_MASK))

> +		opts_len = sizeof(struct gtpu_metadata);


So if any GTP flags are set, you're saying that this is no longer a 
T-PDU but something else.  That's wrong... the flags indicate the 
presence of extensions to the GTP header itself.

> +

> +	tun_dst = udp_tun_rx_dst(skb,

> +			pctx->sk->sk_family, TUNNEL_KEY, tid, opts_len);

> +	if (!tun_dst) {

> +		netdev_dbg(pctx->dev, "Failed to allocate tun_dst");

> +		goto err;

> +	}

> +

> +	netdev_dbg(pctx->dev, "attaching metadata_dst to skb, gtp ver %d hdrlen %d\n",

> +		   pctx->gtp_version, hdrlen);

> +	if (unlikely(opts_len)) {

> +		struct gtpu_metadata *opts;

> +

> +		opts = ip_tunnel_info_opts(&tun_dst->u.tun_info);

> +		opts->ver = GTP_METADATA_V1;

> +		opts->flags = gtp1->flags;

> +		opts->type = gtp1->type;

> +		netdev_dbg(pctx->dev, "recved control pkt: flag %x type: %d\n",

> +			   opts->flags, opts->type);

> +		tun_dst->u.tun_info.key.tun_flags |= TUNNEL_GTPU_OPT;

> +		tun_dst->u.tun_info.options_len = opts_len;

> +		skb->protocol = htons(0xffff);         /* Unknown */


It might be relevant to set protocol to unknown for 'end markers' (i.e. 
gtp1->type == 0xfe), but not for everything that happens to have flag 
bits set.  'flags' and 'type' are independent of each other and need to 
be handled as such.

/Jonas
Jonas Bonn Jan. 24, 2021, 3:11 p.m. UTC | #3
Hi,

On 23/01/2021 20:59, Jonas Bonn wrote:
> From: Pravin B Shelar <pbshelar@fb.com>

> 

> This patch adds support for flow based tunneling, allowing to send and

> receive GTP tunneled packets via the (lightweight) tunnel metadata

> mechanism.  This would allow integration with OVS and eBPF using flow

> based tunneling APIs.

> 

> The mechanism used here is to get the required GTP tunnel parameters

> from the tunnel metadata instead of looking up a pre-configured PDP

> context.  The tunnel metadata contains the necessary information for

> creating the GTP header.


The GTP driver operates in two modes:  GGSN/PGW/UPF and SGSN/SGW/NG-U. 
For simplicity we'll just refer to these as 'ggsn' and 'sgsn', but these 
are essentially just upstream and downstream nodes.

So, the classic way of adding a tunnel to the driver is:

gtp-tunnel add gtp v1 100 200 192.168.100.1 172.99.0.1

That encapsulates a lot of information about both ends of the tunnel 
including TEID's for both ends of the tunnel, the local IP in use, and 
the remote end.

With this new approach we have ('ggsn' side):

ip route add 192.168.100.1/32 encap ip id 200 dst 172.99.0.1 dev gtp

That has all the information required for sending packets from 'ggsn' to 
'sgsn', but it's missing everything required for the validation of 
incoming packets from the 'sgsn'.  The implementation just ignores the 
TEID on incoming packets and doesn't validate TEID against IP like the 
PDP context variant does.

'sgsn' side we have:

ip route add SOME_IP encap ip id 100 dst 172.99.0.2 dev gtp

'sgsn' side is intended for testing eNodeB-type entities behind which 
there are a large number of simulated UE's.  The PDP context setup 
allows a form of 'source routing' internally in the driver whereby the 
MS/UE address of the PDP context is matched to the source IP of the 
outgoing packet in order to determine the required TEID.  How is 
something similar achievable with the ip route example above; can a 
'src' parameter be added in order to get the right 'id' (TEID) from a 
set of otherwise similar routing rules?
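
(For concreteness, something like the following per-source policy routing 
is what I have in mind; this is only a sketch and I haven't verified that 
per-table lwtunnel encap state actually behaves this way:

   ip rule add from 192.168.100.1/32 table 100
   ip route add default table 100 encap ip id 100 dst 172.99.0.2 dev gtp
   ip rule add from 192.168.100.2/32 table 101
   ip route add default table 101 encap ip id 101 dst 172.99.0.2 dev gtp

i.e. one TEID per simulated UE source address, all towards the same peer.)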


In the one example you posted 
(https://github.com/pshelar/iproute2/commit/d6e99f8342672e6e9ce0b71e153296f8e2b41cfc) 
you have, on the 'ggsn' side:

ip route add 1.1.1.0/24 encap id 0 dst 10.1.0.2 dev gtp1

This amounts to mapping a route to an entire network of 'devices' 
through a single TEID at host 10.1.0.2.  This might work because you 
just ignore the TEID on incoming packets and inject them into the 
receiving host in your 'sgsn' variant of the driver, but with any real 
downstream GTP device I don't think the above is of any practical value.

If I were to consider the above as an 'sgsn' side route setup, then 
you've got an entire network of devices talking to an upstream GTP 
entity (UPF) through a single TEID... security-wise, I'd be surprised if 
anybody actually allowed this.

Thanks for taking the time to consider the above.  I look forward to 
your comments.

/Jonas

> 

> Signed-off-by: Jonas Bonn <jonas@norrbonn.se>

> ---

>   drivers/net/gtp.c                  | 160 +++++++++++++++++++++++++----

>   include/uapi/linux/gtp.h           |  12 +++

>   include/uapi/linux/if_tunnel.h     |   1 +

>   tools/include/uapi/linux/if_link.h |   1 +

>   4 files changed, 156 insertions(+), 18 deletions(-)

> 

> diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c

> index 8aab46ec8a94..668ed8a4836e 100644

> --- a/drivers/net/gtp.c

> +++ b/drivers/net/gtp.c

> @@ -21,6 +21,7 @@

>   #include <linux/file.h>

>   #include <linux/gtp.h>

>   

> +#include <net/dst_metadata.h>

>   #include <net/net_namespace.h>

>   #include <net/protocol.h>

>   #include <net/ip.h>

> @@ -74,6 +75,9 @@ struct gtp_dev {

>   	unsigned int		hash_size;

>   	struct hlist_head	*tid_hash;

>   	struct hlist_head	*addr_hash;

> +	/* Used by LWT tunnel. */

> +	bool			collect_md;

> +	struct socket		*collect_md_sock;

>   };

>   

>   static unsigned int gtp_net_id __read_mostly;

> @@ -224,6 +228,51 @@ static int gtp_rx(struct pdp_ctx *pctx, struct sk_buff *skb,

>   	return -1;

>   }

>   

> +static int gtp_set_tun_dst(struct pdp_ctx *pctx, struct sk_buff *skb,

> +			   unsigned int hdrlen)

> +{

> +	struct metadata_dst *tun_dst;

> +	struct gtp1_header *gtp1;

> +	int opts_len = 0;

> +	__be64 tid;

> +

> +	gtp1 = (struct gtp1_header *)(skb->data + sizeof(struct udphdr));

> +

> +	tid = key32_to_tunnel_id(gtp1->tid);

> +

> +	if (unlikely(gtp1->flags & GTP1_F_MASK))

> +		opts_len = sizeof(struct gtpu_metadata);

> +

> +	tun_dst = udp_tun_rx_dst(skb,

> +			pctx->sk->sk_family, TUNNEL_KEY, tid, opts_len);

> +	if (!tun_dst) {

> +		netdev_dbg(pctx->dev, "Failed to allocate tun_dst");

> +		goto err;

> +	}

> +

> +	netdev_dbg(pctx->dev, "attaching metadata_dst to skb, gtp ver %d hdrlen %d\n",

> +		   pctx->gtp_version, hdrlen);

> +	if (unlikely(opts_len)) {

> +		struct gtpu_metadata *opts;

> +

> +		opts = ip_tunnel_info_opts(&tun_dst->u.tun_info);

> +		opts->ver = GTP_METADATA_V1;

> +		opts->flags = gtp1->flags;

> +		opts->type = gtp1->type;

> +		netdev_dbg(pctx->dev, "recved control pkt: flag %x type: %d\n",

> +			   opts->flags, opts->type);

> +		tun_dst->u.tun_info.key.tun_flags |= TUNNEL_GTPU_OPT;

> +		tun_dst->u.tun_info.options_len = opts_len;

> +		skb->protocol = htons(0xffff);         /* Unknown */

> +	}

> +

> +	skb_dst_set(skb, &tun_dst->dst);

> +	return 0;

> +err:

> +	return -1;

> +}

> +

> +

>   /* 1 means pass up to the stack, -1 means drop and 0 means decapsulated. */

>   static int gtp0_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb)

>   {

> @@ -262,6 +311,7 @@ static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb)

>   	unsigned int hdrlen = sizeof(struct udphdr) +

>   			      sizeof(struct gtp1_header);

>   	struct gtp1_header *gtp1;

> +	struct pdp_ctx md_pctx;

>   	struct pdp_ctx *pctx;

>   

>   	if (!pskb_may_pull(skb, hdrlen))

> @@ -272,6 +322,24 @@ static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb)

>   	if ((gtp1->flags >> 5) != GTP_V1)

>   		return 1;

>   

> +	if (ip_tunnel_collect_metadata() || gtp->collect_md) {

> +		int err;

> +

> +		pctx = &md_pctx;

> +

> +		pctx->gtp_version = GTP_V1;

> +		pctx->sk = gtp->sk1u;

> +		pctx->dev = gtp->dev;

> +

> +		err = gtp_set_tun_dst(pctx, skb, hdrlen);

> +		if (err) {

> +			gtp->dev->stats.rx_dropped++;

> +			return -1;

> +		}

> +

> +		return gtp_rx(pctx, skb, hdrlen);

> +	}

> +

>   	if (gtp1->type != GTP_TPDU)

>   		return 1;

>   

> @@ -353,7 +421,8 @@ static int gtp_encap_recv(struct sock *sk, struct sk_buff *skb)

>   	if (!gtp)

>   		return 1;

>   

> -	netdev_dbg(gtp->dev, "encap_recv sk=%p\n", sk);

> +	netdev_dbg(gtp->dev, "encap_recv sk=%p type %d\n",

> +		   sk, udp_sk(sk)->encap_type);

>   

>   	switch (udp_sk(sk)->encap_type) {

>   	case UDP_ENCAP_GTP0:

> @@ -539,7 +608,7 @@ static struct rtable *gtp_get_v4_rt(struct sk_buff *skb,

>   	memset(&fl4, 0, sizeof(fl4));

>   	fl4.flowi4_oif		= sk->sk_bound_dev_if;

>   	fl4.daddr		= pctx->peer_addr_ip4.s_addr;

> -	fl4.saddr		= inet_sk(sk)->inet_saddr;

> +	fl4.saddr		= *saddr;

>   	fl4.flowi4_tos		= RT_CONN_FLAGS(sk);

>   	fl4.flowi4_proto	= sk->sk_protocol;

>   

> @@ -617,29 +686,84 @@ static void gtp_push_header(struct sk_buff *skb, struct pdp_ctx *pctx,

>   static int gtp_xmit_ip4(struct sk_buff *skb, struct net_device *dev)

>   {

>   	struct gtp_dev *gtp = netdev_priv(dev);

> +	struct gtpu_metadata *opts = NULL;

> +	struct pdp_ctx md_pctx;

>   	struct pdp_ctx *pctx;

> +	__be16 port;

>   	struct rtable *rt;

> -	__be32 saddr;

>   	struct iphdr *iph;

> +	__be32 saddr;

>   	int headroom;

> -	__be16 port;

> +	__u8 tos;

>   	int r;

>   

> -	/* Read the IP destination address and resolve the PDP context.

> -	 * Prepend PDP header with TEI/TID from PDP ctx.

> -	 */

> -	iph = ip_hdr(skb);

> -	if (gtp->role == GTP_ROLE_SGSN)

> -		pctx = ipv4_pdp_find(gtp, iph->saddr);

> -	else

> -		pctx = ipv4_pdp_find(gtp, iph->daddr);

> +	if (gtp->collect_md) {

> +		/* LWT GTP1U encap */

> +		struct ip_tunnel_info *info = NULL;

>   

> -	if (!pctx) {

> -		netdev_dbg(dev, "no PDP ctx found for %pI4, skip\n",

> -			   &iph->daddr);

> -		return -ENOENT;

> +		info = skb_tunnel_info(skb);

> +		if (!info) {

> +			netdev_dbg(dev, "missing tunnel info");

> +			return -ENOENT;

> +		}

> +		if (info->key.tp_dst && ntohs(info->key.tp_dst) != GTP1U_PORT) {

> +			netdev_dbg(dev, "unexpected GTP dst port: %d", ntohs(info->key.tp_dst));

> +			return -EOPNOTSUPP;

> +		}

> +

> +		if (!gtp->sk1u) {

> +			netdev_dbg(dev, "missing tunnel sock");

> +			return -EOPNOTSUPP;

> +		}

> +

> +		pctx = &md_pctx;

> +		memset(pctx, 0, sizeof(*pctx));

> +		pctx->sk = gtp->sk1u;

> +		pctx->gtp_version = GTP_V1;

> +		pctx->u.v1.o_tei = ntohl(tunnel_id_to_key32(info->key.tun_id));

> +		pctx->peer_addr_ip4.s_addr = info->key.u.ipv4.dst;

> +

> +		saddr = info->key.u.ipv4.src;

> +		tos = info->key.tos;

> +

> +		if (info->options_len != 0) {

> +			if (info->key.tun_flags & TUNNEL_GTPU_OPT) {

> +				opts = ip_tunnel_info_opts(info);

> +			} else {

> +				netdev_dbg(dev, "missing tunnel metadata for control pkt");

> +				return -EOPNOTSUPP;

> +			}

> +		}

> +		netdev_dbg(dev, "flow-based GTP1U encap: tunnel id %d\n",

> +			   pctx->u.v1.o_tei);

> +	} else {

> +		struct iphdr *iph;

> +

> +		if (ntohs(skb->protocol) != ETH_P_IP)

> +			return -EOPNOTSUPP;

> +

> +		iph = ip_hdr(skb);

> +

> +		/* Read the IP destination address and resolve the PDP context.

> +		 * Prepend PDP header with TEI/TID from PDP ctx.

> +		 */

> +		if (gtp->role == GTP_ROLE_SGSN)

> +			pctx = ipv4_pdp_find(gtp, iph->saddr);

> +		else

> +			pctx = ipv4_pdp_find(gtp, iph->daddr);

> +

> +		if (!pctx) {

> +			netdev_dbg(dev, "no PDP ctx found for %pI4, skip\n",

> +				   &iph->daddr);

> +			return -ENOENT;

> +		}

> +		netdev_dbg(dev, "found PDP context %p\n", pctx);

> +

> +		saddr = inet_sk(pctx->sk)->inet_saddr;

> +		tos = iph->tos;

> +		netdev_dbg(dev, "gtp -> IP src: %pI4 dst: %pI4\n",

> +			   &iph->saddr, &iph->daddr);

>   	}

> -	netdev_dbg(dev, "found PDP context %p\n", pctx);

>   

>   	rt = gtp_get_v4_rt(skb, dev, pctx, &saddr);

>   	if (IS_ERR(rt)) {

> @@ -691,7 +815,7 @@ static int gtp_xmit_ip4(struct sk_buff *skb, struct net_device *dev)

>   

>   	udp_tunnel_xmit_skb(rt, pctx->sk, skb,

>   			    saddr, pctx->peer_addr_ip4.s_addr,

> -			    iph->tos,

> +			    tos,

>   			    ip4_dst_hoplimit(&rt->dst),

>   			    0,

>   			    port, port,

> diff --git a/include/uapi/linux/gtp.h b/include/uapi/linux/gtp.h

> index 79f9191bbb24..62aff78b7c56 100644

> --- a/include/uapi/linux/gtp.h

> +++ b/include/uapi/linux/gtp.h

> @@ -2,6 +2,8 @@

>   #ifndef _UAPI_LINUX_GTP_H_

>   #define _UAPI_LINUX_GTP_H_

>   

> +#include <linux/types.h>

> +

>   #define GTP_GENL_MCGRP_NAME	"gtp"

>   

>   enum gtp_genl_cmds {

> @@ -34,4 +36,14 @@ enum gtp_attrs {

>   };

>   #define GTPA_MAX (__GTPA_MAX + 1)

>   

> +enum {

> +	GTP_METADATA_V1

> +};

> +

> +struct gtpu_metadata {

> +	__u8    ver;

> +	__u8    flags;

> +	__u8    type;

> +};

> +

>   #endif /* _UAPI_LINUX_GTP_H_ */

> diff --git a/include/uapi/linux/if_tunnel.h b/include/uapi/linux/if_tunnel.h

> index 7d9105533c7b..802da679fab1 100644

> --- a/include/uapi/linux/if_tunnel.h

> +++ b/include/uapi/linux/if_tunnel.h

> @@ -176,6 +176,7 @@ enum {

>   #define TUNNEL_VXLAN_OPT	__cpu_to_be16(0x1000)

>   #define TUNNEL_NOCACHE		__cpu_to_be16(0x2000)

>   #define TUNNEL_ERSPAN_OPT	__cpu_to_be16(0x4000)

> +#define TUNNEL_GTPU_OPT		__cpu_to_be16(0x8000)

>   

>   #define TUNNEL_OPTIONS_PRESENT \

>   		(TUNNEL_GENEVE_OPT | TUNNEL_VXLAN_OPT | TUNNEL_ERSPAN_OPT)

> diff --git a/tools/include/uapi/linux/if_link.h b/tools/include/uapi/linux/if_link.h

> index d208b2af697f..28d649bda686 100644

> --- a/tools/include/uapi/linux/if_link.h

> +++ b/tools/include/uapi/linux/if_link.h

> @@ -617,6 +617,7 @@ enum {

>   	IFLA_GTP_FD1,

>   	IFLA_GTP_PDP_HASHSIZE,

>   	IFLA_GTP_ROLE,

> +	IFLA_GTP_COLLECT_METADATA,

>   	__IFLA_GTP_MAX,

>   };

>   #define IFLA_GTP_MAX (__IFLA_GTP_MAX - 1)

>
Pravin Shelar Jan. 24, 2021, 5:42 p.m. UTC | #4
On Sat, Jan 23, 2021 at 12:06 PM Jonas Bonn <jonas@norrbonn.se> wrote:
>

> This series begins by reverting the recently added patch adding support

> for GTP with lightweight tunnels.  That patch was added without getting

> any ACK from the maintainers and has several issues, as discussed on the

> mailing list.

>

> In order to try to make this as painless as possible, I have reworked

> Pravin's patch into a series that is, hopefully, a bit more reviewable.

> That series is rebased onto a series of other changes that constitute

> cleanup work necessary for on-going work to IPv6 support into the

> driver.  The IPv6 work should be rebaseable onto top of this series

> later on.

>

> I did try to do this other way around:  rebasing the IPv6 series on top

> of Pravin's patch.  Given that Pravin's patch contained about 200 lines

> of superfluous changes that would have had to be reverted in the process

> of realigning the patch series, things got ugly pretty quickly.  The end

> result would not have been pretty.

>

> So the result of this is that Pravin's patch is now mostly still in

> place.  I've reworked some small bits in order to simplify things.  My

> expectation is that Pravin will review and test his bits here.  In

> particular, the patch adding GTP control headers needs a bit of

> explanation.

>

> This is still an RFC only because I'm not quite convinced that I'm done

> with this.  I do want to get this onto the list quickly, though, since

> this has implications for the next merge window.  So let's see if we can

> sort this out to everyone's satisfaction.

>


I am all for making progress forward. Thanks Jonas for working on this.
I will finish the review next week.
Jonas Bonn Jan. 25, 2021, 8:12 a.m. UTC | #5
Hi Pravin,

I'm going to submit a new series without the GTP_METADATA bits.  I think 
the whole "collect metadata" approach is fine, but the way GTP header 
information is passed through the tunnel via metadata needs a bit more 
thought.  See below...

On 23/01/2021 20:59, Jonas Bonn wrote:
> From: Pravin B Shelar <pbshelar@fb.com>

>   

> +static int gtp_set_tun_dst(struct pdp_ctx *pctx, struct sk_buff *skb,

> +			   unsigned int hdrlen)

> +{

> +	struct metadata_dst *tun_dst;

> +	struct gtp1_header *gtp1;

> +	int opts_len = 0;

> +	__be64 tid;

> +

> +	gtp1 = (struct gtp1_header *)(skb->data + sizeof(struct udphdr));

> +

> +	tid = key32_to_tunnel_id(gtp1->tid);

> +

> +	if (unlikely(gtp1->flags & GTP1_F_MASK))

> +		opts_len = sizeof(struct gtpu_metadata);


This decides that GTP metadata is required if any of the S, E, and PN 
bits are set in the header.  However:

i) even when any of those bits are set, none of the extra headers are 
actually added to the metadata so it's somewhat pointless to even bother 
reporting that they're set

ii) the more interesting case is that you might want to report reception 
of an end marker through the tunnel; that, however, is signalled by way 
of the GTP header type and not via the flags; but, see below...


> +

> +	tun_dst = udp_tun_rx_dst(skb,

> +			pctx->sk->sk_family, TUNNEL_KEY, tid, opts_len);

> +	if (!tun_dst) {

> +		netdev_dbg(pctx->dev, "Failed to allocate tun_dst");

> +		goto err;

> +	}


The problem, as I see it, is that end marker messages don't actually 
contain an inner packet, so you won't be able to set up a destination 
for them.  The above fails and you never hit the metadata path below.

> +

> +	netdev_dbg(pctx->dev, "attaching metadata_dst to skb, gtp ver %d hdrlen %d\n",

> +		   pctx->gtp_version, hdrlen);

> +	if (unlikely(opts_len)) {

> +		struct gtpu_metadata *opts;

> +

> +		opts = ip_tunnel_info_opts(&tun_dst->u.tun_info);

> +		opts->ver = GTP_METADATA_V1;

> +		opts->flags = gtp1->flags;

> +		opts->type = gtp1->type;

> +		netdev_dbg(pctx->dev, "recved control pkt: flag %x type: %d\n",

> +			   opts->flags, opts->type);

> +		tun_dst->u.tun_info.key.tun_flags |= TUNNEL_GTPU_OPT;

> +		tun_dst->u.tun_info.options_len = opts_len;

> +		skb->protocol = htons(0xffff);         /* Unknown */

> +	}


Assuming that you do hit this code and are able to set the 'type' field 
in the metadata, who is going to be the recipient?  After you pull the 
GTP headers, the SKB is presumably zero-length...

/Jonas
Jonas Bonn Jan. 25, 2021, 8:47 a.m. UTC | #6
Hi Pravin,

On 23/01/2021 20:59, Jonas Bonn wrote:
> From: Pravin B Shelar <pbshelar@fb.com>

> 

> @@ -617,29 +686,84 @@ static void gtp_push_header(struct sk_buff *skb, struct pdp_ctx *pctx,

>   static int gtp_xmit_ip4(struct sk_buff *skb, struct net_device *dev)

>   {

>   	struct gtp_dev *gtp = netdev_priv(dev);

> +	struct gtpu_metadata *opts = NULL;

> +	struct pdp_ctx md_pctx;

>   	struct pdp_ctx *pctx;

> +	__be16 port;

>   	struct rtable *rt;

> -	__be32 saddr;

>   	struct iphdr *iph;

> +	__be32 saddr;

>   	int headroom;

> -	__be16 port;

> +	__u8 tos;

>   	int r;

>   

> -	/* Read the IP destination address and resolve the PDP context.

> -	 * Prepend PDP header with TEI/TID from PDP ctx.

> -	 */

> -	iph = ip_hdr(skb);

> -	if (gtp->role == GTP_ROLE_SGSN)

> -		pctx = ipv4_pdp_find(gtp, iph->saddr);

> -	else

> -		pctx = ipv4_pdp_find(gtp, iph->daddr);

> +	if (gtp->collect_md) {


Why do we have this restriction that the device be exclusively in "collect 
metadata" mode or in PDP context mode?  Why are we not able to mix the two?

Furthermore, since the collect_md_sock will effectively always be 
listening on INADDR_ANY, that precludes any other PDP context devices 
from co-existing with it.  So setting up a secondary device for PDP 
contexts isn't a feasible workaround.

If mixing isn't possible, then I suppose PDP context management needs to 
be made to fail gracefully in "collect_md" mode... with the current 
patches I think that contexts can be set up but they are just silently 
ignored, which seems like a potential source of confusion.

/Jonas


> +		/* LWT GTP1U encap */

> +		struct ip_tunnel_info *info = NULL;

>   

> -	if (!pctx) {

> -		netdev_dbg(dev, "no PDP ctx found for %pI4, skip\n",

> -			   &iph->daddr);

> -		return -ENOENT;

> +		info = skb_tunnel_info(skb);

> +		if (!info) {

> +			netdev_dbg(dev, "missing tunnel info");

> +			return -ENOENT;

> +		}

> +		if (info->key.tp_dst && ntohs(info->key.tp_dst) != GTP1U_PORT) {

> +			netdev_dbg(dev, "unexpected GTP dst port: %d", ntohs(info->key.tp_dst));

> +			return -EOPNOTSUPP;

> +		}

> +

> +		if (!gtp->sk1u) {

> +			netdev_dbg(dev, "missing tunnel sock");

> +			return -EOPNOTSUPP;

> +		}

> +

> +		pctx = &md_pctx;

> +		memset(pctx, 0, sizeof(*pctx));

> +		pctx->sk = gtp->sk1u;

> +		pctx->gtp_version = GTP_V1;

> +		pctx->u.v1.o_tei = ntohl(tunnel_id_to_key32(info->key.tun_id));

> +		pctx->peer_addr_ip4.s_addr = info->key.u.ipv4.dst;

> +

> +		saddr = info->key.u.ipv4.src;

> +		tos = info->key.tos;

> +

> +		if (info->options_len != 0) {

> +			if (info->key.tun_flags & TUNNEL_GTPU_OPT) {

> +				opts = ip_tunnel_info_opts(info);

> +			} else {

> +				netdev_dbg(dev, "missing tunnel metadata for control pkt");

> +				return -EOPNOTSUPP;

> +			}

> +		}

> +		netdev_dbg(dev, "flow-based GTP1U encap: tunnel id %d\n",

> +			   pctx->u.v1.o_tei);

> +	} else {

> +		struct iphdr *iph;

> +

> +		if (ntohs(skb->protocol) != ETH_P_IP)

> +			return -EOPNOTSUPP;

> +

> +		iph = ip_hdr(skb);

> +

> +		/* Read the IP destination address and resolve the PDP context.

> +		 * Prepend PDP header with TEI/TID from PDP ctx.

> +		 */

> +		if (gtp->role == GTP_ROLE_SGSN)

> +			pctx = ipv4_pdp_find(gtp, iph->saddr);

> +		else

> +			pctx = ipv4_pdp_find(gtp, iph->daddr);

> +

> +		if (!pctx) {

> +			netdev_dbg(dev, "no PDP ctx found for %pI4, skip\n",

> +				   &iph->daddr);

> +			return -ENOENT;

> +		}

> +		netdev_dbg(dev, "found PDP context %p\n", pctx);

> +

> +		saddr = inet_sk(pctx->sk)->inet_saddr;

> +		tos = iph->tos;

> +		netdev_dbg(dev, "gtp -> IP src: %pI4 dst: %pI4\n",

> +			   &iph->saddr, &iph->daddr);

>   	}

> -	netdev_dbg(dev, "found PDP context %p\n", pctx);

>   

>   	rt = gtp_get_v4_rt(skb, dev, pctx, &saddr);

>   	if (IS_ERR(rt)) {

> @@ -691,7 +815,7 @@ static int gtp_xmit_ip4(struct sk_buff *skb, struct net_device *dev)

>   

>   	udp_tunnel_xmit_skb(rt, pctx->sk, skb,

>   			    saddr, pctx->peer_addr_ip4.s_addr,

> -			    iph->tos,

> +			    tos,

>   			    ip4_dst_hoplimit(&rt->dst),

>   			    0,

>   			    port, port,

> diff --git a/include/uapi/linux/gtp.h b/include/uapi/linux/gtp.h

> index 79f9191bbb24..62aff78b7c56 100644

> --- a/include/uapi/linux/gtp.h

> +++ b/include/uapi/linux/gtp.h

> @@ -2,6 +2,8 @@

>   #ifndef _UAPI_LINUX_GTP_H_

>   #define _UAPI_LINUX_GTP_H_

>   

> +#include <linux/types.h>

> +

>   #define GTP_GENL_MCGRP_NAME	"gtp"

>   

>   enum gtp_genl_cmds {

> @@ -34,4 +36,14 @@ enum gtp_attrs {

>   };

>   #define GTPA_MAX (__GTPA_MAX + 1)

>   

> +enum {

> +	GTP_METADATA_V1

> +};

> +

> +struct gtpu_metadata {

> +	__u8    ver;

> +	__u8    flags;

> +	__u8    type;

> +};

> +

>   #endif /* _UAPI_LINUX_GTP_H_ */

> diff --git a/include/uapi/linux/if_tunnel.h b/include/uapi/linux/if_tunnel.h

> index 7d9105533c7b..802da679fab1 100644

> --- a/include/uapi/linux/if_tunnel.h

> +++ b/include/uapi/linux/if_tunnel.h

> @@ -176,6 +176,7 @@ enum {

>   #define TUNNEL_VXLAN_OPT	__cpu_to_be16(0x1000)

>   #define TUNNEL_NOCACHE		__cpu_to_be16(0x2000)

>   #define TUNNEL_ERSPAN_OPT	__cpu_to_be16(0x4000)

> +#define TUNNEL_GTPU_OPT		__cpu_to_be16(0x8000)

>   

>   #define TUNNEL_OPTIONS_PRESENT \

>   		(TUNNEL_GENEVE_OPT | TUNNEL_VXLAN_OPT | TUNNEL_ERSPAN_OPT)

> diff --git a/tools/include/uapi/linux/if_link.h b/tools/include/uapi/linux/if_link.h

> index d208b2af697f..28d649bda686 100644

> --- a/tools/include/uapi/linux/if_link.h

> +++ b/tools/include/uapi/linux/if_link.h

> @@ -617,6 +617,7 @@ enum {

>   	IFLA_GTP_FD1,

>   	IFLA_GTP_PDP_HASHSIZE,

>   	IFLA_GTP_ROLE,

> +	IFLA_GTP_COLLECT_METADATA,

>   	__IFLA_GTP_MAX,

>   };

>   #define IFLA_GTP_MAX (__IFLA_GTP_MAX - 1)

>
Harald Welte Jan. 25, 2021, 5:41 p.m. UTC | #7
Hi Jonas,

thanks for your detailed analysis and review of the changes.  To me, they
once again show that the original patch was merged too quickly, without
a detailed review by people with strong GTP background.

On Sun, Jan 24, 2021 at 03:21:21PM +0100, Jonas Bonn wrote:
> struct gtpu_metadata {

>         __u8    ver;

>         __u8    flags;

>         __u8    type;

> };

> 

> Here ver is the version of the metadata structure itself, which is fine.

> 'flags' corresponds to the 3 flag bits of GTP header's first byte:  E, S,

> and PN.

> 'type' corresponds to the 'message type' field of the GTP header.


One more comment on the 'type': Of how much use is it?  After all, the
GTP-U kernel driver only handles a single message type at all (G-PDU /
255 - the only message type that encapsulates user IP data), while all
other message types are always processed in userland via the UDP socket.

Side-note: 3GPP TS 29.060 lists 5 other message types that can happen in
GTP-U:
* Echo Request
* Echo Response
* Error Indication
* Supported Extension Headers Notification
* End Marker

It would be interesting to understand how the new flow-based tunnel would
treat those, if they were received.

> The 'control header' (strange name) example below allows the flags to be

> set; however, setting these flags alone is insufficient because each one

> indicates the presence of additional fields in the header and there's

> nothing in the code to account for that.


Full ACK from my side here.  Setting arbitrary bits in the GTP flags without
then actually encoding the required additional bits that those flags require
will produce broken packets.  IMHO, the GTP driver should never do that.

-- 
- Harald Welte <laforge@gnumonks.org>           http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
                                                  (ETSI EN 300 175-7 Ch. A6)
Pravin Shelar Jan. 28, 2021, 9:29 p.m. UTC | #8
On Sun, Jan 24, 2021 at 6:22 AM Jonas Bonn <jonas@norrbonn.se> wrote:
>

> Hi Pravin,

>

> So, this whole GTP metadata thing has me a bit confused.

>

> You've defined a metadata structure like this:

>

> struct gtpu_metadata {

>          __u8    ver;

>          __u8    flags;

>          __u8    type;

> };

>

> Here ver is the version of the metadata structure itself, which is fine.

> 'flags' corresponds to the 3 flag bits of GTP header's first byte:  E,

> S, and PN.

> 'type' corresponds to the 'message type' field of the GTP header.

>

> The 'control header' (strange name) example below allows the flags to be

> set; however, setting these flags alone is insufficient because each one

> indicates the presence of additional fields in the header and there's

> nothing in the code to account for that.

>

> If E is set, extension headers would need to be added.

> If S is set, a sequence number field would need to be added.

> If PN is set, a PDU-number header would need to be added.

>

> It's not clear to me who sets up this metadata in the first place.  Is

> that where OVS or eBPF come in?  Can you give some pointers to how this

> works?

>


Receive path: LWT extracts tunnel metadata into the tunnel-metadata
struct.  This object has the 5-tuple info from the outer header and the
tunnel key.  When an extension header is present there is no way to store
that info in the standard tunnel-metadata object; that is where the
optional section of the tunnel metadata comes into play.
As you can see, the packet data from the GTP header onwards is still
pushed to the device, so consumers of LWT can look at the tunnel metadata
and make sense of the inner packet that is received on the device.
OVS does exactly the same: when it receives a GTP packet with optional
metadata, it looks at the flags and parses the inner packet and extension
header accordingly.

Xmit path: the metadata is set by the LWT tunnel user, i.e. OVS or eBPF
code.  It needs to set the metadata in the tunnel dst along with the
5-tuple data and the tunnel ID.  This is how it can steer the packet into
the right tunnel using a single tunnel net device.
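
As an illustration only (not part of this series): a TC egress program
attached to the collect_md GTP device could set the metadata along these
lines.  The gtpu_metadata layout is the one from this patch; the TEID and
peer address are made up, and whether the generic bpf_skb_set_tunnel_opt()
marks the right TUNNEL_GTPU_OPT flag for this driver is an open question.

	#include <linux/bpf.h>
	#include <linux/pkt_cls.h>
	#include <bpf/bpf_helpers.h>

	struct gtpu_metadata {
		__u8 ver;
		__u8 flags;
		__u8 type;
	};

	SEC("tc")
	int gtp_set_md(struct __sk_buff *skb)
	{
		struct bpf_tunnel_key key = {};
		struct gtpu_metadata md = {};

		key.tunnel_id   = 200;          /* outgoing TEID */
		key.remote_ipv4 = 0xac630001;   /* 172.99.0.1, host byte order */
		key.tunnel_ttl  = 64;

		if (bpf_skb_set_tunnel_key(skb, &key, sizeof(key),
					   BPF_F_ZERO_CSUM_TX))
			return TC_ACT_SHOT;

		/* Only needed for non-G-PDU messages, e.g. an end marker. */
		md.ver  = 0;                    /* GTP_METADATA_V1 */
		md.type = 0xfe;                 /* end marker */
		if (bpf_skb_set_tunnel_opt(skb, &md, sizeof(md)))
			return TC_ACT_SHOT;

		return TC_ACT_OK;
	}

	char _license[] SEC("license") = "GPL";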

> Couple of comments below....

>

> On 23/01/2021 20:59, Jonas Bonn wrote:

> > From: Pravin B Shelar <pbshelar@fb.com>

> >

> > Please explain how this patch actually works... creation of the control

> > header makes sense, but I don't understand how sending of a

> > control header is actually triggered.

> >

> > Signed-off-by: Jonas Bonn <jonas@norrbonn.se>

> > ---

> >   drivers/net/gtp.c | 43 ++++++++++++++++++++++++++++++++++++++++++-

> >   1 file changed, 42 insertions(+), 1 deletion(-)

> >

> > diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c

> > index 668ed8a4836e..bbce2671de2d 100644

> > --- a/drivers/net/gtp.c

> > +++ b/drivers/net/gtp.c

> > @@ -683,6 +683,38 @@ static void gtp_push_header(struct sk_buff *skb, struct pdp_ctx *pctx,

> >       }

> >   }

> >

> > +static inline int gtp1_push_control_header(struct sk_buff *skb,

>

> I'm not enamored with the name 'control header' because it makes sound

> like this is some GTP-C thing.  The GTP module is really only about

> GTP-U and the function itself just sets up a GTP-U header.

>

Sure, let's call it ext_hdr.

>

> > +                                        struct pdp_ctx *pctx,

> > +                                        struct gtpu_metadata *opts,

> > +                                        struct net_device *dev)

> > +{

> > +     struct gtp1_header *gtp1c;

> > +     int payload_len;

> > +

> > +     if (opts->ver != GTP_METADATA_V1)

> > +             return -ENOENT;

> > +

> > +     if (opts->type == 0xFE) {

> > +             /* for end marker ignore skb data. */

> > +             netdev_dbg(dev, "xmit pkt with null data");

> > +             pskb_trim(skb, 0);

> > +     }

> > +     if (skb_cow_head(skb, sizeof(*gtp1c)) < 0)

> > +             return -ENOMEM;

> > +

> > +     payload_len = skb->len;

> > +

> > +     gtp1c = skb_push(skb, sizeof(*gtp1c));

> > +

> > +     gtp1c->flags    = opts->flags;

> > +     gtp1c->type     = opts->type;

> > +     gtp1c->length   = htons(payload_len);

> > +     gtp1c->tid      = htonl(pctx->u.v1.o_tei);

> > +     netdev_dbg(dev, "xmit control pkt: ver %d flags %x type %x pkt len %d tid %x",

> > +                opts->ver, opts->flags, opts->type, skb->len, pctx->u.v1.o_tei);

> > +     return 0;

> > +}

>

> There's nothing really special about that above function aside from the

> facts that it takes 'opts' as an argument.  Can't we just merge this

> with the regular 'gtp_push_header' function?  Do you have plans for this

> beyond what's here that would complicated by doing so?

>


Yes, we already have a use case for handling the GTP PDU session container
extension header for a 5G UPF endpoint.



> /Jonas

>

>

> > +

> >   static int gtp_xmit_ip4(struct sk_buff *skb, struct net_device *dev)

> >   {

> >       struct gtp_dev *gtp = netdev_priv(dev);

> > @@ -807,7 +839,16 @@ static int gtp_xmit_ip4(struct sk_buff *skb, struct net_device *dev)

> >

> >       skb_set_inner_protocol(skb, skb->protocol);

> >

> > -     gtp_push_header(skb, pctx, &port);

> > +     if (unlikely(opts)) {

> > +             port = htons(GTP1U_PORT);

> > +             r = gtp1_push_control_header(skb, pctx, opts, dev);

> > +             if (r) {

> > +                     netdev_info(dev, "cntr pkt error %d", r);

> > +                     goto err_rt;

> > +             }

> > +     } else {

> > +             gtp_push_header(skb, pctx, &port);

> > +     }

> >

> >       iph = ip_hdr(skb);

> >       netdev_dbg(dev, "gtp -> IP src: %pI4 dst: %pI4\n",

> >
Pravin Shelar Jan. 28, 2021, 9:29 p.m. UTC | #9
On Mon, Jan 25, 2021 at 9:52 AM Harald Welte <laforge@gnumonks.org> wrote:
>

> Hi Jonas,

>


> On Sun, Jan 24, 2021 at 03:21:21PM +0100, Jonas Bonn wrote:

> > struct gtpu_metadata {

> >         __u8    ver;

> >         __u8    flags;

> >         __u8    type;

> > };

> >

> > Here ver is the version of the metadata structure itself, which is fine.

> > 'flags' corresponds to the 3 flag bits of GTP header's first byte:  E, S,

> > and PN.

> > 'type' corresponds to the 'message type' field of the GTP header.

>

> One more comment on the 'type': Of how much use is it?  After all, the

> GTP-U kernel driver only handles a single message type at all (G-PDU /

> 255 - the only message type that encapsulates user IP data), while all

> other message types are always processed in userland via the UDP socket.

>

There is NO userland UDP socket for the LWT tunnel.  All UDP traffic is
terminated in kernel space.  Userland only sets rules over the LWT tunnel
device to handle traffic.  This is how OVS/eBPF handles other UDP-based
LWT tunnel devices.


> Side-note: 3GPP TS 29.060 lists 5 other message types that can happen in

> GTP-U:

> * Echo Request

> * Echo Response

> * Error Indication

> * Supported Extension Headers Notification

> * End Marker

>

> It would be interesting to understand how the new flow-based tunnel would

> treat those, if those

>

Current code does handle Echo Request, Response and End marker.
Pravin Shelar Jan. 30, 2021, 6:59 a.m. UTC | #10
On Fri, Jan 29, 2021 at 6:08 AM Jonas Bonn <jonas@norrbonn.se> wrote:
>

> Hi Pravin,

>

> On 28/01/2021 22:29, Pravin Shelar wrote:

> > On Sun, Jan 24, 2021 at 6:22 AM Jonas Bonn <jonas@norrbonn.se> wrote:

> >>

> >> Hi Pravin,

> >>

> >> So, this whole GTP metadata thing has me a bit confused.

> >>

> >> You've defined a metadata structure like this:

> >>

> >> struct gtpu_metadata {

> >>           __u8    ver;

> >>           __u8    flags;

> >>           __u8    type;

> >> };

> >>

> >> Here ver is the version of the metadata structure itself, which is fine.

> >> 'flags' corresponds to the 3 flag bits of GTP header's first byte:  E,

> >> S, and PN.

> >> 'type' corresponds to the 'message type' field of the GTP header.

> >>

> >> The 'control header' (strange name) example below allows the flags to be

> >> set; however, setting these flags alone is insufficient because each one

> >> indicates the presence of additional fields in the header and there's

> >> nothing in the code to account for that.

> >>

> >> If E is set, extension headers would need to be added.

> >> If S is set, a sequence number field would need to be added.

> >> If PN is set, a PDU-number header would need to be added.

> >>

> >> It's not clear to me who sets up this metadata in the first place.  Is

> >> that where OVS or eBPF come in?  Can you give some pointers to how this

> >> works?

> >>

> >

> > Receive path: LWT extracts tunnel metadata into tunnel-metadata

> > struct. This object has 5-tuple info from outer header and tunnel key.

> > When there is presence of extension header there is no way to store

> > the info standard tunnel-metadata object. That is when the optional

> > section of tunnel-metadata comes in the play.

> > As you can see the packet data from GTP header onwards is still pushed

> > to the device, so consumers of LWT can look at tunnel-metadata and

> > make sense of the inner packet that is received on the device.

> > OVS does exactly the same. When it receives a GTP packet with optional

> > metadata, it looks at flags and parses the inner packet and extension

> > header accordingly.

>

> Ah, ok, I see.  So you are pulling _half_ of the GTP header off the

> packet but leaving the optional GTP extension headers in place if they

> exist.  So what OVS receives is a packet with metadata indicating

> whether or not it begins with these extension headers or whether it

> begins with an IP header.

>

> So OVS might need to begin by pulling parts of the packet in order to

> get to the inner IP packet.  In that case, why don't you just leave the

> _entire_ GTP header in place and let OVS work from that?  The header

> contains exactly the data you've copied to the metadata struct PLUS it

> has the incoming TEID value that you really should be validating inner

> IP against.

>


Following are the reasons for extracting the header and populating metadata:
1. That is the design used by other tunneling protocol
implementations for handling optional headers.  We need to have a
consistent model across all tunnel devices for upper layers.
2. The GTP module is already parsing the UDP and GTP headers.  It would be
wasteful to repeat the same process in upper layers.
3. The TEID is part of the tunnel metadata and is already used for
validating inner packets.  But the TEID alone is not enough to handle
packets with extension headers.

I am fine with processing the entire header in GTP, but in the case of an
'end marker' there is no data left after pulling the entire GTP header.
That's why I took this path.

> Also, what happens if this is used WITHOUT OVS/eBPF... just a route with

> 'encap' set.  In that case, nothing will be there to pull the extension

> headers off the packet.

>


One way would be for the user to use encap to steer "special" packets to
a netdev that can handle such packets; that could be a userspace process
or an eBPF program.

> >

> > xmit path: it is set by LWT tunnel user, OVS or eBPF code. it needs to

> > set the metadata in tunnel dst along with the 5-tuple data and tunel

> > ID. This is how it can steer the packet at the right tunnel using a

> > single tunnel net device.

>

> Right, that part is fine.  However, for setting extension headers you'd

> need to set the flags in the metadata and the extensions themselves

> prepended to the IP packet meaning your SKB contains a pseudo-GTP packet

> before the GTP module can finish adding the header.  I don't know why

> you wouldn't just add the entire GTP header in one place and be done

> with it... going via the GTP module gets you almost nothing at this point.

>

The UDP socket is owned by the GTP module, so there is no other way to
inject a packet into the tunnel than sending it over the GTP module.
Plus, it looks like this would break the standard model of
abstracting the tunnel implementation in the OVS or bridge code: we would
need special code for GTP packets to parse the extra outer headers when
OVS extracts the flow from the inner packet.
I am open to handling the optional headers completely in the GTP module.

>

> >

> >> Couple of comments below....

> >>

> >> On 23/01/2021 20:59, Jonas Bonn wrote:

> >>> From: Pravin B Shelar <pbshelar@fb.com>

> >>>

> >>> Please explain how this patch actually works... creation of the control

> >>> header makes sense, but I don't understand how sending of a

> >>> control header is actually triggered.

> >>>

> >>> Signed-off-by: Jonas Bonn <jonas@norrbonn.se>

> >>> ---

> >>>    drivers/net/gtp.c | 43 ++++++++++++++++++++++++++++++++++++++++++-

> >>>    1 file changed, 42 insertions(+), 1 deletion(-)

> >>>

> >>> diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c

> >>> index 668ed8a4836e..bbce2671de2d 100644

> >>> --- a/drivers/net/gtp.c

> >>> +++ b/drivers/net/gtp.c

> >>> @@ -683,6 +683,38 @@ static void gtp_push_header(struct sk_buff *skb, struct pdp_ctx *pctx,

> >>>        }

> >>>    }

> >>>

> >>> +static inline int gtp1_push_control_header(struct sk_buff *skb,

> >>

> >> I'm not enamored with the name 'control header' because it makes sound

> >> like this is some GTP-C thing.  The GTP module is really only about

> >> GTP-U and the function itself just sets up a GTP-U header.

> >>

> > sure. lets call ext_hdr.

> >

> >>

> >>> +                                        struct pdp_ctx *pctx,

> >>> +                                        struct gtpu_metadata *opts,

> >>> +                                        struct net_device *dev)

> >>> +{

> >>> +     struct gtp1_header *gtp1c;

> >>> +     int payload_len;

> >>> +

> >>> +     if (opts->ver != GTP_METADATA_V1)

> >>> +             return -ENOENT;

> >>> +

> >>> +     if (opts->type == 0xFE) {

> >>> +             /* for end marker ignore skb data. */

> >>> +             netdev_dbg(dev, "xmit pkt with null data");

> >>> +             pskb_trim(skb, 0);

> >>> +     }

> >>> +     if (skb_cow_head(skb, sizeof(*gtp1c)) < 0)

> >>> +             return -ENOMEM;

> >>> +

> >>> +     payload_len = skb->len;

> >>> +

> >>> +     gtp1c = skb_push(skb, sizeof(*gtp1c));

> >>> +

> >>> +     gtp1c->flags    = opts->flags;

> >>> +     gtp1c->type     = opts->type;

> >>> +     gtp1c->length   = htons(payload_len);

> >>> +     gtp1c->tid      = htonl(pctx->u.v1.o_tei);

> >>> +     netdev_dbg(dev, "xmit control pkt: ver %d flags %x type %x pkt len %d tid %x",

> >>> +                opts->ver, opts->flags, opts->type, skb->len, pctx->u.v1.o_tei);

> >>> +     return 0;

> >>> +}

> >>

> >> There's nothing really special about that above function aside from the

> >> facts that it takes 'opts' as an argument.  Can't we just merge this

> >> with the regular 'gtp_push_header' function?  Do you have plans for this

> >> beyond what's here that would complicated by doing so?

> >>

> >

> > Yes, we already have usecase for handling GTP PDU session container

> > related extension header for 5G UPF endpoitnt.

>

> I'll venture a guess that you're referring to QoS indications.  I, too,

> have been wondering whether or not these can be made to be compatible

> with the GTP module... I'm not sure, at this point.

>

Right, it's the QoS-related headers.
We can use the same optional metadata field to process this header; we
need to extend the option data structure to pass the PDU session container
header values.
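
Purely as a sketch of what extending the option struct could look like
(field names are made up here; the real layout would follow TS 38.415):

	enum {
		GTP_METADATA_V1,
		GTP_METADATA_V2,	/* hypothetical: adds PDU session container info */
	};

	struct gtpu_metadata_v2 {
		__u8	ver;		/* GTP_METADATA_V2 */
		__u8	flags;
		__u8	type;
		__u8	pdu_type;	/* UL/DL PDU SESSION INFORMATION */
		__u8	qfi;		/* QoS Flow Identifier (6 bits used) */
		__u8	rqi:1,		/* Reflective QoS Indication (DL) */
			ppp:1,		/* Paging Policy Presence (DL) */
			spare:6;
	};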
Jakub Kicinski Jan. 30, 2021, 6:44 p.m. UTC | #11
On Fri, 29 Jan 2021 22:59:06 -0800 Pravin Shelar wrote:
> On Fri, Jan 29, 2021 at 6:08 AM Jonas Bonn <jonas@norrbonn.se> wrote:

> > On 28/01/2021 22:29, Pravin Shelar wrote:  

> > > Receive path: LWT extracts tunnel metadata into tunnel-metadata

> > > struct. This object has 5-tuple info from outer header and tunnel key.

> > > When there is presence of extension header there is no way to store

> > > the info standard tunnel-metadata object. That is when the optional

> > > section of tunnel-metadata comes in the play.

> > > As you can see the packet data from GTP header onwards is still pushed

> > > to the device, so consumers of LWT can look at tunnel-metadata and

> > > make sense of the inner packet that is received on the device.

> > > OVS does exactly the same. When it receives a GTP packet with optional

> > > metadata, it looks at flags and parses the inner packet and extension

> > > header accordingly.  

> >

> > Ah, ok, I see.  So you are pulling _half_ of the GTP header off the

> > packet but leaving the optional GTP extension headers in place if they

> > exist.  So what OVS receives is a packet with metadata indicating

> > whether or not it begins with these extension headers or whether it

> > begins with an IP header.

> >

> > So OVS might need to begin by pulling parts of the packet in order to

> > get to the inner IP packet.  In that case, why don't you just leave the

> > _entire_ GTP header in place and let OVS work from that?  The header

> > contains exactly the data you've copied to the metadata struct PLUS it

> > has the incoming TEID value that you really should be validating inner

> > IP against.

> >  

> 

> Following are the reasons for extracting the header and populating metadata.

> 1. That is the design used by other tunneling protocols

> implementations for handling optional headers. We need to have a

> consistent model across all tunnel devices for upper layers.


Could you clarify with some examples? This does not match intuition, 
I must be missing something.

> 2. GTP module is parsing the UDP and GTP header. It would be wasteful

> to repeat the same process in upper layers.

> 3. TIED is part of tunnel metadata, it is already used to validating

> inner packets. But TIED is not alone to handle packets with extended

> header.

> 

> I am fine with processing the entire header in GTP but in case of 'end

> marker' there is no data left after pulling entire GTP header. Thats

> why I took this path.
Pravin Shelar Jan. 30, 2021, 8:05 p.m. UTC | #12
On Sat, Jan 30, 2021 at 10:44 AM Jakub Kicinski <kuba@kernel.org> wrote:
>

> On Fri, 29 Jan 2021 22:59:06 -0800 Pravin Shelar wrote:

> > On Fri, Jan 29, 2021 at 6:08 AM Jonas Bonn <jonas@norrbonn.se> wrote:

> > > On 28/01/2021 22:29, Pravin Shelar wrote:

> > > > Receive path: LWT extracts tunnel metadata into tunnel-metadata

> > > > struct. This object has 5-tuple info from outer header and tunnel key.

> > > > When there is presence of extension header there is no way to store

> > > > the info standard tunnel-metadata object. That is when the optional

> > > > section of tunnel-metadata comes in the play.

> > > > As you can see the packet data from GTP header onwards is still pushed

> > > > to the device, so consumers of LWT can look at tunnel-metadata and

> > > > make sense of the inner packet that is received on the device.

> > > > OVS does exactly the same. When it receives a GTP packet with optional

> > > > metadata, it looks at flags and parses the inner packet and extension

> > > > header accordingly.

> > >

> > > Ah, ok, I see.  So you are pulling _half_ of the GTP header off the

> > > packet but leaving the optional GTP extension headers in place if they

> > > exist.  So what OVS receives is a packet with metadata indicating

> > > whether or not it begins with these extension headers or whether it

> > > begins with an IP header.

> > >

> > > So OVS might need to begin by pulling parts of the packet in order to

> > > get to the inner IP packet.  In that case, why don't you just leave the

> > > _entire_ GTP header in place and let OVS work from that?  The header

> > > contains exactly the data you've copied to the metadata struct PLUS it

> > > has the incoming TEID value that you really should be validating inner

> > > IP against.

> > >

> >

> > Following are the reasons for extracting the header and populating metadata.

> > 1. That is the design used by other tunneling protocols

> > implementations for handling optional headers. We need to have a

> > consistent model across all tunnel devices for upper layers.

>

> Could you clarify with some examples? This does not match intuition,

> I must be missing something.

>


You can look at geneve_rx() or vxlan_rcv(), which extract optional
headers into ip_tunnel_info opts.
Jakub Kicinski Feb. 1, 2021, 8:44 p.m. UTC | #13
On Sat, 30 Jan 2021 12:05:40 -0800 Pravin Shelar wrote:
> On Sat, Jan 30, 2021 at 10:44 AM Jakub Kicinski <kuba@kernel.org> wrote:

> > On Fri, 29 Jan 2021 22:59:06 -0800 Pravin Shelar wrote:  

> > > On Fri, Jan 29, 2021 at 6:08 AM Jonas Bonn <jonas@norrbonn.se> wrote:  

> > > Following are the reasons for extracting the header and populating metadata.

> > > 1. That is the design used by other tunneling protocols

> > > implementations for handling optional headers. We need to have a

> > > consistent model across all tunnel devices for upper layers.  

> >

> > Could you clarify with some examples? This does not match intuition,

> > I must be missing something.

> 

> You can look at geneve_rx() or vxlan_rcv() that extracts optional

> headers in ip_tunnel_info opts.


Okay, I got confused about what Jonas was inquiring about.  I thought that
the extension headers were not pulled, rather than not parsed.  Copying
them as-is to info->opts is right, thanks!
Jonas Bonn Feb. 2, 2021, 5:24 a.m. UTC | #14
Hi Jakub,

On 01/02/2021 21:44, Jakub Kicinski wrote:
> On Sat, 30 Jan 2021 12:05:40 -0800 Pravin Shelar wrote:

>> On Sat, Jan 30, 2021 at 10:44 AM Jakub Kicinski <kuba@kernel.org> wrote:

>>> On Fri, 29 Jan 2021 22:59:06 -0800 Pravin Shelar wrote:

>>>> On Fri, Jan 29, 2021 at 6:08 AM Jonas Bonn <jonas@norrbonn.se> wrote:

>>>> Following are the reasons for extracting the header and populating metadata.

>>>> 1. That is the design used by other tunneling protocols

>>>> implementations for handling optional headers. We need to have a

>>>> consistent model across all tunnel devices for upper layers.

>>>

>>> Could you clarify with some examples? This does not match intuition,

>>> I must be missing something.

>>

>> You can look at geneve_rx() or vxlan_rcv() that extracts optional

>> headers in ip_tunnel_info opts.

> 

> Okay, I got confused what Jonas was inquiring about. I thought that the

> extension headers were not pulled, rather than not parsed. Copying them

> as-is to info->opts is right, thanks!

> 


No, you're not confused.  The extension headers are not being pulled in 
the current patchset.

Incoming packet:

---------------------------------------------------------------------
| flags | type | len | TEID | N-PDU | SEQ | Ext | EXT.Hdr | IP | ...
---------------------------------------------------------------------
<--------- GTP header ------<<Optional GTP elements>>-----><- Pkt --->

The "collect metadata" path of the patchset copies 'flags' and 'type' to 
info->opts, but leaves the following:

-----------------------------------------
| N-PDU | SEQ | Ext | EXT.Hdr | IP | ...
-----------------------------------------
<--------- GTP header -------><- Pkt --->

So it's leaving _half_ the header and making it a requirement that there 
be further intelligence down the line that can handle this.  This is far 
from intuitive.
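
Just to illustrate what I mean, a rough sketch of walking the whole
optional region follows (this is not what the patch does today;
GTP1_F_MASK and gtp1_header come from the existing driver, the rest is
made up and untested):

/* Walk the optional field block and any chained extension headers to
 * find how many bytes follow the mandatory 8-byte GTPv1 header, so the
 * caller can pull the whole thing and leave skb->data at the inner IP
 * packet.
 */
static int gtp1_opt_hdrlen(struct sk_buff *skb, unsigned int hdrlen)
{
	struct gtp1_header *gtp1;
	unsigned int optlen;
	u8 next_ext;

	gtp1 = (struct gtp1_header *)(skb->data + sizeof(struct udphdr));
	if (!(gtp1->flags & GTP1_F_MASK))
		return 0;

	optlen = 4;			/* SEQ(2) + N-PDU(1) + next-ext(1) */
	if (!pskb_may_pull(skb, hdrlen + optlen))
		return -1;
	next_ext = skb->data[hdrlen + optlen - 1];

	while (next_ext) {
		unsigned int ext_len;

		if (!pskb_may_pull(skb, hdrlen + optlen + 1))
			return -1;
		/* first byte of each extension header is its length in
		 * units of 4 octets, last byte is the next header type
		 */
		ext_len = skb->data[hdrlen + optlen] * 4;
		if (!ext_len || !pskb_may_pull(skb, hdrlen + optlen + ext_len))
			return -1;
		next_ext = skb->data[hdrlen + optlen + ext_len - 1];
		optlen += ext_len;
	}
	return optlen;
}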

/Jonas
Pravin Shelar Feb. 2, 2021, 6:56 a.m. UTC | #15
On Mon, Feb 1, 2021 at 9:24 PM Jonas Bonn <jonas@norrbonn.se> wrote:
>

> Hi Jakub,

>

> On 01/02/2021 21:44, Jakub Kicinski wrote:

> > On Sat, 30 Jan 2021 12:05:40 -0800 Pravin Shelar wrote:

> >> On Sat, Jan 30, 2021 at 10:44 AM Jakub Kicinski <kuba@kernel.org> wrote:

> >>> On Fri, 29 Jan 2021 22:59:06 -0800 Pravin Shelar wrote:

> >>>> On Fri, Jan 29, 2021 at 6:08 AM Jonas Bonn <jonas@norrbonn.se> wrote:

> >>>> Following are the reasons for extracting the header and populating metadata.

> >>>> 1. That is the design used by other tunneling protocols

> >>>> implementations for handling optional headers. We need to have a

> >>>> consistent model across all tunnel devices for upper layers.

> >>>

> >>> Could you clarify with some examples? This does not match intuition,

> >>> I must be missing something.

> >>

> >> You can look at geneve_rx() or vxlan_rcv() that extracts optional

> >> headers in ip_tunnel_info opts.

> >

> > Okay, I got confused what Jonas was inquiring about. I thought that the

> > extension headers were not pulled, rather than not parsed. Copying them

> > as-is to info->opts is right, thanks!

> >

>

> No, you're not confused.  The extension headers are not being pulled in

> the current patchset.

>

> Incoming packet:

>

> ---------------------------------------------------------------------

> | flags | type | len | TEID | N-PDU | SEQ | Ext | EXT.Hdr | IP | ...

> ---------------------------------------------------------------------

> <--------- GTP header ------<<Optional GTP elements>>-----><- Pkt --->

>

> The "collect metadata" path of the patchset copies 'flags' and 'type' to

> info->opts, but leaves the following:

>

> -----------------------------------------

> | N-PDU | SEQ | Ext | EXT.Hdr | IP | ...

> -----------------------------------------

> <--------- GTP header -------><- Pkt --->

>

> So it's leaving _half_ the header and making it a requirement that there

> be further intelligence down the line that can handle this.  This is far

> from intuitive.

>


The patch supports Echo, Echo Response and End Marker packets.
The issue with pulling the entire extension header is that it would result
in a zero-length skb, and such packets cannot be passed on to the upper
layer. That is the reason I kept the extension header in the skb and added
an indication in the tunnel metadata that it is not an IP packet, so that
the upper layer can process the packet.
An IP packet without an extension header would be handled in the fast path
without any special handling.

Obviously, in the case of the PDU Session Container extension header, the
GTP driver would need to process the entire extension header in the module.
This way we can handle these user data packets in the fast path.
I can make changes to use the same method for all extension headers if
needed.
Jonas Bonn Feb. 2, 2021, 8:03 a.m. UTC | #16
On 02/02/2021 07:56, Pravin Shelar wrote:
> On Mon, Feb 1, 2021 at 9:24 PM Jonas Bonn <jonas@norrbonn.se> wrote:

>>

>> Hi Jakub,

>>

>> On 01/02/2021 21:44, Jakub Kicinski wrote:

>>> On Sat, 30 Jan 2021 12:05:40 -0800 Pravin Shelar wrote:

>>>> On Sat, Jan 30, 2021 at 10:44 AM Jakub Kicinski <kuba@kernel.org> wrote:

>>>>> On Fri, 29 Jan 2021 22:59:06 -0800 Pravin Shelar wrote:

>>>>>> On Fri, Jan 29, 2021 at 6:08 AM Jonas Bonn <jonas@norrbonn.se> wrote:

>>>>>> Following are the reasons for extracting the header and populating metadata.

>>>>>> 1. That is the design used by other tunneling protocols

>>>>>> implementations for handling optional headers. We need to have a

>>>>>> consistent model across all tunnel devices for upper layers.

>>>>>

>>>>> Could you clarify with some examples? This does not match intuition,

>>>>> I must be missing something.

>>>>

>>>> You can look at geneve_rx() or vxlan_rcv() that extracts optional

>>>> headers in ip_tunnel_info opts.

>>>

>>> Okay, I got confused what Jonas was inquiring about. I thought that the

>>> extension headers were not pulled, rather than not parsed. Copying them

>>> as-is to info->opts is right, thanks!

>>>

>>

>> No, you're not confused.  The extension headers are not being pulled in

>> the current patchset.

>>

>> Incoming packet:

>>

>> ---------------------------------------------------------------------

>> | flags | type | len | TEID | N-PDU | SEQ | Ext | EXT.Hdr | IP | ...

>> ---------------------------------------------------------------------

>> <--------- GTP header ------<<Optional GTP elements>>-----><- Pkt --->

>>

>> The "collect metadata" path of the patchset copies 'flags' and 'type' to

>> info->opts, but leaves the following:

>>

>> -----------------------------------------

>> | N-PDU | SEQ | Ext | EXT.Hdr | IP | ...

>> -----------------------------------------

>> <--------- GTP header -------><- Pkt --->

>>

>> So it's leaving _half_ the header and making it a requirement that there

>> be further intelligence down the line that can handle this.  This is far

>> from intuitive.

>>

> 

> The patch supports Echo, Echo response and End marker packet.

> Issue with pulling the entire extension header is that it would result

> in zero length skb, such packets can not be passed on to the upper

> layer. That is the reason I kept the extension header in skb and added

> indication in tunnel metadata that it is not a IP packet. so that

> upper layer can process the packet.

> IP packet without an extension header would be handled in a fast path

> without any special handling.

> 

> Obviously In case of PDU session container extension header GTP driver

> would need to process the entire extension header in the module. This

> way we can handle these user data packets in fastpath.

> I can make changes to use the same method for all extension headers if needed.

> 


The most disturbing bit is the fact that the upper layer needs to 
understand that part of the header info is in info->opts whereas the 
remainder is on the SKB itself.  If it is going to access the SKB 
anyway, why not just leave the entire GTP header in place and let the 
upper layer just get all the information from there?  What's the 
advantage of info->opts in this case?

Normally, the gtp module extracts T-PDUs from the GTP packet and passes 
them on (after validating their IP address) to the network stack.  For 
_everything else_, it just passes them along to the socket for handling 
elsewhere.

It sounds like you are trying to do exactly the same thing:  extract 
T-PDUs and inject them into the network stack, and pass everything 
else on to another handler.

So what is different in your case from the normal case?
- there's metadata on the packet... can't we detect this and set the 
tunnel ID from the TEID in that case?  Or can't we just always have 
metadata on the packet?
- the upper layer handler is in kernel space instead of userspace; but 
they are doing pretty much the same thing, right?  why does the kernel 
space variant need something (info->opts) that userspace can get by without?

It would be seriously good to see a _real_ example of how you intend to 
use this.  Isn't the PDP context mechanism already sufficient to do all 
of the above?  What's missing?

ip route 192.168.99.0/24 encap gtp id 100 dst 172.99.0.2 dev gtp1

is roughly equivalent to:

gtp-tunnel add gtp1 v1 [LOCAL_TEID] 100 172.99.0.2 [UE_IP]

/Jonas
Pravin Shelar Feb. 2, 2021, 10:54 p.m. UTC | #17
On Tue, Feb 2, 2021 at 12:03 AM Jonas Bonn <jonas@norrbonn.se> wrote:
>

>

>

> On 02/02/2021 07:56, Pravin Shelar wrote:

> > On Mon, Feb 1, 2021 at 9:24 PM Jonas Bonn <jonas@norrbonn.se> wrote:

> >>

> >> Hi Jakub,

> >>

> >> On 01/02/2021 21:44, Jakub Kicinski wrote:

> >>> On Sat, 30 Jan 2021 12:05:40 -0800 Pravin Shelar wrote:

> >>>> On Sat, Jan 30, 2021 at 10:44 AM Jakub Kicinski <kuba@kernel.org> wrote:

> >>>>> On Fri, 29 Jan 2021 22:59:06 -0800 Pravin Shelar wrote:

> >>>>>> On Fri, Jan 29, 2021 at 6:08 AM Jonas Bonn <jonas@norrbonn.se> wrote:

> >>>>>> Following are the reasons for extracting the header and populating metadata.

> >>>>>> 1. That is the design used by other tunneling protocols

> >>>>>> implementations for handling optional headers. We need to have a

> >>>>>> consistent model across all tunnel devices for upper layers.

> >>>>>

> >>>>> Could you clarify with some examples? This does not match intuition,

> >>>>> I must be missing something.

> >>>>

> >>>> You can look at geneve_rx() or vxlan_rcv() that extracts optional

> >>>> headers in ip_tunnel_info opts.

> >>>

> >>> Okay, I got confused what Jonas was inquiring about. I thought that the

> >>> extension headers were not pulled, rather than not parsed. Copying them

> >>> as-is to info->opts is right, thanks!

> >>>

> >>

> >> No, you're not confused.  The extension headers are not being pulled in

> >> the current patchset.

> >>

> >> Incoming packet:

> >>

> >> ---------------------------------------------------------------------

> >> | flags | type | len | TEID | N-PDU | SEQ | Ext | EXT.Hdr | IP | ...

> >> ---------------------------------------------------------------------

> >> <--------- GTP header ------<<Optional GTP elements>>-----><- Pkt --->

> >>

> >> The "collect metadata" path of the patchset copies 'flags' and 'type' to

> >> info->opts, but leaves the following:

> >>

> >> -----------------------------------------

> >> | N-PDU | SEQ | Ext | EXT.Hdr | IP | ...

> >> -----------------------------------------

> >> <--------- GTP header -------><- Pkt --->

> >>

> >> So it's leaving _half_ the header and making it a requirement that there

> >> be further intelligence down the line that can handle this.  This is far

> >> from intuitive.

> >>

> >

> > The patch supports Echo, Echo response and End marker packet.

> > Issue with pulling the entire extension header is that it would result

> > in zero length skb, such packets can not be passed on to the upper

> > layer. That is the reason I kept the extension header in skb and added

> > indication in tunnel metadata that it is not a IP packet. so that

> > upper layer can process the packet.

> > IP packet without an extension header would be handled in a fast path

> > without any special handling.

> >

> > Obviously In case of PDU session container extension header GTP driver

> > would need to process the entire extension header in the module. This

> > way we can handle these user data packets in fastpath.

> > I can make changes to use the same method for all extension headers if needed.

> >

>

> The most disturbing bit is the fact that the upper layer needs to

> understand that part of the header info is in info->opts whereas the

> remainder is on the SKB itself.  If it is going to access the SKB

> anyway, why not just leave the entire GTP header in place and let the

> upper layer just get all the information from there?  What's the

> advantage of info->opts in this case?

>

> Normally, the gtp module extracts T-PDU's from the GTP packet and passes

> them on (after validating their IP address) to the network stack.  For

> _everything else_, it just passes them along the socket for handling

> elsewhere.

>

> It sounds like you are trying to do exactly the same thing:  extract

> T-PDU and inject into network stack for T-PDU's, and pass everything

> else to another handler.

>

> So what is different in your case from the normal case?

> - there's metadata on the packet... can't we detect this and set the

> tunnel ID from the TEID in that case?  Or can't we just always have

> metadata on the packet?

> - the upper layer handler is in kernel space instead of userspace; but

> they are doing pretty much the same thing, right?  why does the kernel

> space variant need something (info->opts) that userspace can get by without?

>

This is how OVS/eBPF abstracts the tunneling implementation: the tunnel
context is represented in the tunnel metadata. I am trying to integrate
the GTP tunnel with the LWT framework. This way we can make use of the
GTP tunnel like any other LWT device.

I am fine with the GTP extension header handling that passes the GTP
packet through as-is when an extension header is present. Can you make
those changes in this series and post a non-RFC patch set?
Jonas Bonn Feb. 6, 2021, 6:04 p.m. UTC | #18
Hi Pravin et al;

TL;DR:  we don't need to introduce an entire collect_md mode to the 
driver; we just need to tweak what we've got so that metadata is 
"always" added on RX and respected on TX; make the userspace socket 
optional and dump GTP packets to netstack if it's not present; and make 
a decision on whether to allow packets into netstack without validating 
their IP address.

On 23/01/2021 20:59, Jonas Bonn wrote:
> From: Pravin B Shelar <pbshelar@fb.com>

> 

> This patch adds support for flow based tunneling, allowing to send and

> receive GTP tunneled packets via the (lightweight) tunnel metadata

> mechanism.  This would allow integration with OVS and eBPF using flow

> based tunneling APIs.

> 

> The mechanism used here is to get the required GTP tunnel parameters

> from the tunnel metadata instead of looking up a pre-configured PDP

> context.  The tunnel metadata contains the necessary information for

> creating the GTP header.



So, I've been investigating this a bit further...

What is being introduced in this patch is a variant of "normal 
functionality" that resembles something called "collect metadata" mode 
in other drivers (GRE, VXLAN, etc).  These other drivers operate in one 
of two modes:  a more-or-less point-to-point mode, or this alternative 
collect metadata variant.  The latter is something that has been bolted 
on to an existing implementation of the former.

For iproute2, a parameter called 'external' is added to the setup of 
links of the above type to switch between the two modes; the 
point-to-point parameters are invalid in 'external' (or 'collect 
metadata') mode.  The name 'external' indicates that the driver is 
externally controlled, meaning that the tunnel information is not 
hardcoded into the device instance itself.
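
(For comparison, the existing convention for, e.g., VXLAN looks like:

  ip link add vxlan0 type vxlan dstport 4789 external

and an equivalent 'external' keyword for a GTP link is exactly what is
missing today.)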

The GTP driver, however, has never been a point-to-point device.  It is 
already 'externally controlled' in the sense that tunnels can be added 
and removed at any time.  Adding this 'external' mode doesn't 
immediately make sense because that's roughly what we already have.

Looking into how ip_tunnel_collect_metadata() works, it looks to me like 
it's always true if there's _ANY_ routing rule in the system with 
'tunnel ID' set.  Is that correct?  I'll assume it is in order to 
continue my thoughts.

So, with that, either the system is collecting SKB metadata or it isn't. 
  If it is, it seems reasonable that we populate incoming packets with 
the tunnel ID as in this patch.  That said, I'm not convinced that we 
should bypass the PDP context mechanism entirely... there should still 
be a PDP context set up or we drop the packet for security reasons.

For outgoing packets, it seems reasonable that the remote TEID may come 
from metadata OR a PDP context.  That would allow some routing 
alternatives to what we have right now which just does a PDP context 
lookup based on the destination/source address of the packet.  It would 
be nice for OVS/BPF to be able to direct a packet to a remote GTP 
endpoint by providing that endpoint/TEID via a metadata structure.

So we end up with, roughly:

On RX:
i) check TEID in GTP header
ii) lookup PDP context
iii) validate IP of encapsulated packet
iv) if (ip_tunnel_collect_metadata()) { /* add tun info */ }
v) decapsulate and pass to network stack

On TX:
i) if SKB has metadata, get destination and TEID from metadata (tunnel ID)
ii) otherwise, lookup PDP context for packet

For RX, only iv) is new; for TX only step i) is new.  The rest is what 
we already have.
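
To make iv) and i) concrete, here is a minimal sketch of what I have in
mind (field names follow the existing driver; the fragment itself is
illustrative and untested):

	/* RX step iv), sketched: runs only after the PDP context lookup
	 * and IP validation have passed; attach tunnel metadata so that
	 * OVS/eBPF can match on the TEID.
	 */
	if (ip_tunnel_collect_metadata()) {
		struct metadata_dst *tun_dst;

		tun_dst = udp_tun_rx_dst(skb, pctx->sk->sk_family, TUNNEL_KEY,
					 key32_to_tunnel_id(gtp1->tid), 0);
		if (!tun_dst)
			return -1;
		skb_dst_set(skb, &tun_dst->dst);
	}

	/* TX step i), sketched: prefer tunnel metadata when the skb
	 * carries it, otherwise fall back to the PDP context lookup we
	 * already do.
	 */
	info = skb_tunnel_info(skb);
	if (info && info->mode & IP_TUNNEL_INFO_TX) {
		teid  = ntohl(tunnel_id_to_key32(info->key.tun_id));
		daddr = info->key.u.ipv4.dst;
	} else {
		pctx = ipv4_pdp_find(gtp, ip_hdr(skb)->daddr);
		if (!pctx)
			return -ENOENT;
		teid  = pctx->u.v1.o_tei;
		daddr = pctx->peer_addr_ip4.s_addr;
	}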

The one thing that this complicates is the case where an external entity 
(OVS or BPF) is doing the validation of the packet's IP against the 
incoming TEID.  We then end up validating the incoming address twice and, 
I suppose, OVS/BPF would perhaps be more efficient at it (?)...  In that 
case, also holding a PDP context is a bit of a waste of memory.

It's somewhat of a security issue to allow unchecked packets into the 
system, so I'm not super keen on dropping the validation of incoming 
packets; given the previous paragraph, however, we might add a flag when 
creating the link to decide whether or not we allow packets through even 
if we can't validate them.  This would also apply to packets arriving on 
a TEID for which there is no PDP context.  This flag, I suppose, looks a 
bit like the 'collect_metadata' flag that Pravin has added here.

The other difference from what we currently have is that this patch sets 
up a socket exclusively in kernel space for the tunnel; currently, all 
sockets terminate in userspace:  userspace receives every packet that 
can't be decapsulated and re-injected into the network stack.  With 
this patch, ALL packets (without a userspace termination) are 
re-injected into the network stack; it's just that anything that can't 
be decapsulated gets injected as a "GTP packet" with some metadata and 
an UNKNOWN protocol type.  If nothing is looking at the metadata and 
acting on it, then these will just get dropped; and that's what would 
happen if nothing was listening on the userspace end, too.  So we might 
as well just make the FD1 socket parameter to the link setup optional 
and be done with it.

So, thoughts?  What am I missing?

/Jonas

> 

> Signed-off-by: Jonas Bonn <jonas@norrbonn.se>

> ---

>   drivers/net/gtp.c                  | 160 +++++++++++++++++++++++++----

>   include/uapi/linux/gtp.h           |  12 +++

>   include/uapi/linux/if_tunnel.h     |   1 +

>   tools/include/uapi/linux/if_link.h |   1 +

>   4 files changed, 156 insertions(+), 18 deletions(-)

> 

> diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c

> index 8aab46ec8a94..668ed8a4836e 100644

> --- a/drivers/net/gtp.c

> +++ b/drivers/net/gtp.c

> @@ -21,6 +21,7 @@

>   #include <linux/file.h>

>   #include <linux/gtp.h>

>   

> +#include <net/dst_metadata.h>

>   #include <net/net_namespace.h>

>   #include <net/protocol.h>

>   #include <net/ip.h>

> @@ -74,6 +75,9 @@ struct gtp_dev {

>   	unsigned int		hash_size;

>   	struct hlist_head	*tid_hash;

>   	struct hlist_head	*addr_hash;

> +	/* Used by LWT tunnel. */

> +	bool			collect_md;

> +	struct socket		*collect_md_sock;

>   };

>   

>   static unsigned int gtp_net_id __read_mostly;

> @@ -224,6 +228,51 @@ static int gtp_rx(struct pdp_ctx *pctx, struct sk_buff *skb,

>   	return -1;

>   }

>   

> +static int gtp_set_tun_dst(struct pdp_ctx *pctx, struct sk_buff *skb,

> +			   unsigned int hdrlen)

> +{

> +	struct metadata_dst *tun_dst;

> +	struct gtp1_header *gtp1;

> +	int opts_len = 0;

> +	__be64 tid;

> +

> +	gtp1 = (struct gtp1_header *)(skb->data + sizeof(struct udphdr));

> +

> +	tid = key32_to_tunnel_id(gtp1->tid);

> +

> +	if (unlikely(gtp1->flags & GTP1_F_MASK))

> +		opts_len = sizeof(struct gtpu_metadata);

> +

> +	tun_dst = udp_tun_rx_dst(skb,

> +			pctx->sk->sk_family, TUNNEL_KEY, tid, opts_len);

> +	if (!tun_dst) {

> +		netdev_dbg(pctx->dev, "Failed to allocate tun_dst");

> +		goto err;

> +	}

> +

> +	netdev_dbg(pctx->dev, "attaching metadata_dst to skb, gtp ver %d hdrlen %d\n",

> +		   pctx->gtp_version, hdrlen);

> +	if (unlikely(opts_len)) {

> +		struct gtpu_metadata *opts;

> +

> +		opts = ip_tunnel_info_opts(&tun_dst->u.tun_info);

> +		opts->ver = GTP_METADATA_V1;

> +		opts->flags = gtp1->flags;

> +		opts->type = gtp1->type;

> +		netdev_dbg(pctx->dev, "recved control pkt: flag %x type: %d\n",

> +			   opts->flags, opts->type);

> +		tun_dst->u.tun_info.key.tun_flags |= TUNNEL_GTPU_OPT;

> +		tun_dst->u.tun_info.options_len = opts_len;

> +		skb->protocol = htons(0xffff);         /* Unknown */

> +	}

> +

> +	skb_dst_set(skb, &tun_dst->dst);

> +	return 0;

> +err:

> +	return -1;

> +}

> +

> +

>   /* 1 means pass up to the stack, -1 means drop and 0 means decapsulated. */

>   static int gtp0_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb)

>   {

> @@ -262,6 +311,7 @@ static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb)

>   	unsigned int hdrlen = sizeof(struct udphdr) +

>   			      sizeof(struct gtp1_header);

>   	struct gtp1_header *gtp1;

> +	struct pdp_ctx md_pctx;

>   	struct pdp_ctx *pctx;

>   

>   	if (!pskb_may_pull(skb, hdrlen))

> @@ -272,6 +322,24 @@ static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb)

>   	if ((gtp1->flags >> 5) != GTP_V1)

>   		return 1;

>   

> +	if (ip_tunnel_collect_metadata() || gtp->collect_md) {

> +		int err;

> +

> +		pctx = &md_pctx;

> +

> +		pctx->gtp_version = GTP_V1;

> +		pctx->sk = gtp->sk1u;

> +		pctx->dev = gtp->dev;

> +

> +		err = gtp_set_tun_dst(pctx, skb, hdrlen);

> +		if (err) {

> +			gtp->dev->stats.rx_dropped++;

> +			return -1;

> +		}

> +

> +		return gtp_rx(pctx, skb, hdrlen);

> +	}

> +

>   	if (gtp1->type != GTP_TPDU)

>   		return 1;

>   

> @@ -353,7 +421,8 @@ static int gtp_encap_recv(struct sock *sk, struct sk_buff *skb)

>   	if (!gtp)

>   		return 1;

>   

> -	netdev_dbg(gtp->dev, "encap_recv sk=%p\n", sk);

> +	netdev_dbg(gtp->dev, "encap_recv sk=%p type %d\n",

> +		   sk, udp_sk(sk)->encap_type);

>   

>   	switch (udp_sk(sk)->encap_type) {

>   	case UDP_ENCAP_GTP0:

> @@ -539,7 +608,7 @@ static struct rtable *gtp_get_v4_rt(struct sk_buff *skb,

>   	memset(&fl4, 0, sizeof(fl4));

>   	fl4.flowi4_oif		= sk->sk_bound_dev_if;

>   	fl4.daddr		= pctx->peer_addr_ip4.s_addr;

> -	fl4.saddr		= inet_sk(sk)->inet_saddr;

> +	fl4.saddr		= *saddr;

>   	fl4.flowi4_tos		= RT_CONN_FLAGS(sk);

>   	fl4.flowi4_proto	= sk->sk_protocol;

>   

> @@ -617,29 +686,84 @@ static void gtp_push_header(struct sk_buff *skb, struct pdp_ctx *pctx,

>   static int gtp_xmit_ip4(struct sk_buff *skb, struct net_device *dev)

>   {

>   	struct gtp_dev *gtp = netdev_priv(dev);

> +	struct gtpu_metadata *opts = NULL;

> +	struct pdp_ctx md_pctx;

>   	struct pdp_ctx *pctx;

> +	__be16 port;

>   	struct rtable *rt;

> -	__be32 saddr;

>   	struct iphdr *iph;

> +	__be32 saddr;

>   	int headroom;

> -	__be16 port;

> +	__u8 tos;

>   	int r;

>   

> -	/* Read the IP destination address and resolve the PDP context.

> -	 * Prepend PDP header with TEI/TID from PDP ctx.

> -	 */

> -	iph = ip_hdr(skb);

> -	if (gtp->role == GTP_ROLE_SGSN)

> -		pctx = ipv4_pdp_find(gtp, iph->saddr);

> -	else

> -		pctx = ipv4_pdp_find(gtp, iph->daddr);

> +	if (gtp->collect_md) {

> +		/* LWT GTP1U encap */

> +		struct ip_tunnel_info *info = NULL;

>   

> -	if (!pctx) {

> -		netdev_dbg(dev, "no PDP ctx found for %pI4, skip\n",

> -			   &iph->daddr);

> -		return -ENOENT;

> +		info = skb_tunnel_info(skb);

> +		if (!info) {

> +			netdev_dbg(dev, "missing tunnel info");

> +			return -ENOENT;

> +		}

> +		if (info->key.tp_dst && ntohs(info->key.tp_dst) != GTP1U_PORT) {

> +			netdev_dbg(dev, "unexpected GTP dst port: %d", ntohs(info->key.tp_dst));

> +			return -EOPNOTSUPP;

> +		}

> +

> +		if (!gtp->sk1u) {

> +			netdev_dbg(dev, "missing tunnel sock");

> +			return -EOPNOTSUPP;

> +		}

> +

> +		pctx = &md_pctx;

> +		memset(pctx, 0, sizeof(*pctx));

> +		pctx->sk = gtp->sk1u;

> +		pctx->gtp_version = GTP_V1;

> +		pctx->u.v1.o_tei = ntohl(tunnel_id_to_key32(info->key.tun_id));

> +		pctx->peer_addr_ip4.s_addr = info->key.u.ipv4.dst;

> +

> +		saddr = info->key.u.ipv4.src;

> +		tos = info->key.tos;

> +

> +		if (info->options_len != 0) {

> +			if (info->key.tun_flags & TUNNEL_GTPU_OPT) {

> +				opts = ip_tunnel_info_opts(info);

> +			} else {

> +				netdev_dbg(dev, "missing tunnel metadata for control pkt");

> +				return -EOPNOTSUPP;

> +			}

> +		}

> +		netdev_dbg(dev, "flow-based GTP1U encap: tunnel id %d\n",

> +			   pctx->u.v1.o_tei);

> +	} else {

> +		struct iphdr *iph;

> +

> +		if (ntohs(skb->protocol) != ETH_P_IP)

> +			return -EOPNOTSUPP;

> +

> +		iph = ip_hdr(skb);

> +

> +		/* Read the IP destination address and resolve the PDP context.

> +		 * Prepend PDP header with TEI/TID from PDP ctx.

> +		 */

> +		if (gtp->role == GTP_ROLE_SGSN)

> +			pctx = ipv4_pdp_find(gtp, iph->saddr);

> +		else

> +			pctx = ipv4_pdp_find(gtp, iph->daddr);

> +

> +		if (!pctx) {

> +			netdev_dbg(dev, "no PDP ctx found for %pI4, skip\n",

> +				   &iph->daddr);

> +			return -ENOENT;

> +		}

> +		netdev_dbg(dev, "found PDP context %p\n", pctx);

> +

> +		saddr = inet_sk(pctx->sk)->inet_saddr;

> +		tos = iph->tos;

> +		netdev_dbg(dev, "gtp -> IP src: %pI4 dst: %pI4\n",

> +			   &iph->saddr, &iph->daddr);

>   	}

> -	netdev_dbg(dev, "found PDP context %p\n", pctx);

>   

>   	rt = gtp_get_v4_rt(skb, dev, pctx, &saddr);

>   	if (IS_ERR(rt)) {

> @@ -691,7 +815,7 @@ static int gtp_xmit_ip4(struct sk_buff *skb, struct net_device *dev)

>   

>   	udp_tunnel_xmit_skb(rt, pctx->sk, skb,

>   			    saddr, pctx->peer_addr_ip4.s_addr,

> -			    iph->tos,

> +			    tos,

>   			    ip4_dst_hoplimit(&rt->dst),

>   			    0,

>   			    port, port,

> diff --git a/include/uapi/linux/gtp.h b/include/uapi/linux/gtp.h

> index 79f9191bbb24..62aff78b7c56 100644

> --- a/include/uapi/linux/gtp.h

> +++ b/include/uapi/linux/gtp.h

> @@ -2,6 +2,8 @@

>   #ifndef _UAPI_LINUX_GTP_H_

>   #define _UAPI_LINUX_GTP_H_

>   

> +#include <linux/types.h>

> +

>   #define GTP_GENL_MCGRP_NAME	"gtp"

>   

>   enum gtp_genl_cmds {

> @@ -34,4 +36,14 @@ enum gtp_attrs {

>   };

>   #define GTPA_MAX (__GTPA_MAX + 1)

>   

> +enum {

> +	GTP_METADATA_V1

> +};

> +

> +struct gtpu_metadata {

> +	__u8    ver;

> +	__u8    flags;

> +	__u8    type;

> +};

> +

>   #endif /* _UAPI_LINUX_GTP_H_ */

> diff --git a/include/uapi/linux/if_tunnel.h b/include/uapi/linux/if_tunnel.h

> index 7d9105533c7b..802da679fab1 100644

> --- a/include/uapi/linux/if_tunnel.h

> +++ b/include/uapi/linux/if_tunnel.h

> @@ -176,6 +176,7 @@ enum {

>   #define TUNNEL_VXLAN_OPT	__cpu_to_be16(0x1000)

>   #define TUNNEL_NOCACHE		__cpu_to_be16(0x2000)

>   #define TUNNEL_ERSPAN_OPT	__cpu_to_be16(0x4000)

> +#define TUNNEL_GTPU_OPT		__cpu_to_be16(0x8000)

>   

>   #define TUNNEL_OPTIONS_PRESENT \

>   		(TUNNEL_GENEVE_OPT | TUNNEL_VXLAN_OPT | TUNNEL_ERSPAN_OPT)

> diff --git a/tools/include/uapi/linux/if_link.h b/tools/include/uapi/linux/if_link.h

> index d208b2af697f..28d649bda686 100644

> --- a/tools/include/uapi/linux/if_link.h

> +++ b/tools/include/uapi/linux/if_link.h

> @@ -617,6 +617,7 @@ enum {

>   	IFLA_GTP_FD1,

>   	IFLA_GTP_PDP_HASHSIZE,

>   	IFLA_GTP_ROLE,

> +	IFLA_GTP_COLLECT_METADATA,

>   	__IFLA_GTP_MAX,

>   };

>   #define IFLA_GTP_MAX (__IFLA_GTP_MAX - 1)

>
Pravin Shelar Feb. 7, 2021, 5:56 p.m. UTC | #19
On Sat, Feb 6, 2021 at 10:06 AM Jonas Bonn <jonas@norrbonn.se> wrote:
>

> Hi Pravin et al;

>

> TL;DR:  we don't need to introduce an entire collect_md mode to the

> driver; we just need to tweak what we've got so that metadata is

> "always" added on RX and respected on TX; make the userspace socket

> optional and dump GTP packets to netstack if it's not present; and make

> a decision on whether to allow packets into netstack without validating

> their IP address.

>

Thanks for summarizing the LWT mechanism below. Overall I am fine with
the changes; a couple of comments are inline.


> On 23/01/2021 20:59, Jonas Bonn wrote:

> > From: Pravin B Shelar <pbshelar@fb.com>

> >

> > This patch adds support for flow based tunneling, allowing to send and

> > receive GTP tunneled packets via the (lightweight) tunnel metadata

> > mechanism.  This would allow integration with OVS and eBPF using flow

> > based tunneling APIs.

> >

> > The mechanism used here is to get the required GTP tunnel parameters

> > from the tunnel metadata instead of looking up a pre-configured PDP

> > context.  The tunnel metadata contains the necessary information for

> > creating the GTP header.

>

>

> So, I've been investigating this a bit further...

>

> What is being introduced in this patch is a variant of "normal

> functionality" that resembles something called "collect metadata" mode

> in other drivers (GRE, VXLAN, etc).  These other drivers operate in one

> of two modes:  more-or-less-point-to-point mode, or this alternative

> collect metadata varaint.  The latter is something that has been bolted

> on to an existing implementation of the former.

>

> For iproute2, a parameter called 'external' is added to the setup of

> links of the above type to switch between the two modes; the

> point-to-point parameters are invalid in 'external' (or 'collect

> metadata') mode.  The name 'external' is because the driver is

> externally controlled, meaning that the tunnel information is not

> hardcoded into the device instance itself.

>

> The GTP driver, however, has never been a point-to-point device.  It is

> already 'externally controlled' in the sense that tunnels can be added

> and removed at any time.  Adding this 'external' mode doesn't

> immediately make sense because that's roughly what we already have.

>

> Looking into how ip_tunnel_collect_metadata() works, it looks to me like

> it's always true if there's _ANY_ routing rule in the system with

> 'tunnel ID' set.  Is that correct?  I'll assume it is in order to

> continue my thoughts.

>

Right. It is just an optimization to avoid allocating the tun_dst in the
datapath.
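
For reference, as far as I can tell it is literally just a static key
(quoted from memory, so please double-check against the tree):

/* include/net/ip_tunnels.h */
static inline bool ip_tunnel_collect_metadata(void)
{
	return static_branch_unlikely(&ip_tunnel_metadata_cnt);
}

/* net/ipv4/ip_tunnel_core.c -- bumped when LWT encap state carrying a
 * tunnel key is created, dropped again when it is destroyed.
 */
void ip_tunnel_need_metadata(void)
{
	static_branch_inc(&ip_tunnel_metadata_cnt);
}

void ip_tunnel_unneed_metadata(void)
{
	static_branch_dec(&ip_tunnel_metadata_cnt);
}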

> So, with that, either the system is collecting SKB metadata or it isn't.

>   If it is, it seems reasonable that we populate incoming packets with

> the tunnel ID as in this patch.  That said, I'm not convinced that we

> should bypass the PDP context mechanism entirely... there should still

> be a PDP context set up or we drop the packet for security reasons.

>

> For outgoing packets, it seems reasonable that the remote TEID may come

> from metadata OR a PDP context.  That would allow some routing

> alternatives to what we have right now which just does a PDP context

> lookup based on the destination/source address of the packet.  It would

> be nice for OVS/BPF to be able to direct a packet to a remote GTP

> endpoint by providing that endpoint/TEID via a metadata structure.

>

> So we end up with, roughly:

>

> On RX:

> i) check TEID in GTP header

> ii) lookup PDP context

> iii) validate IP of encapsulated packet

> iv) if ip(tunnel_collect_metadata()) { /* add tun info */ }

> v) decapsulate and pass to network stack

>

> On TX:

> i) if SKB has metadata, get destination and TEID from metadata (tunnel ID)

> ii) otherwise, lookup PDP context for packet

>

> For RX, only iv) is new; for TX only step i) is new.  The rest is what

> we already have.

>

> The one thing that this complicates is the case where an external entity

> (OVS or BPF) is doing validation of packet IP against incoming TEID.  In

> that case, we've got double validation of the incoming address and, I

> suppose, OVS/BPF would perhaps be more efficient (?)...  In that case,

> holding a PDP context is a bit of a waste of memory.

>

> It's somewhat of a security issue to allow un-checked packets into the

> system, so I'm not super keen on dropping the validation of incoming

> packets; given the previous paragraph, however, we might add a flag when

> creating the link to decide whether or not we allow packets through even

> if we can't validate them.  This would also apply to packets without a

> PDP context for an incoming TEID, too.  This flag, I suppose, looks a

> bit like the 'collect_metadata' flag that Pravin has added here.

>

Yes, the user should have the option to use GTP devices in LWT mode and
bypass the PDP session context completely. Let's add a flag for the GTP
link so that the behaviour is consistent across all types of LWT devices.
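
Hypothetical usage, assuming iproute2 later grows a matching 'gtp' link
type; the 'external' keyword below mirrors what VXLAN/GENEVE/GRE already
use and does not exist for GTP today:

  ip link add gtp1 type gtp external
  # per-flow TEID and peer address then come from OVS or from an eBPF
  # program attaching tunnel metadata to the skb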

> The other difference to what we currently have is that this patch sets

> up a socket exclusively in kernel space for the tunnel; currently, all

> sockets terminate in userspace:  userspace receives all packets that

> can't be decapsulated and re-injected in to the network stack.  With

> this patch, ALL packets (without a userspace termination) are

> re-injected into the network stack; it's just that anything that can't

> be decapsulated gets injected as a "GTP packet" with some metadata and

> an UNKNOWN protocol type.  If nothing is looking at the metadata and

> acting on it, then these will just get dropped; and that's what would

> happen if nothing was listening on the userspace end, too.  So we might

> as well just make the FD1 socket parameter to the link setup optional

> and be done with it.

>

Good idea to make FD1 an optional argument.
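
A rough sketch of what that could look like against the existing
gtp_encap_enable() (gtp1u_sock_create() is a made-up helper that would
wrap udp_sock_create() plus setup_udp_tunnel_sock(); untested):

	if (data[IFLA_GTP_FD1]) {
		u32 fd1 = nla_get_u32(data[IFLA_GTP_FD1]);

		sk1u = gtp_encap_enable_socket(fd1, UDP_ENCAP_GTP1U, gtp);
	} else {
		/* no userspace GTP-U socket: open a kernel-owned one on
		 * the standard port and deliver everything to the stack
		 */
		sk1u = gtp1u_sock_create(dev_net(gtp->dev),
					 htons(GTP1U_PORT), gtp);
	}
	if (IS_ERR(sk1u))
		return PTR_ERR(sk1u);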

Thanks.