
[net-next,v18,20/25] ovpn: implement peer add/get/dump/delete via netlink

Message ID 20250113-b4-ovpn-v18-20-1f00db9c2bd6@openvpn.net
State New
Series [net-next,v18,01/25] net: introduce OpenVPN Data Channel Offload (ovpn)

Commit Message

Antonio Quartulli Jan. 13, 2025, 9:31 a.m. UTC
This change introduces the netlink commands needed to add, delete and
retrieve/dump known peers. Userspace is expected to use these commands
to handle known peer lifecycles.

Signed-off-by: Antonio Quartulli <antonio@openvpn.net>
---
 drivers/net/ovpn/netlink.c | 630 ++++++++++++++++++++++++++++++++++++++++++++-
 drivers/net/ovpn/peer.c    |  53 ++--
 drivers/net/ovpn/peer.h    |   5 +
 3 files changed, 666 insertions(+), 22 deletions(-)

Comments

Antonio Quartulli Jan. 19, 2025, 1:12 p.m. UTC | #1
On 17/01/2025 18:12, Sabrina Dubroca wrote:
> 2025-01-17, 13:59:35 +0100, Antonio Quartulli wrote:
>> On 17/01/2025 12:48, Sabrina Dubroca wrote:
>>> 2025-01-13, 10:31:39 +0100, Antonio Quartulli wrote:
>>>>    int ovpn_nl_peer_new_doit(struct sk_buff *skb, struct genl_info *info)
>>>>    {
>>>> -	return -EOPNOTSUPP;
>>>> +	struct nlattr *attrs[OVPN_A_PEER_MAX + 1];
>>>> +	struct ovpn_priv *ovpn = info->user_ptr[0];
>>>> +	struct ovpn_socket *ovpn_sock;
>>>> +	struct socket *sock = NULL;
>>>> +	struct ovpn_peer *peer;
>>>> +	u32 sockfd, peer_id;
>>>> +	int ret;
>>>> +
>>>> +	/* peers can only be added when the interface is up and running */
>>>> +	if (!netif_running(ovpn->dev))
>>>> +		return -ENETDOWN;
>>>
>>> Since we're not under rtnl_lock here, the device could go down while
>>> we're creating this peer, and we may end up with a down device that
>>> has a peer anyway.
>>
>> hmm, indeed. This means we must hold the rtnl_lock to prevent ending up in
>> an inconsistent state.
>>
>>>
>>> I'm not sure what this (and the peer flushing on NETDEV_DOWN) is
>>> trying to accomplish. Is it a problem to keep peers when the netdevice
>>> is down?
>>
>> This is the result of my discussion with Sergey that started in v11 5/23:
>>
>> https://lore.kernel.org/r/netdev/20241029-b4-ovpn-v11-5-de4698c73a25@openvpn.net/
>>
>> The idea was to match operational state with actual connectivity to peer(s).
>>
>> Originally I wanted to simply keep the carrier always on, but after further
>> discussion (including the meaning of the openvpn option --persist-tun) we
>> agreed on following the logic where an UP device has a peer connected (logic
>> is slightly different between MP and P2P).
>>
>> I am not extremely happy with the resulting complexity, but it seemed to be a
>> blocker for Sergey.
> 
> [after re-reading that discussion with Sergey]
> 
> I don't understand why "admin does 'ip link set tun0 down'" means "we
> should get rid of all peers". For me the carrier situation goes the
> other way: no peer, no carrier (as if I unplugged the cable from my
> ethernet card), and it's independent of what the user does (ip link
> set XXX up/down). You have that with netif_carrier_{on,off}, but
> flushing peers when the admin does "ip link set tun0 down" is separate
> IMO.

The reasoning was "the user is asking the VPN to go down - it should be 
assumed that from that moment on no VPN traffic whatsoever should flow 
in either direction".
Similarly to when you bring an Eth interface down - the phy link goes 
down as well.

Does it make sense?

> 
> [...]
>>>>    int ovpn_nl_peer_del_doit(struct sk_buff *skb, struct genl_info *info)
>>>>    {
>>>> -	return -EOPNOTSUPP;
>>>> +	struct nlattr *attrs[OVPN_A_PEER_MAX + 1];
>>>> +	struct ovpn_priv *ovpn = info->user_ptr[0];
>>>> +	struct ovpn_peer *peer;
>>>> +	u32 peer_id;
>>>> +	int ret;
>>>> +
>>>> +	if (GENL_REQ_ATTR_CHECK(info, OVPN_A_PEER))
>>>> +		return -EINVAL;
>>>> +
>>>> +	ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER],
>>>> +			       ovpn_peer_nl_policy, info->extack);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	if (NL_REQ_ATTR_CHECK(info->extack, info->attrs[OVPN_A_PEER], attrs,
>>>> +			      OVPN_A_PEER_ID))
>>>> +		return -EINVAL;
>>>> +
>>>> +	peer_id = nla_get_u32(attrs[OVPN_A_PEER_ID]);
>>>> +	peer = ovpn_peer_get_by_id(ovpn, peer_id);
>>>> +	if (!peer) {
>>>> +		NL_SET_ERR_MSG_FMT_MOD(info->extack,
>>>> +				       "cannot find peer with id %u", peer_id);
>>>> +		return -ENOENT;
>>>> +	}
>>>> +
>>>> +	netdev_dbg(ovpn->dev, "del peer %u\n", peer->id);
>>>> +	ret = ovpn_peer_del(peer, OVPN_DEL_PEER_REASON_USERSPACE);
>>>
>>> With the delayed socket release (which is similar to what was in v11,
>>> but now with refcounting on the netdevice which should make
>>> rtnl_link_unregister in ovpn_cleanup wait [*]), we may return to
>>> userspace as if the peer was gone, but the socket hasn't been detached
>>> yet.
>>>
>>> A userspace application that tries to remove the peer and immediately
>>> re-create it with the same socket could get EBUSY if the workqueue
>>> hasn't done its job yet. That would be quite confusing to the
>>> application.
>>
>> This may happen only for TCP, because in the UDP case we would increase the
>> refcounter and keep the socket attached.
> 
> Not if we're re-attaching to a different ovpn instance/netdevice.

Right.
One more reason to go with the completion logic.

> 
>>
>> However, re-attaching the same TCP socket is hardly going to happen (in TCP
>> we have one socket per peer, therefore if the peer is going away, we're most
>> likely killing the socket too).
>>
>> This said, the complexity added by the completion seems quite tiny,
>> therefore I'll add the code you are suggesting below.
> 
> Ok.

Working on it!

Thanks!
Regards,
Antonio Quartulli Jan. 20, 2025, 10:45 a.m. UTC | #2
On 20/01/2025 11:09, Sabrina Dubroca wrote:
> 2025-01-19, 14:12:05 +0100, Antonio Quartulli wrote:
>> On 17/01/2025 18:12, Sabrina Dubroca wrote:
>>> 2025-01-17, 13:59:35 +0100, Antonio Quartulli wrote:
>>>> On 17/01/2025 12:48, Sabrina Dubroca wrote:
>>>>> 2025-01-13, 10:31:39 +0100, Antonio Quartulli wrote:
>>>>>>     int ovpn_nl_peer_new_doit(struct sk_buff *skb, struct genl_info *info)
>>>>>>     {
>>>>>> -	return -EOPNOTSUPP;
>>>>>> +	struct nlattr *attrs[OVPN_A_PEER_MAX + 1];
>>>>>> +	struct ovpn_priv *ovpn = info->user_ptr[0];
>>>>>> +	struct ovpn_socket *ovpn_sock;
>>>>>> +	struct socket *sock = NULL;
>>>>>> +	struct ovpn_peer *peer;
>>>>>> +	u32 sockfd, peer_id;
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	/* peers can only be added when the interface is up and running */
>>>>>> +	if (!netif_running(ovpn->dev))
>>>>>> +		return -ENETDOWN;
>>>>>
>>>>> Since we're not under rtnl_lock here, the device could go down while
>>>>> we're creating this peer, and we may end up with a down device that
>>>>> has a peer anyway.
>>>>
>>>> hmm, indeed. This means we must hold the rtnl_lock to prevent ending up in
>>>> an inconsistent state.
>>>>
>>>>>
>>>>> I'm not sure what this (and the peer flushing on NETDEV_DOWN) is
>>>>> trying to accomplish. Is it a problem to keep peers when the netdevice
>>>>> is down?
>>>>
>>>> This is the result of my discussion with Sergey that started in v11 5/23:
>>>>
>>>> https://lore.kernel.org/r/netdev/20241029-b4-ovpn-v11-5-de4698c73a25@openvpn.net/
>>>>
>>>> The idea was to match operational state with actual connectivity to peer(s).
>>>>
>>>> Originally I wanted to simply keep the carrier always on, but after further
>>>> discussion (including the meaning of the openvpn option --persist-tun) we
>>>> agreed on following the logic where an UP device has a peer connected (logic
>>>> is slightly different between MP and P2P).
>>>>
>>>> I am not extremely happy with the resulting complexity, but it seemed to be a
>>>> blocker for Sergey.
>>>
>>> [after re-reading that discussion with Sergey]
>>>
>>> I don't understand why "admin does 'ip link set tun0 down'" means "we
>>> should get rid of all peers". For me the carrier situation goes the
>>> other way: no peer, no carrier (as if I unplugged the cable from my
>>> ethernet card), and it's independent of what the user does (ip link
>>> set XXX up/down). You have that with netif_carrier_{on,off}, but
>>> flushing peers when the admin does "ip link set tun0 down" is separate
>>> IMO.
>>
>> The reasoning was "the user is asking the VPN to go down - it should be
>> assumed that from that moment on no VPN traffic whatsoever should flow in
>> either direction".
>> Similarly to when you bring an Eth interface down - the phy link goes down as
>> well.
>>
>> Does it make sense?
> 
> I'm not sure. If I turn the ovpn interface down for a second, the
> peers are removed. Will they come back when I bring the interface back
> up?  That'd have to be done by userspace (which could also watch for
> the DOWN events and tell the kernel to flush the peers) - but some of
> the peers could have timed out in the meantime.
> 
> If I set the VPN interface down, I expect no packets flowing through
> that interface (dropping the peers isn't necessary for that), but all
> non-data (key exchange etc sent by openvpn's userspace) should still
> go through, and IMO peer keepalive fits in that "non-data" category.

This was my original thought too and my original proposal followed this 
idea :-)

However Sergey had a strong opinion about "the user expects no traffic 
whatsoever".

I'd be happy to go back to your proposed approach, but I need to be 
sure that on the next revision nobody will come asking to revert this 
logic again :(

> 
> 
> What does openvpn currently do if I do
>      ip link set tun0 down ; sleep 5 ; ip link set tun0 up
> with a tuntap interface?

I think nothing happens, because userspace doesn't monitor the netdev 
status. Therefore, unless tun closed the socket (which I think it does 
only when the interface is destroyed), userspace does not even realize 
that the interface went down.

Regards,

>
Antonio Quartulli Jan. 21, 2025, 11:26 p.m. UTC | #3
On 20/01/2025 15:52, Antonio Quartulli wrote:
> On 17/01/2025 12:48, Sabrina Dubroca wrote:
> [...]
>> -------- 8< --------
>>
>> diff --git a/drivers/net/ovpn/netlink.c b/drivers/net/ovpn/netlink.c
>> index 72357bb5f30b..19aa4ee6d468 100644
>> --- a/drivers/net/ovpn/netlink.c
>> +++ b/drivers/net/ovpn/netlink.c
>> @@ -733,6 +733,9 @@ int ovpn_nl_peer_del_doit(struct sk_buff *skb, 
>> struct genl_info *info)
>>       netdev_dbg(ovpn->dev, "del peer %u\n", peer->id);
>>       ret = ovpn_peer_del(peer, OVPN_DEL_PEER_REASON_USERSPACE);
>> +    if (ret >= 0 && peer->sock)
>> +        wait_for_completion(&peer->sock_detach);
>> +
>>       ovpn_peer_put(peer);
>>       return ret;
>> diff --git a/drivers/net/ovpn/peer.c b/drivers/net/ovpn/peer.c
>> index b032390047fe..6120521d0c32 100644
>> --- a/drivers/net/ovpn/peer.c
>> +++ b/drivers/net/ovpn/peer.c
>> @@ -92,6 +92,7 @@ struct ovpn_peer *ovpn_peer_new(struct ovpn_priv 
>> *ovpn, u32 id)
>>       ovpn_peer_stats_init(&peer->vpn_stats);
>>       ovpn_peer_stats_init(&peer->link_stats);
>>       INIT_WORK(&peer->keepalive_work, ovpn_peer_keepalive_send);
>> +    init_completion(&peer->sock_detach);
>>       ret = dst_cache_init(&peer->dst_cache, GFP_KERNEL);
>>       if (ret < 0) {
>> diff --git a/drivers/net/ovpn/peer.h b/drivers/net/ovpn/peer.h
>> index 7a062cc5a5a4..8c54bf5709ef 100644
>> --- a/drivers/net/ovpn/peer.h
>> +++ b/drivers/net/ovpn/peer.h
>> @@ -112,6 +112,7 @@ struct ovpn_peer {
>>       struct rcu_head rcu;
>>       struct work_struct remove_work;
>>       struct work_struct keepalive_work;
>> +    struct completion sock_detach;
>>   };
>>   /**
>> diff --git a/drivers/net/ovpn/socket.c b/drivers/net/ovpn/socket.c
>> index a5c3bc834a35..7cefac42c3be 100644
>> --- a/drivers/net/ovpn/socket.c
>> +++ b/drivers/net/ovpn/socket.c
>> @@ -31,6 +31,8 @@ static void ovpn_socket_release_kref(struct kref *kref)
>>       sockfd_put(sock->sock);
>>       kfree_rcu(sock, rcu);
>> +
>> +    complete(&sock->peer->sock_detach);

I am moving this line to ovpn_socket_put(), right after kref_put() so 
that every peer going through the socket release will get their 
complete() invoked.

If the peer happens to be the last one owning the socket, kref_put() 
will first do the detach and only then complete() gets called.

This requires ovpn_socket_release/put() to take the peer as argument, 
but that's ok.

This way we should achieve what we needed.


Regards,
Sabrina Dubroca Jan. 22, 2025, 8:45 a.m. UTC | #4
2025-01-22, 00:26:50 +0100, Antonio Quartulli wrote:
> On 20/01/2025 15:52, Antonio Quartulli wrote:
> > On 17/01/2025 12:48, Sabrina Dubroca wrote:
> > [...]
> > > -------- 8< --------
> > > 
> > > diff --git a/drivers/net/ovpn/netlink.c b/drivers/net/ovpn/netlink.c
> > > index 72357bb5f30b..19aa4ee6d468 100644
> > > --- a/drivers/net/ovpn/netlink.c
> > > +++ b/drivers/net/ovpn/netlink.c
> > > @@ -733,6 +733,9 @@ int ovpn_nl_peer_del_doit(struct sk_buff *skb,
> > > struct genl_info *info)
> > >       netdev_dbg(ovpn->dev, "del peer %u\n", peer->id);
> > >       ret = ovpn_peer_del(peer, OVPN_DEL_PEER_REASON_USERSPACE);
> > > +    if (ret >= 0 && peer->sock)
> > > +        wait_for_completion(&peer->sock_detach);
> > > +
> > >       ovpn_peer_put(peer);
> > >       return ret;
> > > diff --git a/drivers/net/ovpn/peer.c b/drivers/net/ovpn/peer.c
> > > index b032390047fe..6120521d0c32 100644
> > > --- a/drivers/net/ovpn/peer.c
> > > +++ b/drivers/net/ovpn/peer.c
> > > @@ -92,6 +92,7 @@ struct ovpn_peer *ovpn_peer_new(struct ovpn_priv
> > > *ovpn, u32 id)
> > >       ovpn_peer_stats_init(&peer->vpn_stats);
> > >       ovpn_peer_stats_init(&peer->link_stats);
> > >       INIT_WORK(&peer->keepalive_work, ovpn_peer_keepalive_send);
> > > +    init_completion(&peer->sock_detach);
> > >       ret = dst_cache_init(&peer->dst_cache, GFP_KERNEL);
> > >       if (ret < 0) {
> > > diff --git a/drivers/net/ovpn/peer.h b/drivers/net/ovpn/peer.h
> > > index 7a062cc5a5a4..8c54bf5709ef 100644
> > > --- a/drivers/net/ovpn/peer.h
> > > +++ b/drivers/net/ovpn/peer.h
> > > @@ -112,6 +112,7 @@ struct ovpn_peer {
> > >       struct rcu_head rcu;
> > >       struct work_struct remove_work;
> > >       struct work_struct keepalive_work;
> > > +    struct completion sock_detach;
> > >   };
> > >   /**
> > > diff --git a/drivers/net/ovpn/socket.c b/drivers/net/ovpn/socket.c
> > > index a5c3bc834a35..7cefac42c3be 100644
> > > --- a/drivers/net/ovpn/socket.c
> > > +++ b/drivers/net/ovpn/socket.c
> > > @@ -31,6 +31,8 @@ static void ovpn_socket_release_kref(struct kref *kref)
> > >       sockfd_put(sock->sock);
> > >       kfree_rcu(sock, rcu);
> > > +
> > > +    complete(&sock->peer->sock_detach);
> 
> I am moving this line to ovpn_socket_put(), right after kref_put() so that
> every peer going through the socket release will get their complete()
> invoked.
> 
> If the peer happens to be the last one owning the socket, kref_put() will
> first do the detach and only then complete() gets called.
> 
> This requires ovpn_socket_release/put() to take the peer as argument, but
> that's ok.
> 
> This way we should achieve what we needed.

This seems like it would, thanks.
Sabrina Dubroca Feb. 2, 2025, 11:07 p.m. UTC | #5
2025-01-13, 10:31:39 +0100, Antonio Quartulli wrote:
> +static int ovpn_nl_attr_sockaddr_remote(struct nlattr **attrs,
> +					struct sockaddr_storage *ss)
> +{
> +	struct sockaddr_in6 *sin6;
> +	struct sockaddr_in *sin;
> +	struct in6_addr *in6;
> +	__be16 port = 0;
> +	__be32 *in;
> +	int af;
> +
> +	ss->ss_family = AF_UNSPEC;
> +
> +	if (attrs[OVPN_A_PEER_REMOTE_PORT])
> +		port = nla_get_be16(attrs[OVPN_A_PEER_REMOTE_PORT]);
> +
> +	if (attrs[OVPN_A_PEER_REMOTE_IPV4]) {
> +		af = AF_INET;
> +		ss->ss_family = AF_INET;
> +		in = nla_data(attrs[OVPN_A_PEER_REMOTE_IPV4]);
> +	} else if (attrs[OVPN_A_PEER_REMOTE_IPV6]) {
> +		af = AF_INET6;
> +		ss->ss_family = AF_INET6;
> +		in6 = nla_data(attrs[OVPN_A_PEER_REMOTE_IPV6]);
> +	} else {
> +		return AF_UNSPEC;
> +	}
> +
> +	switch (ss->ss_family) {
> +	case AF_INET6:
> +		/* If this is a regular IPv6 just break and move on,
> +		 * otherwise switch to AF_INET and extract the IPv4 accordingly
> +		 */
> +		if (!ipv6_addr_v4mapped(in6)) {
> +			sin6 = (struct sockaddr_in6 *)ss;
> +			sin6->sin6_port = port;
> +			memcpy(&sin6->sin6_addr, in6, sizeof(*in6));
> +			break;
> +		}
> +
> +		/* v4-mapped-v6 address */
> +		ss->ss_family = AF_INET;
> +		in = &in6->s6_addr32[3];
> +		fallthrough;
> +	case AF_INET:
> +		sin = (struct sockaddr_in *)ss;
> +		sin->sin_port = port;
> +		sin->sin_addr.s_addr = *in;
> +		break;
> +	}
> +
> +	/* don't return ss->ss_family as it may have changed in case of
> +	 * v4-mapped-v6 address
> +	 */

nit: I'm not sure that matters since the only thing the caller checks
is ret != AF_UNSPEC, and at this point, while ss_family could have
been changed, it would have changed from AF_INET6 to AF_INET, so it's
!= AF_UNSPEC.

> +	return af;
> +}

[...]
> +static int ovpn_nl_peer_precheck(struct ovpn_priv *ovpn,
> +				 struct genl_info *info,
> +				 struct nlattr **attrs)
> +{
[...]
> +
> +	/* VPN IPs are needed only in MP mode for selecting the right peer */
> +	if (ovpn->mode == OVPN_MODE_P2P && (attrs[OVPN_A_PEER_VPN_IPV4] ||
> +					    attrs[OVPN_A_PEER_VPN_IPV6])) {

And in MP mode, at least one VPN_IP* is required?


[...]
>  int ovpn_nl_peer_new_doit(struct sk_buff *skb, struct genl_info *info)
>  {
[...]
> +	/* Only when using UDP as transport protocol the remote endpoint
> +	 * can be configured so that ovpn knows where to send packets to.
> +	 *
> +	 * In case of TCP, the socket is connected to the peer and ovpn
> +	 * will just send bytes over it, without the need to specify a
> +	 * destination.
> +	 */
> +	if (sock->sk->sk_protocol != IPPROTO_UDP &&
> +	    (attrs[OVPN_A_PEER_REMOTE_IPV4] ||
> +	     attrs[OVPN_A_PEER_REMOTE_IPV6])) {

Is a peer on a UDP socket without any remote (neither
OVPN_A_PEER_REMOTE_IPV4 nor OVPN_A_PEER_REMOTE_IPV6) valid? We just
wait until we get data from it to update the endpoint?

Or should there be a check to make sure that one was provided?

> +		NL_SET_ERR_MSG_FMT_MOD(info->extack,
> +				       "unexpected remote IP address for non UDP socket");
> +		sockfd_put(sock);
> +		return -EINVAL;
> +	}
> +
> +	ovpn_sock = ovpn_socket_new(sock, peer);
> +	if (IS_ERR(ovpn_sock)) {
> +		NL_SET_ERR_MSG_FMT_MOD(info->extack,
> +				       "cannot encapsulate socket: %ld",
> +				       PTR_ERR(ovpn_sock));
> +		sockfd_put(sock);
> +		return -ENOTSOCK;

Maybe s/-ENOTSOCK/PTR_ERR(ovpn_sock)/ ?
Overwriting ovpn_socket_new's -EBUSY etc with -ENOTSOCK is a bit
misleading to the caller.

> +	}
> +
> +	peer->sock = ovpn_sock;
> +
> +	ret = ovpn_nl_peer_modify(peer, info, attrs);
> +	if (ret < 0)
> +		goto peer_release;
> +
> +	ret = ovpn_peer_add(ovpn, peer);
> +	if (ret < 0) {
> +		NL_SET_ERR_MSG_FMT_MOD(info->extack,
> +				       "cannot add new peer (id=%u) to hashtable: %d\n",
> +				       peer->id, ret);
> +		goto peer_release;
> +	}
> +
> +	return 0;
> +
> +peer_release:

I think you need to add:

	ovpn_socket_release(peer);

If ovpn_socket_new succeeded, ovpn_peer_release only takes care of the
peer but not its socket.

> +	/* release right away because peer is not used in any context */
> +	ovpn_peer_release(peer);
> +
> +	return ret;
>  }
>  
>  int ovpn_nl_peer_set_doit(struct sk_buff *skb, struct genl_info *info)
>  {
[...]
> +	if (attrs[OVPN_A_PEER_SOCKET]) {
> +		NL_SET_ERR_MSG_FMT_MOD(info->extack,
> +				       "socket cannot be modified");
> +		return -EINVAL;
> +	}
> +
> +	peer_id = nla_get_u32(attrs[OVPN_A_PEER_ID]);
> +	peer = ovpn_peer_get_by_id(ovpn, peer_id);
> +	if (!peer) {
> +		NL_SET_ERR_MSG_FMT_MOD(info->extack,
> +				       "cannot find peer with id %u", peer_id);
> +		return -ENOENT;
> +	}

The check for non-UDP socket with a remote address configured should
be replicated here, no?

Patch

diff --git a/drivers/net/ovpn/netlink.c b/drivers/net/ovpn/netlink.c
index d5d0f9112ec554e9959b17d2c7bf6b054ea9ff1b..9daa48c294b435f26aa099b28ec458a21de3a87a 100644
--- a/drivers/net/ovpn/netlink.c
+++ b/drivers/net/ovpn/netlink.c
@@ -7,6 +7,7 @@ 
  */
 
 #include <linux/netdevice.h>
+#include <linux/types.h>
 #include <net/genetlink.h>
 
 #include <uapi/linux/ovpn.h>
@@ -15,6 +16,9 @@ 
 #include "main.h"
 #include "netlink.h"
 #include "netlink-gen.h"
+#include "bind.h"
+#include "peer.h"
+#include "socket.h"
 
 MODULE_ALIAS_GENL_FAMILY(OVPN_FAMILY_NAME);
 
@@ -89,29 +93,645 @@  void ovpn_nl_post_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
 		netdev_put(ovpn->dev, tracker);
 }
 
+static int ovpn_nl_attr_sockaddr_remote(struct nlattr **attrs,
+					struct sockaddr_storage *ss)
+{
+	struct sockaddr_in6 *sin6;
+	struct sockaddr_in *sin;
+	struct in6_addr *in6;
+	__be16 port = 0;
+	__be32 *in;
+	int af;
+
+	ss->ss_family = AF_UNSPEC;
+
+	if (attrs[OVPN_A_PEER_REMOTE_PORT])
+		port = nla_get_be16(attrs[OVPN_A_PEER_REMOTE_PORT]);
+
+	if (attrs[OVPN_A_PEER_REMOTE_IPV4]) {
+		af = AF_INET;
+		ss->ss_family = AF_INET;
+		in = nla_data(attrs[OVPN_A_PEER_REMOTE_IPV4]);
+	} else if (attrs[OVPN_A_PEER_REMOTE_IPV6]) {
+		af = AF_INET6;
+		ss->ss_family = AF_INET6;
+		in6 = nla_data(attrs[OVPN_A_PEER_REMOTE_IPV6]);
+	} else {
+		return AF_UNSPEC;
+	}
+
+	switch (ss->ss_family) {
+	case AF_INET6:
+		/* If this is a regular IPv6 just break and move on,
+		 * otherwise switch to AF_INET and extract the IPv4 accordingly
+		 */
+		if (!ipv6_addr_v4mapped(in6)) {
+			sin6 = (struct sockaddr_in6 *)ss;
+			sin6->sin6_port = port;
+			memcpy(&sin6->sin6_addr, in6, sizeof(*in6));
+			break;
+		}
+
+		/* v4-mapped-v6 address */
+		ss->ss_family = AF_INET;
+		in = &in6->s6_addr32[3];
+		fallthrough;
+	case AF_INET:
+		sin = (struct sockaddr_in *)ss;
+		sin->sin_port = port;
+		sin->sin_addr.s_addr = *in;
+		break;
+	}
+
+	/* don't return ss->ss_family as it may have changed in case of
+	 * v4-mapped-v6 address
+	 */
+	return af;
+}
+
+static u8 *ovpn_nl_attr_local_ip(struct nlattr **attrs)
+{
+	u8 *addr6;
+
+	if (!attrs[OVPN_A_PEER_LOCAL_IPV4] && !attrs[OVPN_A_PEER_LOCAL_IPV6])
+		return NULL;
+
+	if (attrs[OVPN_A_PEER_LOCAL_IPV4])
+		return nla_data(attrs[OVPN_A_PEER_LOCAL_IPV4]);
+
+	addr6 = nla_data(attrs[OVPN_A_PEER_LOCAL_IPV6]);
+	/* this is an IPv4-mapped IPv6 address, therefore extract the actual
+	 * v4 address from the last 4 bytes
+	 */
+	if (ipv6_addr_v4mapped((struct in6_addr *)addr6))
+		return addr6 + 12;
+
+	return addr6;
+}
+
+static sa_family_t ovpn_nl_family_get(struct nlattr *addr4,
+				      struct nlattr *addr6)
+{
+	if (addr4)
+		return AF_INET;
+
+	if (addr6) {
+		if (ipv6_addr_v4mapped((struct in6_addr *)nla_data(addr6)))
+			return AF_INET;
+		return AF_INET6;
+	}
+
+	return AF_UNSPEC;
+}
+
+static int ovpn_nl_peer_precheck(struct ovpn_priv *ovpn,
+				 struct genl_info *info,
+				 struct nlattr **attrs)
+{
+	sa_family_t local_fam, remote_fam;
+
+	if (NL_REQ_ATTR_CHECK(info->extack, info->attrs[OVPN_A_PEER], attrs,
+			      OVPN_A_PEER_ID))
+		return -EINVAL;
+
+	if (attrs[OVPN_A_PEER_REMOTE_IPV4] && attrs[OVPN_A_PEER_REMOTE_IPV6]) {
+		NL_SET_ERR_MSG_MOD(info->extack,
+				   "cannot specify both remote IPv4 or IPv6 address");
+		return -EINVAL;
+	}
+
+	if (!attrs[OVPN_A_PEER_REMOTE_IPV4] &&
+	    !attrs[OVPN_A_PEER_REMOTE_IPV6] && attrs[OVPN_A_PEER_REMOTE_PORT]) {
+		NL_SET_ERR_MSG_MOD(info->extack,
+				   "cannot specify remote port without IP address");
+		return -EINVAL;
+	}
+
+	if (!attrs[OVPN_A_PEER_REMOTE_IPV4] &&
+	    attrs[OVPN_A_PEER_LOCAL_IPV4]) {
+		NL_SET_ERR_MSG_MOD(info->extack,
+				   "cannot specify local IPv4 address without remote");
+		return -EINVAL;
+	}
+
+	if (!attrs[OVPN_A_PEER_REMOTE_IPV6] &&
+	    attrs[OVPN_A_PEER_LOCAL_IPV6]) {
+		NL_SET_ERR_MSG_MOD(info->extack,
+				   "cannot specify local IPV6 address without remote");
+		return -EINVAL;
+	}
+
+	/* check that local and remote address families are the same even
+	 * after parsing v4mapped IPv6 addresses.
+	 * (if addresses are not provided, family will be AF_UNSPEC and
+	 * the check is skipped)
+	 */
+	local_fam = ovpn_nl_family_get(attrs[OVPN_A_PEER_LOCAL_IPV4],
+				       attrs[OVPN_A_PEER_LOCAL_IPV6]);
+	remote_fam = ovpn_nl_family_get(attrs[OVPN_A_PEER_REMOTE_IPV4],
+					attrs[OVPN_A_PEER_REMOTE_IPV6]);
+	if (local_fam != AF_UNSPEC && remote_fam != AF_UNSPEC &&
+	    local_fam != remote_fam) {
+		NL_SET_ERR_MSG_MOD(info->extack,
+				   "mismatching local and remote address families");
+		return -EINVAL;
+	}
+
+	if (remote_fam != AF_INET6 && attrs[OVPN_A_PEER_REMOTE_IPV6_SCOPE_ID]) {
+		NL_SET_ERR_MSG_MOD(info->extack,
+				   "cannot specify scope id without remote IPv6 address");
+		return -EINVAL;
+	}
+
+	/* VPN IPs are needed only in MP mode for selecting the right peer */
+	if (ovpn->mode == OVPN_MODE_P2P && (attrs[OVPN_A_PEER_VPN_IPV4] ||
+					    attrs[OVPN_A_PEER_VPN_IPV6])) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "VPN IP unexpected in P2P mode");
+		return -EINVAL;
+	}
+
+	if ((attrs[OVPN_A_PEER_KEEPALIVE_INTERVAL] &&
+	     !attrs[OVPN_A_PEER_KEEPALIVE_TIMEOUT]) ||
+	    (!attrs[OVPN_A_PEER_KEEPALIVE_INTERVAL] &&
+	     attrs[OVPN_A_PEER_KEEPALIVE_TIMEOUT])) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "keepalive interval and timeout are required together");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/**
+ * ovpn_nl_peer_modify - modify the peer attributes according to the incoming msg
+ * @peer: the peer to modify
+ * @info: generic netlink info from the user request
+ * @attrs: the attributes from the user request
+ *
+ * Return: a negative error code in case of failure, 0 on success or 1 on
+ *	   success and the VPN IPs have been modified (requires rehashing in MP
+ *	   mode)
+ */
+static int ovpn_nl_peer_modify(struct ovpn_peer *peer, struct genl_info *info,
+			       struct nlattr **attrs)
+{
+	struct sockaddr_storage ss = {};
+	u32 interv, timeout;
+	u8 *local_ip = NULL;
+	bool rehash = false;
+	int ret;
+
+	spin_lock_bh(&peer->lock);
+
+	if (ovpn_nl_attr_sockaddr_remote(attrs, &ss) != AF_UNSPEC) {
+		/* we carry the local IP in a generic container.
+		 * ovpn_peer_reset_sockaddr() will properly interpret it
+		 * based on ss.ss_family
+		 */
+		local_ip = ovpn_nl_attr_local_ip(attrs);
+
+		/* set peer sockaddr */
+		ret = ovpn_peer_reset_sockaddr(peer, &ss, local_ip);
+		if (ret < 0) {
+			NL_SET_ERR_MSG_FMT_MOD(info->extack,
+					       "cannot set peer sockaddr: %d",
+					       ret);
+			goto err_unlock;
+		}
+	}
+
+	if (attrs[OVPN_A_PEER_VPN_IPV4]) {
+		rehash = true;
+		peer->vpn_addrs.ipv4.s_addr =
+			nla_get_in_addr(attrs[OVPN_A_PEER_VPN_IPV4]);
+	}
+
+	if (attrs[OVPN_A_PEER_VPN_IPV6]) {
+		rehash = true;
+		peer->vpn_addrs.ipv6 =
+			nla_get_in6_addr(attrs[OVPN_A_PEER_VPN_IPV6]);
+	}
+
+	/* when setting the keepalive, both parameters have to be configured */
+	if (attrs[OVPN_A_PEER_KEEPALIVE_INTERVAL] &&
+	    attrs[OVPN_A_PEER_KEEPALIVE_TIMEOUT]) {
+		interv = nla_get_u32(attrs[OVPN_A_PEER_KEEPALIVE_INTERVAL]);
+		timeout = nla_get_u32(attrs[OVPN_A_PEER_KEEPALIVE_TIMEOUT]);
+		ovpn_peer_keepalive_set(peer, interv, timeout);
+	}
+
+	netdev_dbg(peer->ovpn->dev,
+		   "modify peer id=%u endpoint=%pIScp/%s VPN-IPv4=%pI4 VPN-IPv6=%pI6c\n",
+		   peer->id, &ss, peer->sock->sock->sk->sk_prot_creator->name,
+		   &peer->vpn_addrs.ipv4.s_addr, &peer->vpn_addrs.ipv6);
+
+	spin_unlock_bh(&peer->lock);
+
+	return rehash ? 1 : 0;
+err_unlock:
+	spin_unlock_bh(&peer->lock);
+	return ret;
+}
+
 int ovpn_nl_peer_new_doit(struct sk_buff *skb, struct genl_info *info)
 {
-	return -EOPNOTSUPP;
+	struct nlattr *attrs[OVPN_A_PEER_MAX + 1];
+	struct ovpn_priv *ovpn = info->user_ptr[0];
+	struct ovpn_socket *ovpn_sock;
+	struct socket *sock = NULL;
+	struct ovpn_peer *peer;
+	u32 sockfd, peer_id;
+	int ret;
+
+	/* peers can only be added when the interface is up and running */
+	if (!netif_running(ovpn->dev))
+		return -ENETDOWN;
+
+	if (GENL_REQ_ATTR_CHECK(info, OVPN_A_PEER))
+		return -EINVAL;
+
+	ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER],
+			       ovpn_peer_nl_policy, info->extack);
+	if (ret)
+		return ret;
+
+	ret = ovpn_nl_peer_precheck(ovpn, info, attrs);
+	if (ret < 0)
+		return ret;
+
+	if (NL_REQ_ATTR_CHECK(info->extack, info->attrs[OVPN_A_PEER], attrs,
+			      OVPN_A_PEER_SOCKET))
+		return -EINVAL;
+
+	peer_id = nla_get_u32(attrs[OVPN_A_PEER_ID]);
+	peer = ovpn_peer_new(ovpn, peer_id);
+	if (IS_ERR(peer)) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "cannot create new peer object for peer %u: %ld",
+				       peer_id, PTR_ERR(peer));
+		return PTR_ERR(peer);
+	}
+
+	/* lookup the fd in the kernel table and extract the socket object */
+	sockfd = nla_get_u32(attrs[OVPN_A_PEER_SOCKET]);
+	/* sockfd_lookup() increases sock's refcounter */
+	sock = sockfd_lookup(sockfd, &ret);
+	if (!sock) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "cannot lookup peer socket (fd=%u): %d",
+				       sockfd, ret);
+		return -ENOTSOCK;
+	}
+
+	/* Only when using UDP as transport protocol the remote endpoint
+	 * can be configured so that ovpn knows where to send packets to.
+	 *
+	 * In case of TCP, the socket is connected to the peer and ovpn
+	 * will just send bytes over it, without the need to specify a
+	 * destination.
+	 */
+	if (sock->sk->sk_protocol != IPPROTO_UDP &&
+	    (attrs[OVPN_A_PEER_REMOTE_IPV4] ||
+	     attrs[OVPN_A_PEER_REMOTE_IPV6])) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "unexpected remote IP address for non UDP socket");
+		sockfd_put(sock);
+		return -EINVAL;
+	}
+
+	ovpn_sock = ovpn_socket_new(sock, peer);
+	if (IS_ERR(ovpn_sock)) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "cannot encapsulate socket: %ld",
+				       PTR_ERR(ovpn_sock));
+		sockfd_put(sock);
+		return -ENOTSOCK;
+	}
+
+	peer->sock = ovpn_sock;
+
+	ret = ovpn_nl_peer_modify(peer, info, attrs);
+	if (ret < 0)
+		goto peer_release;
+
+	ret = ovpn_peer_add(ovpn, peer);
+	if (ret < 0) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "cannot add new peer (id=%u) to hashtable: %d\n",
+				       peer->id, ret);
+		goto peer_release;
+	}
+
+	return 0;
+
+peer_release:
+	/* release right away because peer is not used in any context */
+	ovpn_peer_release(peer);
+
+	return ret;
 }
 
 int ovpn_nl_peer_set_doit(struct sk_buff *skb, struct genl_info *info)
 {
-	return -EOPNOTSUPP;
+	struct nlattr *attrs[OVPN_A_PEER_MAX + 1];
+	struct ovpn_priv *ovpn = info->user_ptr[0];
+	struct ovpn_peer *peer;
+	u32 peer_id;
+	int ret;
+
+	if (GENL_REQ_ATTR_CHECK(info, OVPN_A_PEER))
+		return -EINVAL;
+
+	ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER],
+			       ovpn_peer_nl_policy, info->extack);
+	if (ret)
+		return ret;
+
+	ret = ovpn_nl_peer_precheck(ovpn, info, attrs);
+	if (ret < 0)
+		return ret;
+
+	if (attrs[OVPN_A_PEER_SOCKET]) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "socket cannot be modified");
+		return -EINVAL;
+	}
+
+	peer_id = nla_get_u32(attrs[OVPN_A_PEER_ID]);
+	peer = ovpn_peer_get_by_id(ovpn, peer_id);
+	if (!peer) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "cannot find peer with id %u", peer_id);
+		return -ENOENT;
+	}
+
+	spin_lock_bh(&ovpn->lock);
+	ret = ovpn_nl_peer_modify(peer, info, attrs);
+	if (ret < 0) {
+		spin_unlock_bh(&ovpn->lock);
+		ovpn_peer_put(peer);
+		return ret;
+	}
+
+	/* ret == 1 means that VPN IPv4/6 has been modified and rehashing
+	 * is required
+	 */
+	if (ret > 0)
+		ovpn_peer_hash_vpn_ip(peer);
+	spin_unlock_bh(&ovpn->lock);
+	ovpn_peer_put(peer);
+
+	return 0;
+}
+
+static int ovpn_nl_send_peer(struct sk_buff *skb, const struct genl_info *info,
+			     const struct ovpn_peer *peer, u32 portid, u32 seq,
+			     int flags)
+{
+	const struct ovpn_bind *bind;
+	struct nlattr *attr;
+	void *hdr;
+	int id;
+
+	hdr = genlmsg_put(skb, portid, seq, &ovpn_nl_family, flags,
+			  OVPN_CMD_PEER_GET);
+	if (!hdr)
+		return -ENOBUFS;
+
+	attr = nla_nest_start(skb, OVPN_A_PEER);
+	if (!attr)
+		goto err;
+
+	if (!net_eq(genl_info_net(info), sock_net(peer->sock->sock->sk))) {
+		id = peernet2id_alloc(genl_info_net(info),
+				      sock_net(peer->sock->sock->sk),
+				      GFP_ATOMIC);
+		if (nla_put_s32(skb, OVPN_A_PEER_SOCKET_NETNSID, id))
+			goto err;
+	}
+
+	if (nla_put_u32(skb, OVPN_A_PEER_ID, peer->id))
+		goto err;
+
+	if (peer->vpn_addrs.ipv4.s_addr != htonl(INADDR_ANY))
+		if (nla_put_in_addr(skb, OVPN_A_PEER_VPN_IPV4,
+				    peer->vpn_addrs.ipv4.s_addr))
+			goto err;
+
+	if (!ipv6_addr_equal(&peer->vpn_addrs.ipv6, &in6addr_any))
+		if (nla_put_in6_addr(skb, OVPN_A_PEER_VPN_IPV6,
+				     &peer->vpn_addrs.ipv6))
+			goto err;
+
+	if (nla_put_u32(skb, OVPN_A_PEER_KEEPALIVE_INTERVAL,
+			peer->keepalive_interval) ||
+	    nla_put_u32(skb, OVPN_A_PEER_KEEPALIVE_TIMEOUT,
+			peer->keepalive_timeout))
+		goto err;
+
+	rcu_read_lock();
+	bind = rcu_dereference(peer->bind);
+	if (bind) {
+		if (bind->remote.in4.sin_family == AF_INET) {
+			if (nla_put_in_addr(skb, OVPN_A_PEER_REMOTE_IPV4,
+					    bind->remote.in4.sin_addr.s_addr) ||
+			    nla_put_net16(skb, OVPN_A_PEER_REMOTE_PORT,
+					  bind->remote.in4.sin_port) ||
+			    nla_put_in_addr(skb, OVPN_A_PEER_LOCAL_IPV4,
+					    bind->local.ipv4.s_addr))
+				goto err_unlock;
+		} else if (bind->remote.in4.sin_family == AF_INET6) {
+			if (nla_put_in6_addr(skb, OVPN_A_PEER_REMOTE_IPV6,
+					     &bind->remote.in6.sin6_addr) ||
+			    nla_put_u32(skb, OVPN_A_PEER_REMOTE_IPV6_SCOPE_ID,
+					bind->remote.in6.sin6_scope_id) ||
+			    nla_put_net16(skb, OVPN_A_PEER_REMOTE_PORT,
+					  bind->remote.in6.sin6_port) ||
+			    nla_put_in6_addr(skb, OVPN_A_PEER_LOCAL_IPV6,
+					     &bind->local.ipv6))
+				goto err_unlock;
+		}
+	}
+	rcu_read_unlock();
+
+	if (nla_put_net16(skb, OVPN_A_PEER_LOCAL_PORT,
+			  inet_sk(peer->sock->sock->sk)->inet_sport) ||
+	    /* VPN RX stats */
+	    nla_put_uint(skb, OVPN_A_PEER_VPN_RX_BYTES,
+			 atomic64_read(&peer->vpn_stats.rx.bytes)) ||
+	    nla_put_uint(skb, OVPN_A_PEER_VPN_RX_PACKETS,
+			 atomic64_read(&peer->vpn_stats.rx.packets)) ||
+	    /* VPN TX stats */
+	    nla_put_uint(skb, OVPN_A_PEER_VPN_TX_BYTES,
+			 atomic64_read(&peer->vpn_stats.tx.bytes)) ||
+	    nla_put_uint(skb, OVPN_A_PEER_VPN_TX_PACKETS,
+			 atomic64_read(&peer->vpn_stats.tx.packets)) ||
+	    /* link RX stats */
+	    nla_put_uint(skb, OVPN_A_PEER_LINK_RX_BYTES,
+			 atomic64_read(&peer->link_stats.rx.bytes)) ||
+	    nla_put_uint(skb, OVPN_A_PEER_LINK_RX_PACKETS,
+			 atomic64_read(&peer->link_stats.rx.packets)) ||
+	    /* link TX stats */
+	    nla_put_uint(skb, OVPN_A_PEER_LINK_TX_BYTES,
+			 atomic64_read(&peer->link_stats.tx.bytes)) ||
+	    nla_put_uint(skb, OVPN_A_PEER_LINK_TX_PACKETS,
+			 atomic64_read(&peer->link_stats.tx.packets)))
+		goto err;
+
+	nla_nest_end(skb, attr);
+	genlmsg_end(skb, hdr);
+
+	return 0;
+err_unlock:
+	rcu_read_unlock();
+err:
+	genlmsg_cancel(skb, hdr);
+	return -EMSGSIZE;
 }
 
 int ovpn_nl_peer_get_doit(struct sk_buff *skb, struct genl_info *info)
 {
-	return -EOPNOTSUPP;
+	struct nlattr *attrs[OVPN_A_PEER_MAX + 1];
+	struct ovpn_priv *ovpn = info->user_ptr[0];
+	struct ovpn_peer *peer;
+	struct sk_buff *msg;
+	u32 peer_id;
+	int ret;
+
+	if (GENL_REQ_ATTR_CHECK(info, OVPN_A_PEER))
+		return -EINVAL;
+
+	ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER],
+			       ovpn_peer_nl_policy, info->extack);
+	if (ret)
+		return ret;
+
+	if (NL_REQ_ATTR_CHECK(info->extack, info->attrs[OVPN_A_PEER], attrs,
+			      OVPN_A_PEER_ID))
+		return -EINVAL;
+
+	peer_id = nla_get_u32(attrs[OVPN_A_PEER_ID]);
+	peer = ovpn_peer_get_by_id(ovpn, peer_id);
+	if (!peer) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "cannot find peer with id %u", peer_id);
+		return -ENOENT;
+	}
+
+	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!msg) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	ret = ovpn_nl_send_peer(msg, info, peer, info->snd_portid,
+				info->snd_seq, 0);
+	if (ret < 0) {
+		nlmsg_free(msg);
+		goto err;
+	}
+
+	ret = genlmsg_reply(msg, info);
+err:
+	ovpn_peer_put(peer);
+	return ret;
 }
 
 int ovpn_nl_peer_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 {
-	return -EOPNOTSUPP;
+	const struct genl_info *info = genl_info_dump(cb);
+	int bkt, last_idx = cb->args[1], dumped = 0;
+	netdevice_tracker tracker;
+	struct ovpn_priv *ovpn;
+	struct ovpn_peer *peer;
+
+	ovpn = ovpn_get_dev_from_attrs(sock_net(cb->skb->sk), info, &tracker);
+	if (IS_ERR(ovpn))
+		return PTR_ERR(ovpn);
+
+	if (ovpn->mode == OVPN_MODE_P2P) {
+		/* if we already dumped a peer it means we are done */
+		if (last_idx)
+			goto out;
+
+		rcu_read_lock();
+		peer = rcu_dereference(ovpn->peer);
+		if (peer) {
+			if (ovpn_nl_send_peer(skb, info, peer,
+					      NETLINK_CB(cb->skb).portid,
+					      cb->nlh->nlmsg_seq,
+					      NLM_F_MULTI) == 0)
+				dumped++;
+		}
+		rcu_read_unlock();
+	} else {
+		rcu_read_lock();
+		hash_for_each_rcu(ovpn->peers->by_id, bkt, peer,
+				  hash_entry_id) {
+			/* skip peers that were already dumped by previous
+			 * invocations
+			 */
+			if (last_idx > 0) {
+				last_idx--;
+				continue;
+			}
+
+			if (ovpn_nl_send_peer(skb, info, peer,
+					      NETLINK_CB(cb->skb).portid,
+					      cb->nlh->nlmsg_seq,
+					      NLM_F_MULTI) < 0)
+				break;
+
+			/* count peers being dumped during this invocation */
+			dumped++;
+		}
+		rcu_read_unlock();
+	}
+
+out:
+	netdev_put(ovpn->dev, &tracker);
+
+	/* sum up peers dumped in this message, so that at the next invocation
+	 * we can continue from where we left off
+	 */
+	cb->args[1] += dumped;
+	return skb->len;
 }
 
 int ovpn_nl_peer_del_doit(struct sk_buff *skb, struct genl_info *info)
 {
-	return -EOPNOTSUPP;
+	struct nlattr *attrs[OVPN_A_PEER_MAX + 1];
+	struct ovpn_priv *ovpn = info->user_ptr[0];
+	struct ovpn_peer *peer;
+	u32 peer_id;
+	int ret;
+
+	if (GENL_REQ_ATTR_CHECK(info, OVPN_A_PEER))
+		return -EINVAL;
+
+	ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER],
+			       ovpn_peer_nl_policy, info->extack);
+	if (ret)
+		return ret;
+
+	if (NL_REQ_ATTR_CHECK(info->extack, info->attrs[OVPN_A_PEER], attrs,
+			      OVPN_A_PEER_ID))
+		return -EINVAL;
+
+	peer_id = nla_get_u32(attrs[OVPN_A_PEER_ID]);
+	peer = ovpn_peer_get_by_id(ovpn, peer_id);
+	if (!peer) {
+		NL_SET_ERR_MSG_FMT_MOD(info->extack,
+				       "cannot find peer with id %u", peer_id);
+		return -ENOENT;
+	}
+
+	netdev_dbg(ovpn->dev, "del peer %u\n", peer->id);
+	ret = ovpn_peer_del(peer, OVPN_DEL_PEER_REASON_USERSPACE);
+	ovpn_peer_put(peer);
+
+	return ret;
 }
 
 int ovpn_nl_key_new_doit(struct sk_buff *skb, struct genl_info *info)
diff --git a/drivers/net/ovpn/peer.c b/drivers/net/ovpn/peer.c
index e86b16ecf8dc2d152004ba752df5474b673bce17..f680b778c61cd40ce53cf1e834886d0346520a36 100644
--- a/drivers/net/ovpn/peer.c
+++ b/drivers/net/ovpn/peer.c
@@ -115,9 +115,9 @@  struct ovpn_peer *ovpn_peer_new(struct ovpn_priv *ovpn, u32 id)
  *
  * Return: 0 on success or a negative error code otherwise
  */
-static int ovpn_peer_reset_sockaddr(struct ovpn_peer *peer,
-				    const struct sockaddr_storage *ss,
-				    const u8 *local_ip)
+int ovpn_peer_reset_sockaddr(struct ovpn_peer *peer,
+			     const struct sockaddr_storage *ss,
+			     const u8 *local_ip)
 {
 	struct ovpn_bind *bind;
 	size_t ip_len;
@@ -311,7 +311,7 @@  static void ovpn_peer_release_rcu(struct rcu_head *head)
  * ovpn_peer_release - release peer private members
  * @peer: the peer to release
  */
-static void ovpn_peer_release(struct ovpn_peer *peer)
+void ovpn_peer_release(struct ovpn_peer *peer)
 {
 	ovpn_crypto_state_release(&peer->crypto);
 	spin_lock_bh(&peer->lock);
@@ -858,6 +858,37 @@  bool ovpn_peer_check_by_src(struct ovpn_priv *ovpn, struct sk_buff *skb,
 	return match;
 }
 
+void ovpn_peer_hash_vpn_ip(struct ovpn_peer *peer)
+{
+	struct hlist_nulls_head *nhead;
+
+	lockdep_assert_held(&peer->ovpn->lock);
+
+	/* rehashing makes sense only in multipeer mode */
+	if (peer->ovpn->mode != OVPN_MODE_MP)
+		return;
+
+	if (peer->vpn_addrs.ipv4.s_addr != htonl(INADDR_ANY)) {
+		/* remove potential old hashing */
+		hlist_nulls_del_init_rcu(&peer->hash_entry_addr4);
+
+		nhead = ovpn_get_hash_head(peer->ovpn->peers->by_vpn_addr,
+					   &peer->vpn_addrs.ipv4,
+					   sizeof(peer->vpn_addrs.ipv4));
+		hlist_nulls_add_head_rcu(&peer->hash_entry_addr4, nhead);
+	}
+
+	if (!ipv6_addr_any(&peer->vpn_addrs.ipv6)) {
+		/* remove potential old hashing */
+		hlist_nulls_del_init_rcu(&peer->hash_entry_addr6);
+
+		nhead = ovpn_get_hash_head(peer->ovpn->peers->by_vpn_addr,
+					   &peer->vpn_addrs.ipv6,
+					   sizeof(peer->vpn_addrs.ipv6));
+		hlist_nulls_add_head_rcu(&peer->hash_entry_addr6, nhead);
+	}
+}
+
 /**
  * ovpn_peer_add_mp - add peer to related tables in a MP instance
  * @ovpn: the instance to add the peer to
@@ -919,19 +950,7 @@  static int ovpn_peer_add_mp(struct ovpn_priv *ovpn, struct ovpn_peer *peer)
 			   ovpn_get_hash_head(ovpn->peers->by_id, &peer->id,
 					      sizeof(peer->id)));
 
-	if (peer->vpn_addrs.ipv4.s_addr != htonl(INADDR_ANY)) {
-		nhead = ovpn_get_hash_head(ovpn->peers->by_vpn_addr,
-					   &peer->vpn_addrs.ipv4,
-					   sizeof(peer->vpn_addrs.ipv4));
-		hlist_nulls_add_head_rcu(&peer->hash_entry_addr4, nhead);
-	}
-
-	if (!ipv6_addr_any(&peer->vpn_addrs.ipv6)) {
-		nhead = ovpn_get_hash_head(ovpn->peers->by_vpn_addr,
-					   &peer->vpn_addrs.ipv6,
-					   sizeof(peer->vpn_addrs.ipv6));
-		hlist_nulls_add_head_rcu(&peer->hash_entry_addr6, nhead);
-	}
+	ovpn_peer_hash_vpn_ip(peer);
 out:
 	spin_unlock_bh(&ovpn->lock);
 	return ret;
diff --git a/drivers/net/ovpn/peer.h b/drivers/net/ovpn/peer.h
index 33e5fc49fa1219a403f6857ed1a5c6106d5e94de..7a062cc5a5a4c082f908ec444a41fe70702e3882 100644
--- a/drivers/net/ovpn/peer.h
+++ b/drivers/net/ovpn/peer.h
@@ -125,6 +125,7 @@  static inline bool ovpn_peer_hold(struct ovpn_peer *peer)
 	return kref_get_unless_zero(&peer->refcount);
 }
 
+void ovpn_peer_release(struct ovpn_peer *peer);
 void ovpn_peer_release_kref(struct kref *kref);
 
 /**
@@ -148,6 +149,7 @@  struct ovpn_peer *ovpn_peer_get_by_transp_addr(struct ovpn_priv *ovpn,
 struct ovpn_peer *ovpn_peer_get_by_id(struct ovpn_priv *ovpn, u32 peer_id);
 struct ovpn_peer *ovpn_peer_get_by_dst(struct ovpn_priv *ovpn,
 				       struct sk_buff *skb);
+void ovpn_peer_hash_vpn_ip(struct ovpn_peer *peer);
 bool ovpn_peer_check_by_src(struct ovpn_priv *ovpn, struct sk_buff *skb,
 			    struct ovpn_peer *peer);
 
@@ -155,5 +157,8 @@  void ovpn_peer_keepalive_set(struct ovpn_peer *peer, u32 interval, u32 timeout);
 void ovpn_peer_keepalive_work(struct work_struct *work);
 
 void ovpn_peer_endpoints_update(struct ovpn_peer *peer, struct sk_buff *skb);
+int ovpn_peer_reset_sockaddr(struct ovpn_peer *peer,
+			     const struct sockaddr_storage *ss,
+			     const u8 *local_ip);
 
 #endif /* _NET_OVPN_OVPNPEER_H_ */