
[RFC,bpf-next,0/8] Socket migration for SO_REUSEPORT.

Message ID 20201117094023.3685-1-kuniyu@amazon.co.jp

Message

Kuniyuki Iwashima Nov. 17, 2020, 9:40 a.m. UTC
The SO_REUSEPORT option allows sockets to listen on the same port and to
accept connections evenly. However, there is a defect in the current
implementation. When a SYN packet is received, the connection is tied to a
listening socket. Accordingly, when the listener is closed, in-flight
requests during the three-way handshake and child sockets in the accept
queue are dropped even if other listeners could accept such connections.

This situation can happen when various server management tools restart
server processes (such as nginx). For instance, when we change the nginx
configuration and restart it, it spins up new workers that respect the new
configuration and closes all listeners on the old workers, resulting in
in-flight ACKs of the 3WHS being answered with RSTs.

As a workaround for this issue, we can do connection draining by eBPF:

  1. Before closing a listener, stop routing SYN packets to it.
  2. Wait enough time for requests to complete 3WHS.
  3. Accept connections until EAGAIN, then close the listener.

Although this approach seems to work well, EAGAIN has nothing to do with
how many requests are still in the 3WHS. Thus, we have to count SYN
packets with eBPF to know the number of such requests and complete
connection draining.

  1. Start counting SYN packets and accept syscalls using eBPF map.
  2. Stop routing SYN packets.
  3. Accept connections up to the count, then close the listener.

When eBPF is used only for connection draining, it seems a bit expensive.
Moreover, there are situations where we cannot modify or rebuild a server
program to implement the workaround. This patchset introduces a new sysctl
option to free userland programs from this kernel issue. If we enable
net.ipv4.tcp_migrate_req before creating a reuseport group, we can
redistribute requests and connections from a listener to the others in the
same reuseport group at close() or shutdown() syscalls.

Note that the source and destination listeners MUST have the same settings
at the socket API level; otherwise, applications may face inconsistency and
errors. In such a case, we have to use an eBPF program to select a specific
listener or to cancel the migration.

Kuniyuki Iwashima (8):
  net: Introduce net.ipv4.tcp_migrate_req.
  tcp: Keep TCP_CLOSE sockets in the reuseport group.
  tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.
  tcp: Migrate TFO requests causing RST during TCP_SYN_RECV.
  tcp: Migrate TCP_NEW_SYN_RECV requests.
  bpf: Add cookie in sk_reuseport_md.
  bpf: Call bpf_run_sk_reuseport() for socket migration.
  bpf: Test BPF_PROG_TYPE_SK_REUSEPORT for socket migration.

 Documentation/networking/ip-sysctl.rst        |  15 ++
 include/linux/bpf.h                           |   1 +
 include/net/inet_connection_sock.h            |  13 ++
 include/net/netns/ipv4.h                      |   1 +
 include/net/request_sock.h                    |  13 ++
 include/net/sock_reuseport.h                  |   8 +-
 include/uapi/linux/bpf.h                      |   1 +
 net/core/filter.c                             |  34 +++-
 net/core/sock_reuseport.c                     | 110 +++++++++--
 net/ipv4/inet_connection_sock.c               |  84 ++++++++-
 net/ipv4/inet_hashtables.c                    |   9 +-
 net/ipv4/sysctl_net_ipv4.c                    |   9 +
 net/ipv4/tcp_ipv4.c                           |   9 +-
 net/ipv6/tcp_ipv6.c                           |   9 +-
 tools/include/uapi/linux/bpf.h                |   1 +
 .../bpf/prog_tests/migrate_reuseport.c        | 175 ++++++++++++++++++
 .../bpf/progs/test_migrate_reuseport_kern.c   |  53 ++++++
 17 files changed, 511 insertions(+), 34 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_migrate_reuseport_kern.c

Comments

David Laight Nov. 18, 2020, 9:18 a.m. UTC | #1
From: Kuniyuki Iwashima
> Sent: 17 November 2020 09:40
> 
> The SO_REUSEPORT option allows sockets to listen on the same port and to
> accept connections evenly. However, there is a defect in the current
> implementation. When a SYN packet is received, the connection is tied to a
> listening socket. Accordingly, when the listener is closed, in-flight
> requests during the three-way handshake and child sockets in the accept
> queue are dropped even if other listeners could accept such connections.
> 
> This situation can happen when various server management tools restart
> server (such as nginx) processes. For instance, when we change nginx
> configurations and restart it, it spins up new workers that respect the new
> configuration and closes all listeners on the old workers, resulting in
> in-flight ACK of 3WHS is responded by RST.

Can't you do something to stop new connections being queued (like
setting the 'backlog' to zero), then carry on doing accept()s
for a guard time (or until the queue length is zero) before finally
closing the listening socket.

	David

Eric Dumazet Nov. 18, 2020, 4:25 p.m. UTC | #2
On 11/17/20 10:40 AM, Kuniyuki Iwashima wrote:
> The SO_REUSEPORT option allows sockets to listen on the same port and to
> accept connections evenly. However, there is a defect in the current
> implementation. When a SYN packet is received, the connection is tied to a
> listening socket. Accordingly, when the listener is closed, in-flight
> requests during the three-way handshake and child sockets in the accept
> queue are dropped even if other listeners could accept such connections.
> 
> This situation can happen when various server management tools restart
> server (such as nginx) processes. For instance, when we change nginx
> configurations and restart it, it spins up new workers that respect the new
> configuration and closes all listeners on the old workers, resulting in
> in-flight ACK of 3WHS is responded by RST.

I know some programs are simply removing a listener from the group,
so that they no longer handle new SYN packets,
and wait until all timers or 3WHS have completed before closing them.

They pass fd of newly accepted children to more recent programs using af_unix fd passing,
while in this draining mode.

Quite frankly, mixing eBPF in the picture is distracting.

It seems you want some way to transfer request sockets (and/or not yet accepted established ones)
from fd1 to fd2, isn't it something that should be discussed independently ?
Martin KaFai Lau Nov. 18, 2020, 11:50 p.m. UTC | #3
On Tue, Nov 17, 2020 at 06:40:18PM +0900, Kuniyuki Iwashima wrote:
> This patch lets reuseport_detach_sock() return a pointer of struct sock,
> which is used only by inet_unhash(). If it is not NULL,
> inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> sockets from the closing listener to the selected one.
> 
> Listening sockets hold incoming connections as a linked list of struct
> request_sock in the accept queue, and each request has reference to a full
> socket and its listener. In inet_csk_reqsk_queue_migrate(), we unlink the
> requests from the closing listener's queue and relink them to the head of
> the new listener's queue. We do not process each request, so the migration
> completes in O(1) time complexity. However, in the case of TCP_SYN_RECV
> sockets, we will take special care in the next commit.
> 
> By default, we select the last element of socks[] as the new listener.
> This behaviour is based on how the kernel moves sockets in socks[].
> 
> For example, we call listen() for four sockets (A, B, C, D), and close the
> first two by turns. The sockets move in socks[] like below. (See also [1])
> 
>   socks[0] : A <-.      socks[0] : D          socks[0] : D
>   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
>   socks[2] : C   |      socks[2] : C --'
>   socks[3] : D --'
> 
> Then, if C and D have newer settings than A and B, and each socket has a
> request (a, b, c, d) in their accept queue, we can redistribute old
> requests evenly to new listeners.

I don't think it should emphasize/claim there is a specific way that
the kernel-pick here can redistribute the requests evenly.  It depends on
how the application close/listen.  The userspace can not expect the
ordering of socks[] will behave in a certain way.

The primary redistribution policy has to depend on BPF which is the
policy defined by the user based on its application logic (e.g. how
its binary restart work).  The application (and bpf) knows which one
is a dying process and can avoid distributing to it.

The kernel-pick could be an optional fallback but not a must.  If the bpf
prog is attached, I would even go further to call bpf to redistribute
regardless of the sysctl, so I think the sysctl is not necessary.

> 
>   socks[0] : A (a) <-.      socks[0] : D (a + d)      socks[0] : D (a + d)
>   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + c)
>   socks[2] : C (c)   |      socks[2] : C (c) --'
>   socks[3] : D (d) --'
Martin KaFai Lau Nov. 19, 2020, 12:11 a.m. UTC | #4
On Tue, Nov 17, 2020 at 06:40:21PM +0900, Kuniyuki Iwashima wrote:
> We will call sock_reuseport.prog for socket migration in the next commit,
> so the eBPF program has to know which listener is closing in order to
> select the new listener.
> 
> Currently, we can get a unique ID for each listener in the userspace by
> calling bpf_map_lookup_elem() for BPF_MAP_TYPE_REUSEPORT_SOCKARRAY map.
> This patch exposes the ID to the eBPF program.
> 
> Reviewed-by: Benjamin Herrenschmidt <benh@amazon.com>
> Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
> ---
>  include/linux/bpf.h            | 1 +
>  include/uapi/linux/bpf.h       | 1 +
>  net/core/filter.c              | 8 ++++++++
>  tools/include/uapi/linux/bpf.h | 1 +
>  4 files changed, 11 insertions(+)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 581b2a2e78eb..c0646eceffa2 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1897,6 +1897,7 @@ struct sk_reuseport_kern {
>  	u32 hash;
>  	u32 reuseport_id;
>  	bool bind_inany;
> +	u64 cookie;
>  };
>  bool bpf_tcp_sock_is_valid_access(int off, int size, enum bpf_access_type type,
>  				  struct bpf_insn_access_aux *info);
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 162999b12790..3fcddb032838 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -4403,6 +4403,7 @@ struct sk_reuseport_md {
>  	__u32 ip_protocol;	/* IP protocol. e.g. IPPROTO_TCP, IPPROTO_UDP */
>  	__u32 bind_inany;	/* Is sock bound to an INANY address? */
>  	__u32 hash;		/* A hash of the packet 4 tuples */
> +	__u64 cookie;		/* ID of the listener in map */
Instead of only adding the cookie of a sk, let's make the sk pointer available:

	__bpf_md_ptr(struct bpf_sock *, sk);

and then use the BPF_FUNC_get_socket_cookie to get the cookie.

Other fields of the sk can also be directly accessed too once
the sk pointer is available.
Martin KaFai Lau Nov. 19, 2020, 1:49 a.m. UTC | #5
On Tue, Nov 17, 2020 at 06:40:15PM +0900, Kuniyuki Iwashima wrote:
> The SO_REUSEPORT option allows sockets to listen on the same port and to
> accept connections evenly. However, there is a defect in the current
> implementation. When a SYN packet is received, the connection is tied to a
> listening socket. Accordingly, when the listener is closed, in-flight
> requests during the three-way handshake and child sockets in the accept
> queue are dropped even if other listeners could accept such connections.
> 
> This situation can happen when various server management tools restart
> server (such as nginx) processes. For instance, when we change nginx
> configurations and restart it, it spins up new workers that respect the new
> configuration and closes all listeners on the old workers, resulting in
> in-flight ACK of 3WHS is responded by RST.
> 
> As a workaround for this issue, we can do connection draining by eBPF:
> 
>   1. Before closing a listener, stop routing SYN packets to it.
>   2. Wait enough time for requests to complete 3WHS.
>   3. Accept connections until EAGAIN, then close the listener.
> 
> Although this approach seems to work well, EAGAIN has nothing to do with
> how many requests are still during 3WHS. Thus, we have to know the number

It sounds like the application can already drain the established socket
by accept()?  To solve the problem that you have,
does it mean migrating req_sk (the in-progress 3WHS) is enough?

Applications can already use the bpf prog to do (1) and divert
the SYN to the newly started process.

If the application cares about service disruption,
it usually needs to drain the fd(s) that it already has and
finishes serving the pending request (e.g. https) on them anyway.
The time taking to finish those could already be longer than it takes
to drain the accept queue or finish off the 3WHS in reasonable time.
or the application that you have does not need to drain the fd(s) 
it already has and it can close them immediately?

> of such requests by counting SYN packets by eBPF to complete connection
> draining.
> 
>   1. Start counting SYN packets and accept syscalls using eBPF map.
>   2. Stop routing SYN packets.
>   3. Accept connections up to the count, then close the listener.
Kuniyuki Iwashima Nov. 19, 2020, 10:01 p.m. UTC | #6
From:   David Laight <David.Laight@ACULAB.COM>
Date:   Wed, 18 Nov 2020 09:18:24 +0000
> From: Kuniyuki Iwashima
> > Sent: 17 November 2020 09:40
> > 
> > The SO_REUSEPORT option allows sockets to listen on the same port and to
> > accept connections evenly. However, there is a defect in the current
> > implementation. When a SYN packet is received, the connection is tied to a
> > listening socket. Accordingly, when the listener is closed, in-flight
> > requests during the three-way handshake and child sockets in the accept
> > queue are dropped even if other listeners could accept such connections.
> > 
> > This situation can happen when various server management tools restart
> > server (such as nginx) processes. For instance, when we change nginx
> > configurations and restart it, it spins up new workers that respect the new
> > configuration and closes all listeners on the old workers, resulting in
> > in-flight ACK of 3WHS is responded by RST.
> 
> Can't you do something to stop new connections being queued (like
> setting the 'backlog' to zero), then carry on doing accept()s
> for a guard time (or until the queue length is zero) before finally
> closing the listening socket.

Yes, but with eBPF.
Several ideas were suggested and discussed at length in the thread below,
with the result that connection draining by eBPF was merged.
https://lore.kernel.org/netdev/1443313848-751-1-git-send-email-tolga.ceylan@gmail.com/


Also, setting the backlog to zero does not work well:
https://lore.kernel.org/netdev/1447262610.17135.114.camel@edumazet-glaptop2.roam.corp.google.com/

---8<---
From: Eric Dumazet <eric.dumazet@gmail.com>

Subject: Re: [PATCH 1/1] net: Add SO_REUSEPORT_LISTEN_OFF socket option as
 drain mode
Date: Wed, 11 Nov 2015 09:23:30 -0800
> Actually listen(fd, 0) is not going to work well :
> 
> For request_sock that were created (by incoming SYN packet) before this
> listen(fd, 0) call, the 3rd packet (ACK coming from client) would not be
> able to create a child attached to this listener.
> 
> sk_acceptq_is_full() test in tcp_v4_syn_recv_sock() would simply drop
> the thing.

---8<---
Kuniyuki Iwashima Nov. 19, 2020, 10:05 p.m. UTC | #7
From:   Eric Dumazet <eric.dumazet@gmail.com>
Date:   Wed, 18 Nov 2020 17:25:44 +0100
> On 11/17/20 10:40 AM, Kuniyuki Iwashima wrote:
> > The SO_REUSEPORT option allows sockets to listen on the same port and to
> > accept connections evenly. However, there is a defect in the current
> > implementation. When a SYN packet is received, the connection is tied to a
> > listening socket. Accordingly, when the listener is closed, in-flight
> > requests during the three-way handshake and child sockets in the accept
> > queue are dropped even if other listeners could accept such connections.
> > 
> > This situation can happen when various server management tools restart
> > server (such as nginx) processes. For instance, when we change nginx
> > configurations and restart it, it spins up new workers that respect the new
> > configuration and closes all listeners on the old workers, resulting in
> > in-flight ACK of 3WHS is responded by RST.
> 
> I know some programs are simply removing a listener from the group,
> so that they no longer handle new SYN packets,
> and wait until all timers or 3WHS have completed before closing them.
> 
> They pass fd of newly accepted children to more recent programs using af_unix fd passing,
> while in this draining mode.

Just out of curiosity, can I know the software for more study?


> Quite frankly, mixing eBPF in the picture is distracting.

I agree.
Also, I think eBPF itself is not necessary in many cases, and I want to
make user programs simpler with this patchset.

The SO_REUSEPORT implementation is excellent for improving scalability. On
the other hand, as a trade-off, users have to know deeply how the kernel
handles SYN packets and have to implement connection draining by eBPF.


> It seems you want some way to transfer request sockets (and/or not yet accepted established ones)
> from fd1 to fd2, isn't it something that should be discussed independently ?

I understand that you are asking that I should discuss the issue and how to
transfer sockets independently. Please correct me if I have misunderstood
your question.

The kernel handles the 3WHS, and users cannot see it (without eBPF). Many
users believe SO_REUSEPORT should ideally distribute all connections across
the available listeners, but in fact some connections can be aborted
silently. Some users may think that if the kernel had selected another
listener, the connections would not have been dropped.

The root cause is within the kernel, so the issue should be addressed in
kernel space and should not be visible to userspace. In order not to burden
users with implementing new machinery, I want to fix the root cause by
transferring sockets automatically so that users need not care about the
kernel implementation and connection draining.

Moreover, if possible, I did not want to mix eBPF into the issue. But there
may be cases where different applications listen on the same port and eBPF
routes packets to each by some rules. In such cases, redistributing sockets
against the user's intention would break the application. This patchset
works in many cases, but to handle such cases, I added the eBPF part.
Kuniyuki Iwashima Nov. 19, 2020, 10:09 p.m. UTC | #8
From: Martin KaFai Lau <kafai@fb.com>
Date: Wed, 18 Nov 2020 15:50:17 -0800
> On Tue, Nov 17, 2020 at 06:40:18PM +0900, Kuniyuki Iwashima wrote:
> > This patch lets reuseport_detach_sock() return a pointer of struct sock,
> > which is used only by inet_unhash(). If it is not NULL,
> > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > sockets from the closing listener to the selected one.
> > 
> > Listening sockets hold incoming connections as a linked list of struct
> > request_sock in the accept queue, and each request has reference to a full
> > socket and its listener. In inet_csk_reqsk_queue_migrate(), we unlink the
> > requests from the closing listener's queue and relink them to the head of
> > the new listener's queue. We do not process each request, so the migration
> > completes in O(1) time complexity. However, in the case of TCP_SYN_RECV
> > sockets, we will take special care in the next commit.
> > 
> > By default, we select the last element of socks[] as the new listener.
> > This behaviour is based on how the kernel moves sockets in socks[].
> > 
> > For example, we call listen() for four sockets (A, B, C, D), and close the
> > first two by turns. The sockets move in socks[] like below. (See also [1])
> > 
> >   socks[0] : A <-.      socks[0] : D          socks[0] : D
> >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> >   socks[2] : C   |      socks[2] : C --'
> >   socks[3] : D --'
> > 
> > Then, if C and D have newer settings than A and B, and each socket has a
> > request (a, b, c, d) in their accept queue, we can redistribute old
> > requests evenly to new listeners.
> I don't think it should emphasize/claim there is a specific way that
> the kernel-pick here can redistribute the requests evenly.  It depends on
> how the application close/listen.  The userspace can not expect the
> ordering of socks[] will behave in a certain way.

I expected replacing listeners generation by generation to be a common use
case. But exactly; users should not rely on undocumented kernel internals.


> The primary redistribution policy has to depend on BPF which is the
> policy defined by the user based on its application logic (e.g. how
> its binary restart work).  The application (and bpf) knows which one
> is a dying process and can avoid distributing to it.
> 
> The kernel-pick could be an optional fallback but not a must.  If the bpf
> prog is attached, I would even go further to call bpf to redistribute
> regardless of the sysctl, so I think the sysctl is not necessary.

I also think it is just an optional fallback, but to pick a different
listener every time, choosing the moved socket was reasonable. So the even
redistribution for a specific use case is a side effect of that socket
selection.

But users should decide which way to use:
  (1) let the kernel select a new listener randomly
  (2) select a particular listener by eBPF

I will update the commit message like:
The kernel selects a new listener randomly, but as a side effect, it can
redistribute packets evenly for the specific case where an application
replaces listeners generation by generation.
Kuniyuki Iwashima Nov. 19, 2020, 10:10 p.m. UTC | #9
From:   Martin KaFai Lau <kafai@fb.com>
Date:   Wed, 18 Nov 2020 16:11:54 -0800
> On Tue, Nov 17, 2020 at 06:40:21PM +0900, Kuniyuki Iwashima wrote:
> > We will call sock_reuseport.prog for socket migration in the next commit,
> > so the eBPF program has to know which listener is closing in order to
> > select the new listener.
> > 
> > Currently, we can get a unique ID for each listener in the userspace by
> > calling bpf_map_lookup_elem() for BPF_MAP_TYPE_REUSEPORT_SOCKARRAY map.
> > This patch exposes the ID to the eBPF program.
> > 
> > Reviewed-by: Benjamin Herrenschmidt <benh@amazon.com>
> > Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
> > ---
> >  include/linux/bpf.h            | 1 +
> >  include/uapi/linux/bpf.h       | 1 +
> >  net/core/filter.c              | 8 ++++++++
> >  tools/include/uapi/linux/bpf.h | 1 +
> >  4 files changed, 11 insertions(+)
> > 
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 581b2a2e78eb..c0646eceffa2 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -1897,6 +1897,7 @@ struct sk_reuseport_kern {
> >  	u32 hash;
> >  	u32 reuseport_id;
> >  	bool bind_inany;
> > +	u64 cookie;
> >  };
> >  bool bpf_tcp_sock_is_valid_access(int off, int size, enum bpf_access_type type,
> >  				  struct bpf_insn_access_aux *info);
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 162999b12790..3fcddb032838 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -4403,6 +4403,7 @@ struct sk_reuseport_md {
> >  	__u32 ip_protocol;	/* IP protocol. e.g. IPPROTO_TCP, IPPROTO_UDP */
> >  	__u32 bind_inany;	/* Is sock bound to an INANY address? */
> >  	__u32 hash;		/* A hash of the packet 4 tuples */
> > +	__u64 cookie;		/* ID of the listener in map */
> Instead of only adding the cookie of a sk, lets make the sk pointer available:
> 
> 	__bpf_md_ptr(struct bpf_sock *, sk);
> 
> and then use the BPF_FUNC_get_socket_cookie to get the cookie.
> 
> Other fields of the sk can also be directly accessed too once
> the sk pointer is available.

Oh, I did not know BPF_FUNC_get_socket_cookie.
I will add the sk pointer and use the helper function in the next spin!
Thank you.
Kuniyuki Iwashima Nov. 19, 2020, 10:17 p.m. UTC | #10
From:   Martin KaFai Lau <kafai@fb.com>
Date:   Wed, 18 Nov 2020 17:49:13 -0800
> On Tue, Nov 17, 2020 at 06:40:15PM +0900, Kuniyuki Iwashima wrote:
> > The SO_REUSEPORT option allows sockets to listen on the same port and to
> > accept connections evenly. However, there is a defect in the current
> > implementation. When a SYN packet is received, the connection is tied to a
> > listening socket. Accordingly, when the listener is closed, in-flight
> > requests during the three-way handshake and child sockets in the accept
> > queue are dropped even if other listeners could accept such connections.
> > 
> > This situation can happen when various server management tools restart
> > server (such as nginx) processes. For instance, when we change nginx
> > configurations and restart it, it spins up new workers that respect the new
> > configuration and closes all listeners on the old workers, resulting in
> > in-flight ACK of 3WHS is responded by RST.
> > 
> > As a workaround for this issue, we can do connection draining by eBPF:
> > 
> >   1. Before closing a listener, stop routing SYN packets to it.
> >   2. Wait enough time for requests to complete 3WHS.
> >   3. Accept connections until EAGAIN, then close the listener.
> > 
> > Although this approach seems to work well, EAGAIN has nothing to do with
> > how many requests are still during 3WHS. Thus, we have to know the number
> It sounds like the application can already drain the established socket
> by accept()?  To solve the problem that you have,
> does it mean migrating req_sk (the in-progress 3WHS) is enough?

Ideally, the application needs to drain only the accepted sockets, because
the 3WHS and tying a connection to a listener are just kernel behaviour.
Also, there are some cases where we want to apply new configurations as
soon as possible, such as replacing TLS certificates.

It is possible to drain the established sockets by accept(), but the
sockets in the accept queue have not started application sessions yet. So,
if we did not have to drain such sockets (or if the kernel happened to
select another listener for them), we could apply the new settings much
earlier.

Moreover, the established sockets may carry long-standing connections, so
draining can take a long time and we may have to force-close them (whereas
they would live longer if they were migrated to a new listener).


> Applications can already use the bpf prog to do (1) and divert
> the SYN to the newly started process.
> 
> If the application cares about service disruption,
> it usually needs to drain the fd(s) that it already has and
> finishes serving the pending request (e.g. https) on them anyway.
> The time taking to finish those could already be longer than it takes
> to drain the accept queue or finish off the 3WHS in reasonable time.
> or the application that you have does not need to drain the fd(s)
> it already has and it can close them immediately?

From the point of view of service disruption, I agree with you.

However, I think there are situations where we want to apply new
configurations rather than drain sockets with old configurations, and if
the kernel migrates sockets automatically, we can simplify user programs.
Martin KaFai Lau Nov. 20, 2020, 1:53 a.m. UTC | #11
On Fri, Nov 20, 2020 at 07:09:22AM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau <kafai@fb.com>
> Date: Wed, 18 Nov 2020 15:50:17 -0800
> > On Tue, Nov 17, 2020 at 06:40:18PM +0900, Kuniyuki Iwashima wrote:
> > > This patch lets reuseport_detach_sock() return a pointer of struct sock,
> > > which is used only by inet_unhash(). If it is not NULL,
> > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > sockets from the closing listener to the selected one.
> > > 
> > > Listening sockets hold incoming connections as a linked list of struct
> > > request_sock in the accept queue, and each request has reference to a full
> > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we unlink the
> > > requests from the closing listener's queue and relink them to the head of
> > > the new listener's queue. We do not process each request, so the migration
> > > completes in O(1) time complexity. However, in the case of TCP_SYN_RECV
> > > sockets, we will take special care in the next commit.
> > > 
> > > By default, we select the last element of socks[] as the new listener.
> > > This behaviour is based on how the kernel moves sockets in socks[].
> > > 
> > > For example, we call listen() for four sockets (A, B, C, D), and close the
> > > first two by turns. The sockets move in socks[] like below. (See also [1])
> > > 
> > >   socks[0] : A <-.      socks[0] : D          socks[0] : D
> > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > >   socks[2] : C   |      socks[2] : C --'
> > >   socks[3] : D --'
> > > 
> > > Then, if C and D have newer settings than A and B, and each socket has a
> > > request (a, b, c, d) in their accept queue, we can redistribute old
> > > requests evenly to new listeners.
> > I don't think it should emphasize/claim there is a specific way that
> > the kernel-pick here can redistribute the requests evenly.  It depends on
> > how the application close/listen.  The userspace can not expect the
> > ordering of socks[] will behave in a certain way.
> 
> I've expected replacing listeners by generations as a general use case.
> But exactly. Users should not expect the undocumented kernel internal.
> 
> > The primary redistribution policy has to depend on BPF which is the
> > policy defined by the user based on its application logic (e.g. how
> > its binary restart work).  The application (and bpf) knows which one
> > is a dying process and can avoid distributing to it.
> > 
> > The kernel-pick could be an optional fallback but not a must.  If the bpf
> > prog is attached, I would even go further to call bpf to redistribute
> > regardless of the sysctl, so I think the sysctl is not necessary.
> 
> I also think it is just an optional fallback, but to pick out a different
> listener everytime, choosing the moved socket was reasonable. So the even
> redistribution for a specific use case is a side effect of such socket
> selection.
> 
> But, users should decide to use either way:
>   (1) let the kernel select a new listener randomly
>   (2) select a particular listener by eBPF
> 
> I will update the commit message like:
> The kernel selects a new listener randomly, but as the side effect, it can
> redistribute packets evenly for a specific case where an application
> replaces listeners by generations.

Since there was no feedback on the sysctl point, maybe it was missed in
the lines above.

I don't think this migration logic should depend on a sysctl.
At least not when a bpf prog is attached that is capable of doing
migration, it is too fragile to ask user to remember to turn on
the sysctl before attaching the bpf prog.

Is your use case primarily based on a bpf prog to pick, or only on the
kernel doing a random pick?

Also, IIUC, this sysctl setting sticks at "*reuse", there is no way to
change it until all the listening sockets are closed which is exactly
the service disruption problem this series is trying to solve here.
Martin KaFai Lau Nov. 20, 2020, 2:31 a.m. UTC | #12
On Fri, Nov 20, 2020 at 07:17:49AM +0900, Kuniyuki Iwashima wrote:
> From:   Martin KaFai Lau <kafai@fb.com>
> Date:   Wed, 18 Nov 2020 17:49:13 -0800
> > On Tue, Nov 17, 2020 at 06:40:15PM +0900, Kuniyuki Iwashima wrote:
> > > The SO_REUSEPORT option allows sockets to listen on the same port and to
> > > accept connections evenly. However, there is a defect in the current
> > > implementation. When a SYN packet is received, the connection is tied to a
> > > listening socket. Accordingly, when the listener is closed, in-flight
> > > requests during the three-way handshake and child sockets in the accept
> > > queue are dropped even if other listeners could accept such connections.
> > > 
> > > This situation can happen when various server management tools restart
> > > server (such as nginx) processes. For instance, when we change nginx
> > > configurations and restart it, it spins up new workers that respect the new
> > > configuration and closes all listeners on the old workers, resulting in
> > > in-flight ACK of 3WHS is responded by RST.
> > > 
> > > As a workaround for this issue, we can do connection draining by eBPF:
> > > 
> > >   1. Before closing a listener, stop routing SYN packets to it.
> > >   2. Wait enough time for requests to complete 3WHS.
> > >   3. Accept connections until EAGAIN, then close the listener.
> > > 
> > > Although this approach seems to work well, EAGAIN has nothing to do with
> > > how many requests are still during 3WHS. Thus, we have to know the number
> > It sounds like the application can already drain the established socket
> > by accept()?  To solve the problem that you have,
> > does it mean migrating req_sk (the in-progress 3WHS) is enough?
> 
> Ideally, the application needs to drain only the accepted sockets because
> 3WHS and tying a connection to a listener are just kernel behaviour. Also,
> there are some cases where we want to apply new configurations as soon as
> possible such as replacing TLS certificates.
> 
> It is possible to drain the established sockets by accept(), but the
> sockets in the accept queue have not started application sessions yet. So,
> if we do not drain such sockets (or if the kernel happened to select
> another listener), we can (could) apply the new settings much earlier.
> 
> Moreover, the established sockets may start long-standing connections so
> that we cannot complete draining for a long time and may have to
> force-close them (and they would have longer lifetime if they are migrated
> to a new listener).
> 
> 
> > Applications can already use the bpf prog to do (1) and divert
> > the SYN to the newly started process.
> > 
> > If the application cares about service disruption,
> > it usually needs to drain the fd(s) that it already has and
> > finishes serving the pending request (e.g. https) on them anyway.
> > The time taking to finish those could already be longer than it takes
> > to drain the accept queue or finish off the 3WHS in reasonable time.
> > or the application that you have does not need to drain the fd(s)
> > it already has and it can close them immediately?
> 
> In the point of view of service disruption, I agree with you.
> 
> However, I think that there are some situations where we want to apply new
> configurations rather than to drain sockets with old configurations and
> that if the kernel migrates sockets automatically, we can simplify user
> programs.

This configuration-update(/new-TLS-cert...etc) consideration will be useful
if it is also included in the cover letter.

It sounds like the service that you have is draining the existing
already-accepted fd(s) which are using the old configuration.
Those existing fd(s) could also be long-lived.  Potentially those
existing fd(s) will be far more numerous than the
to-be-accepted fd(s)?

Or did you mean that in some cases it wants to migrate to the new
configuration ASAP (e.g. for a security reason) even if it has to close
all the already-accepted fd(s) which are using the old configuration?

In either case, considering the already-accepted fd(s) are usually far
more numerous, do the to-be-accepted connections make any difference
percentage-wise?
Kuniyuki Iwashima Nov. 21, 2020, 10:13 a.m. UTC | #13
From:   Martin KaFai Lau <kafai@fb.com>
Date:   Thu, 19 Nov 2020 17:53:46 -0800
> On Fri, Nov 20, 2020 at 07:09:22AM +0900, Kuniyuki Iwashima wrote:
> > From: Martin KaFai Lau <kafai@fb.com>
> > Date: Wed, 18 Nov 2020 15:50:17 -0800
> > > On Tue, Nov 17, 2020 at 06:40:18PM +0900, Kuniyuki Iwashima wrote:
> > > > This patch lets reuseport_detach_sock() return a pointer of struct sock,
> > > > which is used only by inet_unhash(). If it is not NULL,
> > > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > > sockets from the closing listener to the selected one.
> > > > 
> > > > Listening sockets hold incoming connections as a linked list of struct
> > > > request_sock in the accept queue, and each request has reference to a full
> > > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we unlink the
> > > > requests from the closing listener's queue and relink them to the head of
> > > > the new listener's queue. We do not process each request, so the migration
> > > > completes in O(1) time complexity. However, in the case of TCP_SYN_RECV
> > > > sockets, we will take special care in the next commit.
> > > > 
> > > > By default, we select the last element of socks[] as the new listener.
> > > > This behaviour is based on how the kernel moves sockets in socks[].
> > > > 
> > > > For example, we call listen() for four sockets (A, B, C, D), and close the
> > > > first two by turns. The sockets move in socks[] like below. (See also [1])
> > > > 
> > > >   socks[0] : A <-.      socks[0] : D          socks[0] : D
> > > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > > >   socks[2] : C   |      socks[2] : C --'
> > > >   socks[3] : D --'
> > > > 
> > > > Then, if C and D have newer settings than A and B, and each socket has a
> > > > request (a, b, c, d) in their accept queue, we can redistribute old
> > > > requests evenly to new listeners.
> > > I don't think it should emphasize/claim there is a specific way that
> > > the kernel-pick here can redistribute the requests evenly.  It depends on
> > > how the application close/listen.  The userspace can not expect the
> > > ordering of socks[] will behave in a certain way.
> > 
> > I've expected replacing listeners by generations as a general use case.
> > But exactly. Users should not expect the undocumented kernel internal.
> > 
> > 
> > > The primary redistribution policy has to depend on BPF which is the
> > > policy defined by the user based on its application logic (e.g. how
> > > its binary restart work).  The application (and bpf) knows which one
> > > is a dying process and can avoid distributing to it.
> > > 
> > > The kernel-pick could be an optional fallback but not a must.  If the bpf
> > > prog is attached, I would even go further to call bpf to redistribute
> > > regardless of the sysctl, so I think the sysctl is not necessary.
> > 
> > I also think it is just an optional fallback, but to pick out a different
> > listener everytime, choosing the moved socket was reasonable. So the even
> > redistribution for a specific use case is a side effect of such socket
> > selection.
> > 
> > But, users should decide to use either way:
> >   (1) let the kernel select a new listener randomly
> >   (2) select a particular listener by eBPF
> > 
> > I will update the commit message like:
> > The kernel selects a new listener randomly, but as the side effect, it can
> > redistribute packets evenly for a specific case where an application
> > replaces listeners by generations.
> Since there is no feedback on sysctl, so may be something missed
> in the lines.

I'm sorry, I have missed this point while thinking about each reply...


> I don't think this migration logic should depend on a sysctl.
> At least not when a bpf prog is attached that is capable of doing
> migration, it is too fragile to ask user to remember to turn on
> the sysctl before attaching the bpf prog.
> 
> Your use case is to primarily based on bpf prog to pick or only based
> on kernel to do a random pick?


I think we have to care about both cases.

I think we can always enable the migration feature if no eBPF prog is
attached. On the other hand, if a BPF_PROG_TYPE_SK_REUSEPORT prog is
attached to select a listener by some rules, then when updating the kernel,
redistributing requests without the user's intention can break the
application. So, something is needed to confirm the user's intention, at
least if an eBPF prog is attached.

But honestly, I believe such eBPF users can follow this change and
implement a migration eBPF prog if we introduce such a breaking change.


> Also, IIUC, this sysctl setting sticks at "*reuse", there is no way to
> change it until all the listening sockets are closed which is exactly
> the service disruption problem this series is trying to solve here.


Oh, exactly...
If we apply this series by live patching, we cannot enable the feature
without service disruption.

To enable the migration feature dynamically, how about this logic?
In this logic, we do not save the sysctl value but check it each time.

  1. no eBPF prog attached -> ON
  2. eBPF prog attached and sysctl is 0 -> OFF
  3. eBPF prog attached and sysctl is 1 -> ON

So, replacing

  if (reuse->migrate_req)

with

  if (!reuse->prog || net->ipv4.sysctl_tcp_migrate_req)
Kuniyuki Iwashima Nov. 21, 2020, 10:16 a.m. UTC | #14
From:   Martin KaFai Lau <kafai@fb.com>
Date:   Thu, 19 Nov 2020 18:31:57 -0800
> On Fri, Nov 20, 2020 at 07:17:49AM +0900, Kuniyuki Iwashima wrote:
> > From:   Martin KaFai Lau <kafai@fb.com>
> > Date:   Wed, 18 Nov 2020 17:49:13 -0800
> > > On Tue, Nov 17, 2020 at 06:40:15PM +0900, Kuniyuki Iwashima wrote:
> > > > The SO_REUSEPORT option allows sockets to listen on the same port and to
> > > > accept connections evenly. However, there is a defect in the current
> > > > implementation. When a SYN packet is received, the connection is tied to a
> > > > listening socket. Accordingly, when the listener is closed, in-flight
> > > > requests during the three-way handshake and child sockets in the accept
> > > > queue are dropped even if other listeners could accept such connections.
> > > > 
> > > > This situation can happen when various server management tools restart
> > > > server (such as nginx) processes. For instance, when we change nginx
> > > > configurations and restart it, it spins up new workers that respect the new
> > > > configuration and closes all listeners on the old workers, resulting in
> > > > in-flight ACK of 3WHS is responded by RST.
> > > > 
> > > > As a workaround for this issue, we can do connection draining by eBPF:
> > > > 
> > > >   1. Before closing a listener, stop routing SYN packets to it.
> > > >   2. Wait enough time for requests to complete 3WHS.
> > > >   3. Accept connections until EAGAIN, then close the listener.
> > > > 
> > > > Although this approach seems to work well, EAGAIN has nothing to do with
> > > > how many requests are still during 3WHS. Thus, we have to know the number
> > > It sounds like the application can already drain the established socket
> > > by accept()?  To solve the problem that you have,
> > > does it mean migrating req_sk (the in-progress 3WHS) is enough?
> > 
> > Ideally, the application needs to drain only the accepted sockets because
> > 3WHS and tying a connection to a listener are just kernel behaviour. Also,
> > there are some cases where we want to apply new configurations as soon as
> > possible such as replacing TLS certificates.
> > 
> > It is possible to drain the established sockets by accept(), but the
> > sockets in the accept queue have not started application sessions yet. So,
> > if we do not drain such sockets (or if the kernel happened to select
> > another listener), we can (could) apply the new settings much earlier.
> > 
> > Moreover, the established sockets may start long-standing connections so
> > that we cannot complete draining for a long time and may have to
> > force-close them (and they would have longer lifetime if they are migrated
> > to a new listener).
> > 
> > 
> > > Applications can already use the bpf prog to do (1) and divert
> > > the SYN to the newly started process.
> > > 
> > > If the application cares about service disruption,
> > > it usually needs to drain the fd(s) that it already has and
> > > finishes serving the pending request (e.g. https) on them anyway.
> > > The time taking to finish those could already be longer than it takes
> > > to drain the accept queue or finish off the 3WHS in reasonable time.
> > > or the application that you have does not need to drain the fd(s)
> > > it already has and it can close them immediately?
> > 
> > In the point of view of service disruption, I agree with you.
> > 
> > However, I think that there are some situations where we want to apply new
> > configurations rather than to drain sockets with old configurations and
> > that if the kernel migrates sockets automatically, we can simplify user
> > programs.
> This configuration-update(/new-TLS-cert...etc) consideration will be useful
> if it is also included in the cover letter.


I will add this to the next cover letter.


> It sounds like the service that you have is draining the existing
> already-accepted fd(s) which are using the old configuration.
> Those existing fd(s) could also be long life.  Potentially those
> existing fd(s) will be in a much higher number than the
> to-be-accepted fd(s)?


In many cases, yes.


> or you meant in some cases it wants to migrate to the new configuration
> ASAP (e.g. for security reason) even it has to close all the
> already-accepted fds() which are using the old configuration??


And sometimes, yes.
As you expected, for some reasons including security, there are cases where
we have to prioritize closing connections over completing them.

For example, HTTP/1.1 is often short-lived, and we can complete draining
immediately. However, sometimes it can become long-lived by upgrading to
WebSocket. Then we may not be able to wait for draining to finish.


> In either cases, considering the already-accepted fd(s)
> is usually in a much more number, does the to-be-accepted
> connection make any difference percentage-wise?


It is difficult to drain all connections in every case, but we can decrease
the number of such aborted connections by migration. In that sense, I think
migration is always better than draining.
Martin KaFai Lau Nov. 23, 2020, 12:40 a.m. UTC | #15
On Sat, Nov 21, 2020 at 07:13:22PM +0900, Kuniyuki Iwashima wrote:
> From:   Martin KaFai Lau <kafai@fb.com>
> Date:   Thu, 19 Nov 2020 17:53:46 -0800
> > On Fri, Nov 20, 2020 at 07:09:22AM +0900, Kuniyuki Iwashima wrote:
> > > From: Martin KaFai Lau <kafai@fb.com>
> > > Date: Wed, 18 Nov 2020 15:50:17 -0800
> > > > On Tue, Nov 17, 2020 at 06:40:18PM +0900, Kuniyuki Iwashima wrote:
> > > > > This patch lets reuseport_detach_sock() return a pointer of struct sock,
> > > > > which is used only by inet_unhash(). If it is not NULL,
> > > > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > > > sockets from the closing listener to the selected one.
> > > > > 
> > > > > Listening sockets hold incoming connections as a linked list of struct
> > > > > request_sock in the accept queue, and each request has reference to a full
> > > > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we unlink the
> > > > > requests from the closing listener's queue and relink them to the head of
> > > > > the new listener's queue. We do not process each request, so the migration
> > > > > completes in O(1) time complexity. However, in the case of TCP_SYN_RECV
> > > > > sockets, we will take special care in the next commit.
> > > > > 
> > > > > By default, we select the last element of socks[] as the new listener.
> > > > > This behaviour is based on how the kernel moves sockets in socks[].
> > > > > 
> > > > > For example, we call listen() for four sockets (A, B, C, D), and close the
> > > > > first two by turns. The sockets move in socks[] like below. (See also [1])
> > > > > 
> > > > >   socks[0] : A <-.      socks[0] : D          socks[0] : D
> > > > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > > > >   socks[2] : C   |      socks[2] : C --'
> > > > >   socks[3] : D --'
> > > > > 
> > > > > Then, if C and D have newer settings than A and B, and each socket has a
> > > > > request (a, b, c, d) in their accept queue, we can redistribute old
> > > > > requests evenly to new listeners.
> > > > I don't think it should emphasize/claim there is a specific way that
> > > > the kernel-pick here can redistribute the requests evenly.  It depends on
> > > > how the application close/listen.  The userspace can not expect the
> > > > ordering of socks[] will behave in a certain way.
> > > 
> > > I've expected replacing listeners by generations as a general use case.
> > > But exactly. Users should not expect the undocumented kernel internal.
> > > 
> > > 
> > > > The primary redistribution policy has to depend on BPF which is the
> > > > policy defined by the user based on its application logic (e.g. how
> > > > its binary restart work).  The application (and bpf) knows which one
> > > > is a dying process and can avoid distributing to it.
> > > > 
> > > > The kernel-pick could be an optional fallback but not a must.  If the bpf
> > > > prog is attached, I would even go further to call bpf to redistribute
> > > > regardless of the sysctl, so I think the sysctl is not necessary.
> > > 
> > > I also think it is just an optional fallback, but to pick out a different
> > > listener everytime, choosing the moved socket was reasonable. So the even
> > > redistribution for a specific use case is a side effect of such socket
> > > selection.
> > > 
> > > But, users should decide to use either way:
> > >   (1) let the kernel select a new listener randomly
> > >   (2) select a particular listener by eBPF
> > > 
> > > I will update the commit message like:
> > > The kernel selects a new listener randomly, but as the side effect, it can
> > > redistribute packets evenly for a specific case where an application
> > > replaces listeners by generations.
> > Since there is no feedback on sysctl, so may be something missed
> > in the lines.
> 
> I'm sorry, I have missed this point while thinking about each reply...
> 
> 
> > I don't think this migration logic should depend on a sysctl.
> > At least not when a bpf prog is attached that is capable of doing
> > migration, it is too fragile to ask user to remember to turn on
> > the sysctl before attaching the bpf prog.
> > 
> > Your use case is to primarily based on bpf prog to pick or only based
> > on kernel to do a random pick?
Again, what is your primary use case?

> 
> I think we have to care about both cases.
> 
> I think we can always enable the migration feature if eBPF prog is not
> attached. On the other hand, if BPF_PROG_TYPE_SK_REUSEPORT prog is attached
> to select a listener by some rules, along updating the kernel,
> redistributing requests without user intention can break the application.
> So, there is something needed to confirm user intension at least if eBPF
> prog is attached.

Right, something being able to tell if the bpf prog can do migration
can confirm the user intention here.  However, this will not be a
sysctl.

A new bpf_attach_type "BPF_SK_REUSEPORT_SELECT_OR_MIGRATE" can be added.
"prog->expected_attach_type == BPF_SK_REUSEPORT_SELECT_OR_MIGRATE"
can be used to decide if migration can be done by the bpf prog.
Although the prog->expected_attach_type has not been checked for
BPF_PROG_TYPE_SK_REUSEPORT, there was an earlier discussion
that the risk of breaking is very small and is acceptable.

Instead of depending on !reuse_md->data to decide if it
is doing migration or not, a clearer signal should be given
to the bpf prog.  A "u8 migration" can be added to "struct sk_reuseport_kern"
(and to "struct sk_reuseport_md" accordingly).  It can tell
the bpf prog that it is doing migration.  It should also tell if it is
migrating a list of established sk(s) or an individual req_sk.
Accessing "reuse_md->migration" should only be allowed for
BPF_SK_REUSEPORT_SELECT_OR_MIGRATE during is_valid_access().

During migration, if an skb is not available, an empty skb can be used.
Migration is a slow path and does not happen very often, so it will
be fine even if it has to create a temp skb (or maybe a static const skb
can be used; not sure, but these are implementation details).

> 
> But honestly, I believe such eBPF users can follow this change and
> implement migration eBPF prog if we introduce such a breaking change.
> 
> 
> > Also, IIUC, this sysctl setting sticks at "*reuse", there is no way to
> > change it until all the listening sockets are closed which is exactly
> > the service disruption problem this series is trying to solve here.
> 
> Oh, exactly...
> If we apply this series by live patching, we cannot enable the feature
> without service disruption.
> 
> To enable the migration feature dynamically, how about this logic?
> In this logic, we do not save the sysctl value and check it at each time.
> 
>   1. no eBPF prog attached -> ON
>   2. eBPF prog attached and sysctl is 0 -> OFF

No.  When a bpf prog is attached and it clearly signals (expected_attach_type
here) that it can do migration, it should not depend on anything else.  It is
very confusing to use otherwise.  When a prog is successfully loaded, verified
and attached, it is expected to run.

This sysctl essentially only disables the bpf prog with
type == BPF_PROG_TYPE_SK_REUSEPORT running at a particular point.
This is going down a path of having another sysctl in the future
to disable another bpf prog type.  If there were a need to disable
bpf progs on a type-by-type basis, it would need a more
generic solution on the bpf side, done in a consistent way
for all prog types.  It needs a separate and longer discussion.

All behaviors of the BPF_SK_REUSEPORT_SELECT_OR_MIGRATE bpf prog
should not depend on this sysctl at all.

/* Pseudo code to show the idea only.
 * Actual implementation should try to fit
 * better into the current code and should look
 * quite different from here.
 */

if ((prog && prog->expected_attach_type == BPF_SK_REUSEPORT_SELECT_OR_MIGRATE)) {
	/* call bpf to migrate */
	action = BPF_PROG_RUN(prog, &reuse_kern);

	if (action == SK_PASS) {
		if (!reuse_kern.selected_sk)
			/* fallback to kernel random pick */
		else
			/* migrate to reuse_kern.selected_sk */
	} else {
		/* action == SK_DROP. don't do migration at all and
		 * don't fallback to kernel random pick.
		 */ 
	}
}

Going back to the sysctl: with BPF_SK_REUSEPORT_SELECT_OR_MIGRATE,
do you still have a need to add sysctl_tcp_migrate_req?
Regardless, if there is still a need,
the documentation for sysctl_tcp_migrate_req should be something like:
"the kernel will do a random pick when there is no bpf prog
 attached to the reuseport group...."

[ ps, my reply will be slow in this week. ]

>   3. eBPF prog attached and sysctl is 1 -> ON
> 
> So, replacing
> 
>   if (reuse->migrate_req)
> 
> to
> 
>   if (!reuse->prog || net->ipv4.sysctl_tcp_migrate_req)
Kuniyuki Iwashima Nov. 24, 2020, 9:24 a.m. UTC | #16
From:   Martin KaFai Lau <kafai@fb.com>
Date:   Sun, 22 Nov 2020 16:40:20 -0800
> On Sat, Nov 21, 2020 at 07:13:22PM +0900, Kuniyuki Iwashima wrote:
> > From:   Martin KaFai Lau <kafai@fb.com>
> > Date:   Thu, 19 Nov 2020 17:53:46 -0800
> > > On Fri, Nov 20, 2020 at 07:09:22AM +0900, Kuniyuki Iwashima wrote:
> > > > From: Martin KaFai Lau <kafai@fb.com>
> > > > Date: Wed, 18 Nov 2020 15:50:17 -0800
> > > > > On Tue, Nov 17, 2020 at 06:40:18PM +0900, Kuniyuki Iwashima wrote:
> > > > > > This patch lets reuseport_detach_sock() return a pointer of struct sock,
> > > > > > which is used only by inet_unhash(). If it is not NULL,
> > > > > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > > > > sockets from the closing listener to the selected one.
> > > > > > 
> > > > > > Listening sockets hold incoming connections as a linked list of struct
> > > > > > request_sock in the accept queue, and each request has reference to a full
> > > > > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we unlink the
> > > > > > requests from the closing listener's queue and relink them to the head of
> > > > > > the new listener's queue. We do not process each request, so the migration
> > > > > > completes in O(1) time complexity. However, in the case of TCP_SYN_RECV
> > > > > > sockets, we will take special care in the next commit.
> > > > > > 
> > > > > > By default, we select the last element of socks[] as the new listener.
> > > > > > This behaviour is based on how the kernel moves sockets in socks[].
> > > > > > 
> > > > > > For example, we call listen() for four sockets (A, B, C, D), and close the
> > > > > > first two by turns. The sockets move in socks[] like below. (See also [1])
> > > > > > 
> > > > > >   socks[0] : A <-.      socks[0] : D          socks[0] : D
> > > > > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > > > > >   socks[2] : C   |      socks[2] : C --'
> > > > > >   socks[3] : D --'
> > > > > > 
> > > > > > Then, if C and D have newer settings than A and B, and each socket has a
> > > > > > request (a, b, c, d) in their accept queue, we can redistribute old
> > > > > > requests evenly to new listeners.
> > > > > I don't think it should emphasize/claim there is a specific way that
> > > > > the kernel-pick here can redistribute the requests evenly.  It depends on
> > > > > how the application close/listen.  The userspace can not expect the
> > > > > ordering of socks[] will behave in a certain way.
> > > > 
> > > > I've expected replacing listeners by generations as a general use case.
> > > > But exactly. Users should not expect the undocumented kernel internal.
> > > > 
> > > > 
> > > > > The primary redistribution policy has to depend on BPF which is the
> > > > > policy defined by the user based on its application logic (e.g. how
> > > > > its binary restart work).  The application (and bpf) knows which one
> > > > > is a dying process and can avoid distributing to it.
> > > > > 
> > > > > The kernel-pick could be an optional fallback but not a must.  If the bpf
> > > > > prog is attached, I would even go further to call bpf to redistribute
> > > > > regardless of the sysctl, so I think the sysctl is not necessary.
> > > > 
> > > > I also think it is just an optional fallback, but to pick out a different
> > > > listener everytime, choosing the moved socket was reasonable. So the even
> > > > redistribution for a specific use case is a side effect of such socket
> > > > selection.
> > > > 
> > > > But, users should decide to use either way:
> > > >   (1) let the kernel select a new listener randomly
> > > >   (2) select a particular listener by eBPF
> > > > 
> > > > I will update the commit message like:
> > > > The kernel selects a new listener randomly, but as the side effect, it can
> > > > redistribute packets evenly for a specific case where an application
> > > > replaces listeners by generations.
> > > Since there is no feedback on sysctl, so may be something missed
> > > in the lines.
> > 
> > I'm sorry, I have missed this point while thinking about each reply...
> > 
> > 
> > > I don't think this migration logic should depend on a sysctl.
> > > At least not when a bpf prog is attached that is capable of doing
> > > migration, it is too fragile to ask user to remember to turn on
> > > the sysctl before attaching the bpf prog.
> > > 
> > > Your use case is to primarily based on bpf prog to pick or only based
> > > on kernel to do a random pick?
> Again, what is your primarily use case?


We have so many services and components that I cannot grasp all of their
implementations, but I have started this series because a service component
based on the random pick by the kernel suffered from the issue.


> > I think we have to care about both cases.
> > 
> > I think we can always enable the migration feature if eBPF prog is not
> > attached. On the other hand, if BPF_PROG_TYPE_SK_REUSEPORT prog is attached
> > to select a listener by some rules, along updating the kernel,
> > redistributing requests without user intention can break the application.
> > So, there is something needed to confirm user intension at least if eBPF
> > prog is attached.
> Right, something being able to tell if the bpf prog can do migration
> can confirm the user intention here.  However, this will not be a
> sysctl.
> 
> A new bpf_attach_type "BPF_SK_REUSEPORT_SELECT_OR_MIGRATE" can be added.
> "prog->expected_attach_type == BPF_SK_REUSEPORT_SELECT_OR_MIGRATE"
> can be used to decide if migration can be done by the bpf prog.
> Although the prog->expected_attach_type has not been checked for
> BPF_PROG_TYPE_SK_REUSEPORT, there was an earlier discussion
> that the risk of breaking is very small and is acceptable.
> 
> Instead of depending on !reuse_md->data to decide if it
> is doing migration or not, a clearer signal should be given
> to the bpf prog.  A "u8 migration" can be added to "struct sk_reuseport_kern"
> (and to "struct sk_reuseport_md" accordingly).  It can tell
> the bpf prog that it is doing migration.  It should also tell if it is
> migrating a list of established sk(s) or an individual req_sk.
> Accessing "reuse_md->migration" should only be allowed for
> BPF_SK_REUSEPORT_SELECT_OR_MIGRATE during is_valid_access().
> 
> During migration, if skb is not available, an empty skb can be used.
> Migration is a slow path and does not happen very often, so it will
> be fine even it has to create a temp skb (or may be a static const skb
> can be used, not sure but this is implementation details).

I greatly appreciate your detailed idea and explanation!
I will try to implement this.


> > But honestly, I believe such eBPF users can follow this change and
> > implement migration eBPF prog if we introduce such a breaking change.
> > 
> > 
> > > Also, IIUC, this sysctl setting sticks at "*reuse", there is no way to
> > > change it until all the listening sockets are closed which is exactly
> > > the service disruption problem this series is trying to solve here.
> > 
> > Oh, exactly...
> > If we apply this series by live patching, we cannot enable the feature
> > without service disruption.
> > 
> > To enable the migration feature dynamically, how about this logic?
> > In this logic, we do not save the sysctl value and check it at each time.
> > 
> >   1. no eBPF prog attached -> ON
> >   2. eBPF prog attached and sysctl is 0 -> OFF
> No.  When bpf prog is attached and it clearly signals (expected_attach_type
> here) it can do migration, it should not depend on anything else.  It is very
> confusing to use.  When a prog is successfully loaded, verified
> and attached, it is expected to run.
> 
> This sysctl essentially only disables the bpf prog with
> type == BPF_PROG_TYPE_SK_REUSEPORT running at a particular point.
> This is going down a path that having another sysctl in the future
> to disable another bpf prog type.  If there would be a need to disable
> bpf prog on a type-by-type bases, it would need a more
> generic solution on the bpf side and do it in a consistent way
> for all prog types.  It needs a separate and longer discussion.
> 
> All behaviors of the BPF_SK_REUSEPORT_SELECT_OR_MIGRATE bpf prog
> should not depend on this sysctl at all .
> 
> /* Pseudo code to show the idea only.
>  * Actual implementation should try to fit
>  * better into the current code and should look
>  * quite different from here.
>  */
> 
> if ((prog && prog->expected_attach_type == BPF_SK_REUSEPORT_SELECT_OR_MIGRATE)) {
> 	/* call bpf to migrate */
> 	action = BPF_PROG_RUN(prog, &reuse_kern);
> 
> 	if (action == SK_PASS) {
> 		if (!reuse_kern.selected_sk)
> 			/* fallback to kernel random pick */
> 		else
> 			/* migrate to reuse_kern.selected_sk */
> 	} else {
> 		/* action == SK_DROP. don't do migration at all and
> 		 * don't fallback to kernel random pick.
> 		 */
> 	}
> }
> 
> Going back to the sysctl, with BPF_SK_REUSEPORT_SELECT_OR_MIGRATE,
> do you still have a need on adding sysctl_tcp_migrate_req?


No, now I do not think the option should be a sysctl.
It will be BPF_SK_REUSEPORT_SELECT_OR_MIGRATE in the next series.
Thank you!


> Regardless, if there is still a need,
> the document for sysctl_tcp_migrate_req should be something like:
> "the kernel will do a random pick when there is no bpf prog
>  attached to the reuseport group...."
> 
> [ ps, my reply will be slow in this week. ]