Message ID: cover.1606413118.git.pabeni@redhat.com
Series: mptcp: avoid workqueue usage for data
On Fri, 27 Nov 2020, Paolo Abeni wrote:
> This allows invoking an additional callback under the
> socket spin lock.
>
> Will be used by the next patches to avoid additional
> spin lock contention.
>
> Acked-by: Florian Westphal <fw@strlen.de>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
>  include/net/sock.h   |  1 +
>  net/core/sock.c      |  2 +-
>  net/mptcp/protocol.h | 13 +++++++++++++
>  3 files changed, 15 insertions(+), 1 deletion(-)

Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>

--
Mat Martineau
Intel
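The diffstat above omits the patch body, but a lock_sock() variant that runs a
callback while still holding the socket spinlock could look roughly like the
sketch below. The mptcp_lock_sock() name comes from the next patch in this
series; the body is an assumption, including the guess that the one-line
changes to include/net/sock.h and net/core/sock.c simply export __lock_sock():

/* Sketch only, not the applied diff: the cb statement runs while
 * sk_lock.slock is still held, before lock ownership is taken.
 */
#define mptcp_lock_sock(___sk, cb) do {					\
	struct sock *__sk = (___sk); /* evaluate the argument once */	\
	might_sleep();							\
	spin_lock_bh(&__sk->sk_lock.slock);				\
	if (__sk->sk_lock.owned)					\
		__lock_sock(__sk);	/* wait for the owner */	\
	cb;				/* runs under the spinlock */	\
	__sk->sk_lock.owned = 1;					\
	spin_unlock(&__sk->sk_lock.slock);				\
	/* from here on, sk_lock behaves like a mutex */		\
	mutex_acquire(&__sk->sk_lock.dep_map, 0, 0, _RET_IP_);		\
	local_bh_enable();						\
} while (0)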
On Fri, 27 Nov 2020, Paolo Abeni wrote:
> Such spinlock is currently used only to protect the 'owned'
> flag inside the socket lock itself. With this patch, we extend
> its scope to protect the whole msk receive path and
> sk_forward_memory.
>
> Given the above, we can always move data into the msk receive
> queue (and OoO queue) from the subflow.
>
> We leverage the previous commit, so that we need to acquire the
> spinlock in the tx path only when moving fwd memory.
>
> recvmsg() must now explicitly acquire the socket spinlock
> when moving skbs out of sk_receive_queue. To reduce the number of
> lock operations required we use a second rx queue and splice the
> first into the latter in mptcp_lock_sock(). Additionally rmem
> allocated memory is bulk-freed via release_cb()
>
> Acked-by: Florian Westphal <fw@strlen.de>
> Co-developed-by: Florian Westphal <fw@strlen.de>
> Signed-off-by: Florian Westphal <fw@strlen.de>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
>  net/mptcp/protocol.c | 149 +++++++++++++++++++++++++++++--------------
>  net/mptcp/protocol.h |   5 ++
>  2 files changed, 107 insertions(+), 47 deletions(-)

Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>

--
Mat Martineau
Intel
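The scheme described in that commit message can be sketched as follows. The
mptcp_data_lock()/mptcp_data_unlock() helpers reflect the description (the
data lock is the socket lock's own spinlock); the receive_queue and
rmem_released field names and the function bodies are illustrative
assumptions, not the applied diff:

/* The msk data lock is the socket lock's own spinlock (sketch). */
static inline void mptcp_data_lock(struct sock *sk)
{
	spin_lock_bh(&sk->sk_lock.slock);
}

static inline void mptcp_data_unlock(struct sock *sk)
{
	spin_unlock_bh(&sk->sk_lock.slock);
}

/* recvmsg() path: splice skbs queued by the subflows into a second
 * queue touched only by the lock owner, so one lock/unlock pair
 * covers a whole burst of packets instead of one operation per skb.
 */
static bool __mptcp_splice_receive_queue(struct sock *sk)
{
	struct mptcp_sock *msk = mptcp_sk(sk);
	bool moved;

	mptcp_data_lock(sk);
	moved = !skb_queue_empty(&sk->sk_receive_queue);
	skb_queue_splice_tail_init(&sk->sk_receive_queue,
				   &msk->receive_queue);
	mptcp_data_unlock(sk);

	return moved;
}

/* release_cb() path: return the rmem consumed while reading in one
 * bulk update instead of a per-skb atomic_sub().
 */
static void __mptcp_update_rmem(struct sock *sk)
{
	struct mptcp_sock *msk = mptcp_sk(sk);

	if (!msk->rmem_released)
		return;

	atomic_sub(msk->rmem_released, &sk->sk_rmem_alloc);
	sk_mem_uncharge(sk, msk->rmem_released);
	msk->rmem_released = 0;
}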
On Fri, 27 Nov 2020, Paolo Abeni wrote:
> Extending the data_lock scope in mptcp_incoming_option
> we can use that to protect both snd_una and wnd_end.
> In the typical case, we will have a single atomic op instead of 2
>
> Acked-by: Florian Westphal <fw@strlen.de>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
>  net/mptcp/mptcp_diag.c |  2 +-
>  net/mptcp/options.c    | 33 +++++++++++++--------------------
>  net/mptcp/protocol.c   | 34 ++++++++++++++++------------------
>  net/mptcp/protocol.h   |  8 ++++----
>  4 files changed, 34 insertions(+), 43 deletions(-)

Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>

--
Mat Martineau
Intel
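With both fields moved under the data lock, the ack-processing path can update
them with a single lock round trip, roughly along these lines. Only the
snd_una/wnd_end fields and mptcp_data_lock() come from the series; the
function name and exact checks below are illustrative:

/* Sketch: one spin_lock_bh()/spin_unlock_bh() pair now covers both
 * updates, where the old code needed a separate atomic64 operation
 * for each of snd_una and wnd_end.
 */
static void mptcp_update_snd_state(struct sock *sk, u64 snd_una, u64 wnd_end)
{
	struct mptcp_sock *msk = mptcp_sk(sk);

	mptcp_data_lock(sk);
	if (after64(snd_una, msk->snd_una))
		WRITE_ONCE(msk->snd_una, snd_una);
	if (after64(wnd_end, msk->wnd_end))
		WRITE_ONCE(msk->wnd_end, wnd_end);
	mptcp_data_unlock(sk);
}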
On Fri, 27 Nov 2020 11:10:21 +0100 Paolo Abeni wrote:
> The current locking schema used to protect the MPTCP data-path
> requires the usage of the MPTCP workqueue to process the incoming
> data, depending on trylock result.
>
> The above poses scalability limits and introduces random delays
> in MPTCP-level acks.
>
> With this series we use a single spinlock to protect the MPTCP
> data-path, removing the need for workqueue and delayed ack usage.
>
> This additionally reduces the number of atomic operations required
> per packet and cleans-up considerably the poll/wake-up code.

Applied, thanks!