From patchwork Sun Jun 21 14:09:04 2020
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 217437
From: Tariq Toukan <tariqt@mellanox.com>
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Saeed Mahameed, Boris Pismenny, Tariq Toukan
Subject: [PATCH net] net: Do not clear the socket TX queue in sock_orphan()
Date: Sun, 21 Jun 2020 17:09:04 +0300
Message-Id: <1592748544-41555-1-git-send-email-tariqt@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
X-Mailing-List: netdev@vger.kernel.org

The sock_orphan() call to sk_set_socket() implies clearing the socket's
TX queue mapping. This might cause unexpected out-of-order transmit, as
outstanding packets can pick a different TX queue and bypass the ones
already queued. This is undesired in general. More specifically, it
breaks the in-order scheduling guarantee for device-offloaded TLS
sockets.

Introduce a variant, __sk_set_socket(), that does not clear the TX queue
mapping, and call it from sock_orphan(). All other callers of
sk_set_socket() do not operate on an active socket, so they do not need
this change.

Fixes: e022f0b4a03f ("net: Introduce sk_tx_queue_mapping")
Signed-off-by: Tariq Toukan
Reviewed-by: Boris Pismenny
---
 include/net/sock.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

Please queue for -stable.
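For context, below is a rough sketch of how the cached per-socket TX queue
mapping steers transmit queue selection (roughly what the queue-selection
path in net/core/dev.c does). The helper name pick_tx_queue_sketch() and the
plain-hash fallback are illustrative only; sk_tx_queue_get() and
sk_tx_queue_set() are the real accessors from include/net/sock.h. Once a
queue is cached, later packets from the socket keep using it; clearing the
mapping at orphan time lets still-outstanding packets re-select a queue and
overtake those already queued:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/sock.h>

/* Illustrative only: condensed queue-selection logic, not the upstream code. */
static u16 pick_tx_queue_sketch(struct net_device *dev, struct sk_buff *skb)
{
	struct sock *sk = skb->sk;
	int queue_index = sk_tx_queue_get(sk);	/* -1 when no mapping is cached */

	if (queue_index < 0 || queue_index >= dev->real_num_tx_queues) {
		/* No valid cached queue: pick one (XPS/flow hash in the real
		 * kernel; a plain hash here) and cache it on the socket so
		 * later packets stay on the same queue.
		 */
		queue_index = skb_get_hash(skb) % dev->real_num_tx_queues;
		if (sk)
			sk_tx_queue_set(sk, queue_index);
	}
	return queue_index;
}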
diff --git a/include/net/sock.h b/include/net/sock.h
index c53cc42b5ab9..23e43f3d79f0 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1846,10 +1846,15 @@ static inline int sk_rx_queue_get(const struct sock *sk)
 }
 #endif
 
+static inline void __sk_set_socket(struct sock *sk, struct socket *sock)
+{
+	sk->sk_socket = sock;
+}
+
 static inline void sk_set_socket(struct sock *sk, struct socket *sock)
 {
 	sk_tx_queue_clear(sk);
-	sk->sk_socket = sock;
+	__sk_set_socket(sk, sock);
 }
 
 static inline wait_queue_head_t *sk_sleep(struct sock *sk)
@@ -1868,7 +1873,7 @@ static inline void sock_orphan(struct sock *sk)
 {
 	write_lock_bh(&sk->sk_callback_lock);
 	sock_set_flag(sk, SOCK_DEAD);
-	sk_set_socket(sk, NULL);
+	__sk_set_socket(sk, NULL);
 	sk->sk_wq = NULL;
 	write_unlock_bh(&sk->sk_callback_lock);
 }