From patchwork Mon Jun 21 22:54:36 2021
X-Patchwork-Submitter: Mat Martineau
X-Patchwork-Id: 465599
From: Mat Martineau <mathew.j.martineau@linux.intel.com>
To: netdev@vger.kernel.org
Cc: Paolo Abeni, davem@davemloft.net, kuba@kernel.org,
    matthieu.baerts@tessares.net, mptcp@lists.linux.dev,
    Mat Martineau
Subject: [PATCH net-next 4/6] mptcp: drop redundant test in move_skbs_to_msk()
Date: Mon, 21 Jun 2021 15:54:36 -0700
Message-Id: <20210621225438.10777-5-mathew.j.martineau@linux.intel.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210621225438.10777-1-mathew.j.martineau@linux.intel.com>
References: <20210621225438.10777-1-mathew.j.martineau@linux.intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Paolo Abeni

Currently we check the msk state to avoid enqueuing new skbs at msk
shutdown time. This test is racy - we can't acquire the msk socket
lock at this point - and redundant: the caller has already checked the
subflow field 'disposable', which covers the same scenario in a
race-free manner, since that field is read and updated under the ssk
socket lock.

Signed-off-by: Paolo Abeni
Signed-off-by: Mat Martineau
---
 net/mptcp/protocol.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 3e088e9d20fd..cf75be02eb00 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -686,9 +686,6 @@ static bool move_skbs_to_msk(struct mptcp_sock *msk, struct sock *ssk)
 	struct sock *sk = (struct sock *)msk;
 	unsigned int moved = 0;
 
-	if (inet_sk_state_load(sk) == TCP_CLOSE)
-		return false;
-
 	__mptcp_move_skbs_from_subflow(msk, ssk, &moved);
 	__mptcp_ofo_queue(msk);
 	if (unlikely(ssk->sk_err)) {
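
For readers following along, below is a minimal sketch of the
caller-side check the commit message relies on, modeled on
mptcp_data_ready() as it looked around this series. It is illustrative
only: the function and field names match the mptcp code of this era,
but the body is abbreviated and not taken verbatim from the tree or
from this patch.

/* Sketch (assumption: abbreviated mptcp_data_ready(), circa v5.13).
 * The subflow's 'disposable' flag is written and read under the ssk
 * socket lock, so this check cannot race with subflow disposal.
 */
void mptcp_data_ready(struct sock *sk, struct sock *ssk)
{
	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
	struct mptcp_sock *msk = mptcp_sk(sk);

	/* Once 'disposable' is set, no new skbs may be enqueued to the
	 * msk receive queue, so move_skbs_to_msk() is never reached
	 * for a subflow being torn down.
	 */
	if (unlikely(subflow->disposable))
		return;

	/* ... receive-buffer checks elided ... */

	if (move_skbs_to_msk(msk, ssk))
		sk->sk_data_ready(sk);
}

By contrast, the removed inet_sk_state_load(sk) == TCP_CLOSE test read
the msk state without owning the msk socket lock, so the state could
change underneath it; it added no guarantee beyond what the
'disposable' check already provides.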