From patchwork Fri Apr 16 22:38:08 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mat Martineau <mathew.j.martineau@linux.intel.com>
X-Patchwork-Id: 423726
From: Mat Martineau <mathew.j.martineau@linux.intel.com>
To: netdev@vger.kernel.org
Cc: Geliang Tang, davem@davemloft.net, kuba@kernel.org,
    matthieu.baerts@tessares.net, mptcp@lists.linux.dev,
    Mat Martineau <mathew.j.martineau@linux.intel.com>
Subject: [PATCH net-next 8/8] mptcp: use mptcp_for_each_subflow in mptcp_close
Date: Fri, 16 Apr 2021 15:38:08 -0700
Message-Id: <20210416223808.298842-9-mathew.j.martineau@linux.intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210416223808.298842-1-mathew.j.martineau@linux.intel.com>
References: <20210416223808.298842-1-mathew.j.martineau@linux.intel.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

From: Geliang Tang

Use the helper macro mptcp_for_each_subflow() instead of an open-coded
list_for_each_entry() walk in mptcp_close().

Signed-off-by: Geliang Tang
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
---
 net/mptcp/protocol.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index e26ea143754d..c14ac2975736 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2611,7 +2611,7 @@ static void mptcp_close(struct sock *sk, long timeout)
 cleanup:
 	/* orphan all the subflows */
 	inet_csk(sk)->icsk_mtup.probe_timestamp = tcp_jiffies32;
-	list_for_each_entry(subflow, &mptcp_sk(sk)->conn_list, node) {
+	mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
 		bool slow = lock_sock_fast(ssk);
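
--
For context: mptcp_for_each_subflow() is an existing helper in
net/mptcp/protocol.h that hides the list_head field name, so this is a
readability-only change with no behavioral difference. A sketch of the
macro as it is defined around this kernel version (quoted for reference,
not part of this patch):

	/* walk every subflow context hanging off an MPTCP socket;
	 * expands to the same list_for_each_entry() over conn_list
	 * that this patch replaces
	 */
	#define mptcp_for_each_subflow(__msk, __subflow)			\
		list_for_each_entry(__subflow, &((__msk)->conn_list), node)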