From patchwork Mon Aug 31 15:08:31 2020
X-Patchwork-Submitter: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
X-Patchwork-Id: 261729
From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, davem@davemloft.net, Nikolay Aleksandrov
Subject: [PATCH net-next 01/15] net: bridge: mdb: arrange internal structs so fast-path fields are close
Date: Mon, 31 Aug 2020 18:08:31 +0300
Message-Id: <20200831150845.1062447-2-nikolay@cumulusnetworks.com>
In-Reply-To: <20200831150845.1062447-1-nikolay@cumulusnetworks.com>
X-Mailing-List: netdev@vger.kernel.org

Before this patch the fast path needed two cache lines; now all of the
fields it uses are in the first cache line.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
---
 net/bridge/br_private.h | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
index baa1500f384f..357b6905ecef 100644
--- a/net/bridge/br_private.h
+++ b/net/bridge/br_private.h
@@ -217,23 +217,27 @@ struct net_bridge_fdb_entry {
 struct net_bridge_port_group {
 	struct net_bridge_port		*port;
 	struct net_bridge_port_group	__rcu *next;
-	struct hlist_node		mglist;
-	struct rcu_head			rcu;
-	struct timer_list		timer;
 	struct br_ip			addr;
 	unsigned char			eth_addr[ETH_ALEN] __aligned(2);
 	unsigned char			flags;
+
+	struct timer_list		timer;
+	struct hlist_node		mglist;
+
+	struct rcu_head			rcu;
 };
 
 struct net_bridge_mdb_entry {
 	struct rhash_head		rhnode;
 	struct net_bridge		*br;
 	struct net_bridge_port_group __rcu *ports;
-	struct rcu_head			rcu;
-	struct timer_list		timer;
 	struct br_ip			addr;
 	bool				host_joined;
+
+	struct timer_list		timer;
 	struct hlist_node		mdb_node;
+
+	struct rcu_head			rcu;
 };
 
 struct net_bridge_port {
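The reordering above is the usual "hot fields first" layout trick: fields read
on every forwarded packet (port, next, addr, eth_addr, flags) stay in the
first 64-byte cache line, while management-only fields (timers, list nodes,
RCU head) are pushed behind them. Purely as an illustration (not part of the
patch, and using a stand-in struct rather than the real
net_bridge_port_group), the property can be checked at build time with
offsetof():

/* Illustration only: stand-in struct; 64 assumes a 64-byte cache line. */
#include <stddef.h>
#include <stdint.h>

struct example_group {
	/* hot: touched on every forwarded packet */
	void		*port;
	void		*next;
	uint8_t		addr[20];
	uint8_t		eth_addr[6];
	uint8_t		flags;
	/* cold: slow path / teardown only */
	uint8_t		timer[40];
	uint8_t		list_node[16];
	uint8_t		rcu[16];
};

_Static_assert(offsetof(struct example_group, flags) + sizeof(uint8_t) <= 64,
	       "fast-path fields must fit in the first cache line");

In the kernel tree itself, "pahole -C net_bridge_port_group" on a built
vmlinux shows the resulting offsets and holes directly.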
From patchwork Mon Aug 31 15:08:36 2020
X-Patchwork-Submitter: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
X-Patchwork-Id: 261736
From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, davem@davemloft.net, Nikolay Aleksandrov
Subject: [PATCH net-next 06/15] net: bridge: mcast: add support for group query retransmit
Date: Mon, 31 Aug 2020 18:08:36 +0300
Message-Id: <20200831150845.1062447-7-nikolay@cumulusnetworks.com>
In-Reply-To: <20200831150845.1062447-1-nikolay@cumulusnetworks.com>

We need to be able to retransmit group-specific and group-and-source-specific
queries. The new timer takes care of those.
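The retransmit machinery added below rides on the standard kernel timer
pattern: a timer_list embedded in the port group, a callback registered with
timer_setup(), the containing object recovered with from_timer(), and
re-arming via mod_timer() while there is still something left to send. A
stripped-down sketch of that pattern (hypothetical names, not the patch's
actual code):

/* Sketch only: my_group/my_group_rexmit are stand-in names. */
#include <linux/timer.h>
#include <linux/jiffies.h>

struct my_group {
	struct timer_list rexmit_timer;
	unsigned char     rexmit_cnt;
};

static void my_group_rexmit(struct timer_list *t)
{
	/* from_timer() maps the expired timer back to its container */
	struct my_group *grp = from_timer(grp, t, rexmit_timer);

	if (!grp->rexmit_cnt)
		return;
	grp->rexmit_cnt--;
	/* ... send the group / group-and-source specific query here ... */
	if (grp->rexmit_cnt)
		/* re-arm while retransmissions remain */
		mod_timer(&grp->rexmit_timer, jiffies + HZ);
}

static void my_group_init(struct my_group *grp)
{
	timer_setup(&grp->rexmit_timer, my_group_rexmit, 0);
}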
Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_multicast.c | 65 ++++++++++++++++++++++++++++++++++----- net/bridge/br_private.h | 8 +++++ 2 files changed, 65 insertions(+), 8 deletions(-) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index fc9f0584edf2..0f47882efdef 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -50,6 +50,7 @@ static void br_ip4_multicast_leave_group(struct net_bridge *br, __be32 group, __u16 vid, const unsigned char *src); +static void br_multicast_port_group_rexmit(struct timer_list *t); static void __del_port_router(struct net_bridge_port *p); #if IS_ENABLED(CONFIG_IPV6) @@ -184,6 +185,7 @@ void br_multicast_del_pg(struct net_bridge_mdb_entry *mp, rcu_assign_pointer(*pp, pg->next); hlist_del_init(&pg->mglist); del_timer(&pg->timer); + del_timer(&pg->rexmit_timer); hlist_for_each_entry_safe(ent, tmp, &pg->src_list, node) br_multicast_del_group_src(ent); br_mdb_notify(br->dev, pg->port, &pg->addr, RTM_DELMDB, pg->flags); @@ -237,7 +239,8 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, struct net_bridge_port_group *pg, __be32 ip_dst, __be32 group, bool with_srcs, bool over_lmqt, - u8 sflag, u8 *igmp_type) + u8 sflag, u8 *igmp_type, + bool *need_rexmit) { struct net_bridge_port *p = pg ? pg->port : NULL; struct net_bridge_group_src *ent; @@ -352,6 +355,8 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, ent->src_query_rexmit_cnt > 0) { ihv3->srcs[lmqt_srcs++] = ent->addr.u.ip4; ent->src_query_rexmit_cnt--; + if (need_rexmit && ent->src_query_rexmit_cnt) + *need_rexmit = true; } } if (WARN_ON(lmqt_srcs != ntohs(ihv3->nsrcs))) { @@ -493,7 +498,8 @@ static struct sk_buff *br_multicast_alloc_query(struct net_bridge *br, struct br_ip *ip_dst, struct br_ip *group, bool with_srcs, bool over_lmqt, - u8 sflag, u8 *igmp_type) + u8 sflag, u8 *igmp_type, + bool *need_rexmit) { __be32 ip4_dst; @@ -503,7 +509,8 @@ static struct sk_buff *br_multicast_alloc_query(struct net_bridge *br, return br_ip4_multicast_alloc_query(br, pg, ip4_dst, group->u.ip4, with_srcs, over_lmqt, - sflag, igmp_type); + sflag, igmp_type, + need_rexmit); #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): return br_ip6_multicast_alloc_query(br, &group->u.ip6, @@ -647,8 +654,9 @@ struct net_bridge_port_group *br_multicast_new_port_group( p->filter_mode = MCAST_EXCLUDE; INIT_HLIST_HEAD(&p->src_list); rcu_assign_pointer(p->next, next); - hlist_add_head(&p->mglist, &port->mglist); timer_setup(&p->timer, br_multicast_port_group_expired, 0); + timer_setup(&p->rexmit_timer, br_multicast_port_group_rexmit, 0); + hlist_add_head(&p->mglist, &port->mglist); if (src) memcpy(p->eth_addr, src, ETH_ALEN); @@ -875,7 +883,8 @@ static void __br_multicast_send_query(struct net_bridge *br, struct br_ip *ip_dst, struct br_ip *group, bool with_srcs, - u8 sflag) + u8 sflag, + bool *need_rexmit) { bool over_lmqt = !!sflag; struct sk_buff *skb; @@ -883,7 +892,8 @@ static void __br_multicast_send_query(struct net_bridge *br, again_under_lmqt: skb = br_multicast_alloc_query(br, pg, ip_dst, group, with_srcs, - over_lmqt, sflag, &igmp_type); + over_lmqt, sflag, &igmp_type, + need_rexmit); if (!skb) return; @@ -936,7 +946,8 @@ static void br_multicast_send_query(struct net_bridge *br, if (!other_query || timer_pending(&other_query->timer)) return; - __br_multicast_send_query(br, port, NULL, NULL, &br_group, false, 0); + __br_multicast_send_query(br, port, NULL, NULL, &br_group, false, 0, + NULL); time = jiffies; time += 
own_query->startup_sent < br->multicast_startup_query_count ? @@ -981,6 +992,44 @@ static void br_ip6_multicast_port_query_expired(struct timer_list *t) } #endif +static void br_multicast_port_group_rexmit(struct timer_list *t) +{ + struct net_bridge_port_group *pg = from_timer(pg, t, rexmit_timer); + struct bridge_mcast_other_query *other_query = NULL; + struct net_bridge *br = pg->port->br; + bool need_rexmit = false; + + spin_lock(&br->multicast_lock); + if (!netif_running(br->dev) || hlist_unhashed(&pg->mglist) || + !br_opt_get(br, BROPT_MULTICAST_ENABLED) || + !br_opt_get(br, BROPT_MULTICAST_QUERIER)) + goto out; + + if (pg->addr.proto == htons(ETH_P_IP)) + other_query = &br->ip4_other_query; +#if IS_ENABLED(CONFIG_IPV6) + else + other_query = &br->ip6_other_query; +#endif + + if (!other_query || timer_pending(&other_query->timer)) + goto out; + + if (pg->grp_query_rexmit_cnt) { + pg->grp_query_rexmit_cnt--; + __br_multicast_send_query(br, pg->port, pg, &pg->addr, + &pg->addr, false, 1, NULL); + } + __br_multicast_send_query(br, pg->port, pg, &pg->addr, + &pg->addr, true, 0, &need_rexmit); + + if (pg->grp_query_rexmit_cnt || need_rexmit) + mod_timer(&pg->rexmit_timer, jiffies + + br->multicast_last_member_interval); +out: + spin_unlock(&br->multicast_lock); +} + static void br_mc_disabled_update(struct net_device *dev, bool value) { struct switchdev_attr attr = { @@ -1589,7 +1638,7 @@ br_multicast_leave_group(struct net_bridge *br, if (br_opt_get(br, BROPT_MULTICAST_QUERIER)) { __br_multicast_send_query(br, port, NULL, NULL, &mp->addr, - false, 0); + false, 0, NULL); time = jiffies + br->multicast_last_member_count * br->multicast_last_member_interval; diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index a82d0230f552..86fe45146a44 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -239,10 +239,12 @@ struct net_bridge_port_group { unsigned char eth_addr[ETH_ALEN] __aligned(2); unsigned char flags; unsigned char filter_mode; + unsigned char grp_query_rexmit_cnt; struct hlist_head src_list; unsigned int src_ents; struct timer_list timer; + struct timer_list rexmit_timer; struct hlist_node mglist; struct rcu_head rcu; @@ -866,6 +868,12 @@ static inline int br_multicast_igmp_type(const struct sk_buff *skb) { return BR_INPUT_SKB_CB(skb)->igmp; } + +static inline unsigned long br_multicast_lmqt(const struct net_bridge *br) +{ + return br->multicast_last_member_interval * + br->multicast_last_member_count; +} #else static inline int br_multicast_rcv(struct net_bridge *br, struct net_bridge_port *port, From patchwork Mon Aug 31 15:08:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 261734 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 913A4C433E2 for ; Mon, 31 Aug 2020 15:11:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 67738207EA for ; Mon, 31 Aug 2020 15:11:23 +0000 (UTC) Authentication-Results: 
From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, davem@davemloft.net, Nikolay Aleksandrov
Subject: [PATCH net-next 07/15] net: bridge: mdb: push notifications in __br_mdb_add/del
Date: Mon, 31 Aug 2020 18:08:37 +0300
Message-Id: <20200831150845.1062447-8-nikolay@cumulusnetworks.com>
In-Reply-To: <20200831150845.1062447-1-nikolay@cumulusnetworks.com>

This change is in preparation for using the mdb port group entries when
sending a notification, so their full state and additional attributes can
be filled in.
Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_mdb.c | 20 ++++++++------------ 1 file changed, 8 insertions(+), 12 deletions(-) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 07cb07cd3691..a3ebc2d3b8f6 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -662,7 +662,7 @@ static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh, } static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, - struct br_ip *group, unsigned char state) + struct br_ip *group, struct br_mdb_entry *entry) { struct net_bridge_mdb_entry *mp; struct net_bridge_port_group *p; @@ -681,12 +681,13 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, /* host join */ if (!port) { /* don't allow any flags for host-joined groups */ - if (state) + if (entry->state) return -EINVAL; if (mp->host_joined) return -EEXIST; br_multicast_host_join(mp, false); + __br_mdb_notify(br->dev, NULL, entry, RTM_NEWMDB); return 0; } @@ -700,13 +701,14 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, break; } - p = br_multicast_new_port_group(port, group, *pp, state, NULL); + p = br_multicast_new_port_group(port, group, *pp, entry->state, NULL); if (unlikely(!p)) return -ENOMEM; p->filter_mode = MCAST_EXCLUDE; rcu_assign_pointer(*pp, p); - if (state == MDB_TEMPORARY) + if (entry->state == MDB_TEMPORARY) mod_timer(&p->timer, now + br->multicast_membership_interval); + __br_mdb_notify(br->dev, port, entry, RTM_NEWMDB); return 0; } @@ -735,7 +737,7 @@ static int __br_mdb_add(struct net *net, struct net_bridge *br, __mdb_entry_to_br_ip(entry, &ip); spin_lock_bh(&br->multicast_lock); - ret = br_mdb_add_group(br, p, &ip, entry->state); + ret = br_mdb_add_group(br, p, &ip, entry); spin_unlock_bh(&br->multicast_lock); return ret; } @@ -780,12 +782,9 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, err = __br_mdb_add(net, br, entry); if (err) break; - __br_mdb_notify(dev, p, entry, RTM_NEWMDB); } } else { err = __br_mdb_add(net, br, entry); - if (!err) - __br_mdb_notify(dev, p, entry, RTM_NEWMDB); } return err; @@ -813,6 +812,7 @@ static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry) if (entry->ifindex == mp->br->dev->ifindex && mp->host_joined) { br_multicast_host_leave(mp, false); err = 0; + __br_mdb_notify(br->dev, NULL, entry, RTM_DELMDB); if (!mp->ports && netif_running(br->dev)) mod_timer(&mp->timer, jiffies); goto unlock; @@ -875,13 +875,9 @@ static int br_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh, list_for_each_entry(v, &vg->vlan_list, vlist) { entry->vid = v->vid; err = __br_mdb_del(br, entry); - if (!err) - __br_mdb_notify(dev, p, entry, RTM_DELMDB); } } else { err = __br_mdb_del(br, entry); - if (!err) - __br_mdb_notify(dev, p, entry, RTM_DELMDB); } return err; From patchwork Mon Aug 31 15:08:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 261735 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, davem@davemloft.net, Nikolay Aleksandrov
Subject: [PATCH net-next 08/15] net: bridge: mdb: use mdb and port entries in notifications
Date: Mon, 31 Aug 2020 18:08:38 +0300
Message-Id: <20200831150845.1062447-9-nikolay@cumulusnetworks.com>
In-Reply-To: <20200831150845.1062447-1-nikolay@cumulusnetworks.com>

We have to use mdb and port entries when sending mdb notifications in order
to fill in all group attributes properly. Before this change we would've
used a fake br_mdb_entry struct to fill in only partial information about
the mdb.
Now we can also reuse the mdb dump fill function and thus have only a single central place which fills the mdb attributes. Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_mdb.c | 131 ++++++++++++++++++++------------------ net/bridge/br_multicast.c | 10 +-- net/bridge/br_private.h | 4 +- 3 files changed, 77 insertions(+), 68 deletions(-) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index a3ebc2d3b8f6..bec0b986423f 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -325,14 +325,15 @@ static int br_mdb_dump(struct sk_buff *skb, struct netlink_callback *cb) static int nlmsg_populate_mdb_fill(struct sk_buff *skb, struct net_device *dev, - struct br_mdb_entry *entry, u32 pid, - u32 seq, int type, unsigned int flags) + struct net_bridge_mdb_entry *mp, + struct net_bridge_port_group *pg, + int type) { struct nlmsghdr *nlh; struct br_port_msg *bpm; struct nlattr *nest, *nest2; - nlh = nlmsg_put(skb, pid, seq, type, sizeof(*bpm), 0); + nlh = nlmsg_put(skb, 0, 0, type, sizeof(*bpm), 0); if (!nlh) return -EMSGSIZE; @@ -347,7 +348,7 @@ static int nlmsg_populate_mdb_fill(struct sk_buff *skb, if (nest2 == NULL) goto end; - if (nla_put(skb, MDBA_MDB_ENTRY_INFO, sizeof(*entry), entry)) + if (__mdb_fill_info(skb, mp, pg)) goto end; nla_nest_end(skb, nest2); @@ -362,10 +363,34 @@ static int nlmsg_populate_mdb_fill(struct sk_buff *skb, return -EMSGSIZE; } -static inline size_t rtnl_mdb_nlmsg_size(void) +static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg) { - return NLMSG_ALIGN(sizeof(struct br_port_msg)) - + nla_total_size(sizeof(struct br_mdb_entry)); + size_t nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) + + nla_total_size(sizeof(struct br_mdb_entry)) + + nla_total_size(sizeof(u32)); + + if (pg && pg->port->br->multicast_igmp_version == 3 && + pg->addr.proto == htons(ETH_P_IP)) { + struct net_bridge_group_src *ent; + + /* MDBA_MDB_EATTR_GROUP_MODE */ + nlmsg_size += nla_total_size(sizeof(u8)); + + /* MDBA_MDB_EATTR_SRC_LIST nested attr */ + if (!hlist_empty(&pg->src_list)) + nlmsg_size += nla_total_size(0); + + hlist_for_each_entry(ent, &pg->src_list, node) { + /* MDBA_MDB_SRCLIST_ENTRY nested attr + + * MDBA_MDB_SRCATTR_ADDRESS + MDBA_MDB_SRCATTR_TIMER + */ + nlmsg_size += nla_total_size(0) + + nla_total_size(sizeof(__be32)) + + nla_total_size(sizeof(u32)); + } + } + + return nlmsg_size; } struct br_mdb_complete_info { @@ -403,21 +428,22 @@ static void br_mdb_complete(struct net_device *dev, int err, void *priv) static void br_mdb_switchdev_host_port(struct net_device *dev, struct net_device *lower_dev, - struct br_mdb_entry *entry, int type) + struct net_bridge_mdb_entry *mp, + int type) { struct switchdev_obj_port_mdb mdb = { .obj = { .id = SWITCHDEV_OBJ_ID_HOST_MDB, .flags = SWITCHDEV_F_DEFER, }, - .vid = entry->vid, + .vid = mp->addr.vid, }; - if (entry->addr.proto == htons(ETH_P_IP)) - ip_eth_mc_map(entry->addr.u.ip4, mdb.addr); + if (mp->addr.proto == htons(ETH_P_IP)) + ip_eth_mc_map(mp->addr.u.ip4, mdb.addr); #if IS_ENABLED(CONFIG_IPV6) else - ipv6_eth_mc_map(&entry->addr.u.ip6, mdb.addr); + ipv6_eth_mc_map(&mp->addr.u.ip6, mdb.addr); #endif mdb.obj.orig_dev = dev; @@ -432,17 +458,19 @@ static void br_mdb_switchdev_host_port(struct net_device *dev, } static void br_mdb_switchdev_host(struct net_device *dev, - struct br_mdb_entry *entry, int type) + struct net_bridge_mdb_entry *mp, int type) { struct net_device *lower_dev; struct list_head *iter; netdev_for_each_lower_dev(dev, lower_dev, iter) - br_mdb_switchdev_host_port(dev, lower_dev, entry, type); 
+ br_mdb_switchdev_host_port(dev, lower_dev, mp, type); } -static void __br_mdb_notify(struct net_device *dev, struct net_bridge_port *p, - struct br_mdb_entry *entry, int type) +void br_mdb_notify(struct net_device *dev, + struct net_bridge_mdb_entry *mp, + struct net_bridge_port_group *pg, + int type) { struct br_mdb_complete_info *complete_info; struct switchdev_obj_port_mdb mdb = { @@ -450,44 +478,45 @@ static void __br_mdb_notify(struct net_device *dev, struct net_bridge_port *p, .id = SWITCHDEV_OBJ_ID_PORT_MDB, .flags = SWITCHDEV_F_DEFER, }, - .vid = entry->vid, + .vid = mp->addr.vid, }; - struct net_device *port_dev; struct net *net = dev_net(dev); struct sk_buff *skb; int err = -ENOBUFS; - port_dev = __dev_get_by_index(net, entry->ifindex); - if (entry->addr.proto == htons(ETH_P_IP)) - ip_eth_mc_map(entry->addr.u.ip4, mdb.addr); + if (pg) { + if (mp->addr.proto == htons(ETH_P_IP)) + ip_eth_mc_map(mp->addr.u.ip4, mdb.addr); #if IS_ENABLED(CONFIG_IPV6) - else - ipv6_eth_mc_map(&entry->addr.u.ip6, mdb.addr); + else + ipv6_eth_mc_map(&mp->addr.u.ip6, mdb.addr); #endif - - mdb.obj.orig_dev = port_dev; - if (p && port_dev && type == RTM_NEWMDB) { - complete_info = kmalloc(sizeof(*complete_info), GFP_ATOMIC); - if (complete_info) { - complete_info->port = p; - __mdb_entry_to_br_ip(entry, &complete_info->ip); + mdb.obj.orig_dev = pg->port->dev; + switch (type) { + case RTM_NEWMDB: + complete_info = kmalloc(sizeof(*complete_info), GFP_ATOMIC); + if (!complete_info) + break; + complete_info->port = pg->port; + complete_info->ip = mp->addr; mdb.obj.complete_priv = complete_info; mdb.obj.complete = br_mdb_complete; - if (switchdev_port_obj_add(port_dev, &mdb.obj, NULL)) + if (switchdev_port_obj_add(pg->port->dev, &mdb.obj, NULL)) kfree(complete_info); + break; + case RTM_DELMDB: + switchdev_port_obj_del(pg->port->dev, &mdb.obj); + break; } - } else if (p && port_dev && type == RTM_DELMDB) { - switchdev_port_obj_del(port_dev, &mdb.obj); + } else { + br_mdb_switchdev_host(dev, mp, type); } - if (!p) - br_mdb_switchdev_host(dev, entry, type); - - skb = nlmsg_new(rtnl_mdb_nlmsg_size(), GFP_ATOMIC); + skb = nlmsg_new(rtnl_mdb_nlmsg_size(pg), GFP_ATOMIC); if (!skb) goto errout; - err = nlmsg_populate_mdb_fill(skb, dev, entry, 0, 0, type, NTF_SELF); + err = nlmsg_populate_mdb_fill(skb, dev, mp, pg, type); if (err < 0) { kfree_skb(skb); goto errout; @@ -499,26 +528,6 @@ static void __br_mdb_notify(struct net_device *dev, struct net_bridge_port *p, rtnl_set_sk_err(net, RTNLGRP_MDB, err); } -void br_mdb_notify(struct net_device *dev, struct net_bridge_port *port, - struct br_ip *group, int type, u8 flags) -{ - struct br_mdb_entry entry; - - memset(&entry, 0, sizeof(entry)); - if (port) - entry.ifindex = port->dev->ifindex; - else - entry.ifindex = dev->ifindex; - entry.addr.proto = group->proto; - entry.addr.u.ip4 = group->u.ip4; -#if IS_ENABLED(CONFIG_IPV6) - entry.addr.u.ip6 = group->u.ip6; -#endif - entry.vid = group->vid; - __mdb_entry_fill_flags(&entry, flags); - __br_mdb_notify(dev, port, &entry, type); -} - static int nlmsg_populate_rtr_fill(struct sk_buff *skb, struct net_device *dev, int ifindex, u32 pid, @@ -687,7 +696,7 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, return -EEXIST; br_multicast_host_join(mp, false); - __br_mdb_notify(br->dev, NULL, entry, RTM_NEWMDB); + br_mdb_notify(br->dev, mp, NULL, RTM_NEWMDB); return 0; } @@ -708,7 +717,7 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, rcu_assign_pointer(*pp, p); if 
(entry->state == MDB_TEMPORARY) mod_timer(&p->timer, now + br->multicast_membership_interval); - __br_mdb_notify(br->dev, port, entry, RTM_NEWMDB); + br_mdb_notify(br->dev, mp, p, RTM_NEWMDB); return 0; } @@ -812,7 +821,7 @@ static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry) if (entry->ifindex == mp->br->dev->ifindex && mp->host_joined) { br_multicast_host_leave(mp, false); err = 0; - __br_mdb_notify(br->dev, NULL, entry, RTM_DELMDB); + br_mdb_notify(br->dev, mp, NULL, RTM_DELMDB); if (!mp->ports && netif_running(br->dev)) mod_timer(&mp->timer, jiffies); goto unlock; diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 0f47882efdef..cdd732c91d1f 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -188,7 +188,7 @@ void br_multicast_del_pg(struct net_bridge_mdb_entry *mp, del_timer(&pg->rexmit_timer); hlist_for_each_entry_safe(ent, tmp, &pg->src_list, node) br_multicast_del_group_src(ent); - br_mdb_notify(br->dev, pg->port, &pg->addr, RTM_DELMDB, pg->flags); + br_mdb_notify(br->dev, mp, pg, RTM_DELMDB); kfree_rcu(pg, rcu); if (!mp->ports && !mp->host_joined && netif_running(br->dev)) @@ -684,8 +684,7 @@ void br_multicast_host_join(struct net_bridge_mdb_entry *mp, bool notify) if (!mp->host_joined) { mp->host_joined = true; if (notify) - br_mdb_notify(mp->br->dev, NULL, &mp->addr, - RTM_NEWMDB, 0); + br_mdb_notify(mp->br->dev, mp, NULL, RTM_NEWMDB); } mod_timer(&mp->timer, jiffies + mp->br->multicast_membership_interval); } @@ -697,7 +696,7 @@ void br_multicast_host_leave(struct net_bridge_mdb_entry *mp, bool notify) mp->host_joined = false; if (notify) - br_mdb_notify(mp->br->dev, NULL, &mp->addr, RTM_DELMDB, 0); + br_mdb_notify(mp->br->dev, mp, NULL, RTM_DELMDB); } static int br_multicast_add_group(struct net_bridge *br, @@ -739,10 +738,11 @@ static int br_multicast_add_group(struct net_bridge *br, if (unlikely(!p)) goto err; rcu_assign_pointer(*pp, p); - br_mdb_notify(br->dev, port, group, RTM_NEWMDB, 0); + br_mdb_notify(br->dev, mp, p, RTM_NEWMDB); found: mod_timer(&p->timer, now + br->multicast_membership_interval); + out: err = 0; diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 86fe45146a44..f514b45b2963 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -798,8 +798,8 @@ br_multicast_new_port_group(struct net_bridge_port *port, struct br_ip *group, unsigned char flags, const unsigned char *src); int br_mdb_hash_init(struct net_bridge *br); void br_mdb_hash_fini(struct net_bridge *br); -void br_mdb_notify(struct net_device *dev, struct net_bridge_port *port, - struct br_ip *group, int type, u8 flags); +void br_mdb_notify(struct net_device *dev, struct net_bridge_mdb_entry *mp, + struct net_bridge_port_group *pg, int type); void br_rtr_notify(struct net_device *dev, struct net_bridge_port *port, int type); void br_multicast_del_pg(struct net_bridge_mdb_entry *mp, From patchwork Mon Aug 31 15:08:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 261733 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from 
From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, davem@davemloft.net, Nikolay Aleksandrov
Subject: [PATCH net-next 09/15] net: bridge: mcast: delete expired port groups without srcs
Date: Mon, 31 Aug 2020 18:08:39 +0300
Message-Id: <20200831150845.1062447-10-nikolay@cumulusnetworks.com>
In-Reply-To: <20200831150845.1062447-1-nikolay@cumulusnetworks.com>

If an expired port group is in EXCLUDE mode, we have to switch it to INCLUDE
mode, remove all sources whose timers have expired, and finally remove the
group itself if no sources with a running timer remain. For IGMPv2 there are
no sources, so this reduces to just removing the group, as before.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
---
 net/bridge/br_multicast.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index cdd732c91d1f..1dc0964ea3b5 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -222,15 +222,34 @@ static void br_multicast_find_del_pg(struct net_bridge *br,
 static void br_multicast_port_group_expired(struct timer_list *t)
 {
 	struct net_bridge_port_group *pg = from_timer(pg, t, timer);
+	struct net_bridge_group_src *src_ent;
 	struct net_bridge *br = pg->port->br;
+	struct hlist_node *tmp;
+	bool changed;
 
 	spin_lock(&br->multicast_lock);
 	if (!netif_running(br->dev) || timer_pending(&pg->timer) ||
 	    hlist_unhashed(&pg->mglist) || pg->flags & MDB_PG_FLAGS_PERMANENT)
 		goto out;
 
-	br_multicast_find_del_pg(br, pg);
+	changed = !!(pg->filter_mode == MCAST_EXCLUDE);
+	pg->filter_mode = MCAST_INCLUDE;
+	hlist_for_each_entry_safe(src_ent, tmp, &pg->src_list, node) {
+		if (!timer_pending(&src_ent->timer)) {
+			br_multicast_del_group_src(src_ent);
+			changed = true;
+		}
+	}
+	if (hlist_empty(&pg->src_list)) {
+		br_multicast_find_del_pg(br, pg);
+	} else if (changed) {
+		struct net_bridge_mdb_entry *mp = br_mdb_ip_get(br, &pg->addr);
+
+		if (WARN_ON(!mp))
+			goto out;
+		br_mdb_notify(br->dev, mp, pg, RTM_NEWMDB);
+	}
 
 out:
 	spin_unlock(&br->multicast_lock);
 }
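A side note on the hunk above (explanatory only, not part of the patch): the
source list walk uses hlist_for_each_entry_safe() precisely because entries
may be freed during the walk; the extra 'tmp' node caches the next pointer
before the loop body runs. The generic shape of that pattern, with
hypothetical types, is:

/* Generic sketch of delete-while-iterating on an hlist; the struct and
 * function names are hypothetical.
 */
#include <linux/list.h>
#include <linux/slab.h>

struct src_entry {
	struct hlist_node node;
	bool expired;
};

static void prune_expired(struct hlist_head *head)
{
	struct src_entry *ent;
	struct hlist_node *tmp;

	/* tmp keeps the successor alive across hlist_del() + kfree() */
	hlist_for_each_entry_safe(ent, tmp, head, node) {
		if (ent->expired) {
			hlist_del(&ent->node);
			kfree(ent);
		}
	}
}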
From patchwork Mon Aug 31 15:08:42 2020
X-Patchwork-Submitter: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
X-Patchwork-Id: 261730
From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, davem@davemloft.net, Nikolay Aleksandrov
Subject: [PATCH net-next 12/15] net: bridge: mcast: support for IGMPV3_CHANGE_TO_INCLUDE/EXCLUDE report
Date: Mon, 31 Aug 2020 18:08:42 +0300
Message-Id: <20200831150845.1062447-13-nikolay@cumulusnetworks.com>
In-Reply-To: <20200831150845.1062447-1-nikolay@cumulusnetworks.com>

In order to process IGMPV3_CHANGE_TO_INCLUDE/EXCLUDE report types we need new
helpers which allow us to mark entries based on their timer state and to
query only marked entries.
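For context (not part of the patch): the TO_IN/TO_EX record types arrive in
IGMPv3 membership reports as struct igmpv3_grec records (see
include/uapi/linux/igmp.h). A minimal, illustrative loop over a report's
group records, dispatching on the record type, could look like the sketch
below; handle_to_in()/handle_to_ex() are hypothetical stubs, and this is not
the bridge's actual parser.

/* Illustration only: walking IGMPv3 group records; handlers are stubs. */
#include <linux/igmp.h>
#include <linux/types.h>

static void handle_to_in(__be32 group, const __be32 *srcs, u16 nsrcs) { }
static void handle_to_ex(__be32 group, const __be32 *srcs, u16 nsrcs) { }

static void walk_igmpv3_report(const struct igmpv3_report *rep)
{
	const struct igmpv3_grec *grec = rep->grec;
	u16 i, ngrec = ntohs(rep->ngrec);

	for (i = 0; i < ngrec; i++) {
		u16 nsrcs = ntohs(grec->grec_nsrcs);

		switch (grec->grec_type) {
		case IGMPV3_CHANGE_TO_INCLUDE:
			/* TO_IN(B): group heads toward INCLUDE mode */
			handle_to_in(grec->grec_mca, grec->grec_src, nsrcs);
			break;
		case IGMPV3_CHANGE_TO_EXCLUDE:
			/* TO_EX(B): group switches to EXCLUDE mode */
			handle_to_ex(grec->grec_mca, grec->grec_src, nsrcs);
			break;
		}
		/* records are variable length: the source list and aux data
		 * (grec_auxwords 32-bit words) follow the fixed header
		 */
		grec = (const struct igmpv3_grec *)((const u8 *)(grec + 1) +
						    nsrcs * sizeof(__be32) +
						    grec->grec_auxwords * 4);
	}
}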
Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_multicast.c | 273 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 273 insertions(+) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 2ba43d497515..2777e3eb07b9 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -1172,6 +1172,17 @@ static void __grp_src_modify_flags_all(struct net_bridge_port_group *pg, __grp_src_modify_flags(ent, set_flags, clear_flags); } +static void __grp_src_modify_flags_timer(struct net_bridge_port_group *pg, + u8 set_flags, u8 clear_flags, + bool pending) +{ + struct net_bridge_group_src *ent; + + hlist_for_each_entry(ent, &pg->src_list, node) + if (timer_pending(&ent->timer) == pending) + __grp_src_modify_flags(ent, set_flags, clear_flags); +} + static int __grp_src_delete_marked(struct net_bridge_port_group *pg) { struct net_bridge_group_src *ent; @@ -1187,6 +1198,61 @@ static int __grp_src_delete_marked(struct net_bridge_port_group *pg) return deleted; } +static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) +{ + struct net_bridge *br = pg->port->br; + u32 lmqc = br->multicast_last_member_count; + unsigned long lmqt, lmi, now = jiffies; + struct net_bridge_group_src *ent; + + lmqt = now + br_multicast_lmqt(br); + hlist_for_each_entry(ent, &pg->src_list, node) { + if (ent->flags & BR_SGRP_F_SEND) { + __grp_src_modify_flags(ent, 0, BR_SGRP_F_SEND); + if (ent->timer.expires > lmqt) { + if (br_opt_get(br, BROPT_MULTICAST_QUERIER) && + !timer_pending(&br->ip4_other_query.timer)) + ent->src_query_rexmit_cnt = lmqc; + mod_timer(&ent->timer, lmqt); + } + } + } + + if (!br_opt_get(br, BROPT_MULTICAST_QUERIER) || + timer_pending(&br->ip4_other_query.timer)) + return; + + __br_multicast_send_query(br, pg->port, pg, &pg->addr, + &pg->addr, true, 1, NULL); + + lmi = now + br->multicast_last_member_interval; + if (!timer_pending(&pg->rexmit_timer) || + time_after(pg->rexmit_timer.expires, lmi)) + mod_timer(&pg->rexmit_timer, lmi); +} + +static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) +{ + struct net_bridge *br = pg->port->br; + unsigned long now = jiffies, lmi; + + if (br_opt_get(br, BROPT_MULTICAST_QUERIER) && + timer_pending(&br->ip4_other_query.timer)) { + lmi = now + br->multicast_last_member_interval; + pg->grp_query_rexmit_cnt = br->multicast_last_member_count - 1; + __br_multicast_send_query(br, pg->port, pg, &pg->addr, + &pg->addr, false, 0, NULL); + if (!timer_pending(&pg->rexmit_timer) || + time_after(pg->rexmit_timer.expires, lmi)) + mod_timer(&pg->rexmit_timer, lmi); + } + + if (pg->filter_mode == MCAST_EXCLUDE && + (!timer_pending(&pg->timer) || + time_after(pg->timer.expires, now + br_multicast_lmqt(br)))) + mod_timer(&pg->timer, now + br_multicast_lmqt(br)); +} + /* State Msg type New state Actions * INCLUDE (A) IS_IN (B) INCLUDE (A+B) (B)=GMI * INCLUDE (A) ALLOW (B) INCLUDE (A+B) (B)=GMI @@ -1311,6 +1377,207 @@ static bool br_multicast_isexc(struct net_bridge_port_group *pg, return changed; } +/* State Msg type New state Actions + * INCLUDE (A) TO_IN (B) INCLUDE (A+B) (B)=GMI + * Send Q(G,A-B) + */ +static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, + __be32 *srcs, u32 nsrcs) +{ + struct net_bridge *br = pg->port->br; + u32 src_idx, to_send = pg->src_ents; + struct net_bridge_group_src *ent; + unsigned long now = jiffies; + bool changed = false; + struct br_ip src_ip; + + __grp_src_modify_flags_all(pg, BR_SGRP_F_SEND, 0); + + memset(&src_ip, 0, sizeof(src_ip)); + src_ip.proto = 
htons(ETH_P_IP); + for (src_idx = 0; src_idx < nsrcs; src_idx++) { + src_ip.u.ip4 = srcs[src_idx]; + ent = br_multicast_find_group_src(pg, &src_ip); + if (ent) { + __grp_src_modify_flags(ent, 0, BR_SGRP_F_SEND); + to_send--; + } else { + ent = br_multicast_new_group_src(pg, &src_ip); + if (ent) + changed = true; + } + if (ent) + mod_timer(&ent->timer, now + br_multicast_gmi(br)); + } + + if (to_send) + __grp_src_query_marked_and_rexmit(pg); + + return changed; +} + +/* State Msg type New state Actions + * EXCLUDE (X,Y) TO_IN (A) EXCLUDE (X+A,Y-A) (A)=GMI + * Send Q(G,X-A) + * Send Q(G) + */ +static bool __grp_src_toin_excl(struct net_bridge_port_group *pg, + __be32 *srcs, u32 nsrcs) +{ + struct net_bridge *br = pg->port->br; + u32 src_idx, to_send = pg->src_ents; + struct net_bridge_group_src *ent; + unsigned long now = jiffies; + bool changed = false; + struct br_ip src_ip; + + __grp_src_modify_flags_timer(pg, BR_SGRP_F_SEND, 0, true); + + memset(&src_ip, 0, sizeof(src_ip)); + src_ip.proto = htons(ETH_P_IP); + for (src_idx = 0; src_idx < nsrcs; src_idx++) { + src_ip.u.ip4 = srcs[src_idx]; + ent = br_multicast_find_group_src(pg, &src_ip); + if (ent) { + if (timer_pending(&ent->timer)) { + __grp_src_modify_flags(ent, 0, BR_SGRP_F_SEND); + to_send--; + } + } else { + ent = br_multicast_new_group_src(pg, &src_ip); + if (ent) + changed = true; + } + if (ent) + mod_timer(&ent->timer, now + br_multicast_gmi(br)); + } + + if (to_send) + __grp_src_query_marked_and_rexmit(pg); + + __grp_send_query_and_rexmit(pg); + + return changed; +} + +static bool br_multicast_toin(struct net_bridge_port_group *pg, + __be32 *srcs, u32 nsrcs) +{ + bool changed = false; + + switch (pg->filter_mode) { + case MCAST_INCLUDE: + changed = __grp_src_toin_incl(pg, srcs, nsrcs); + break; + case MCAST_EXCLUDE: + changed = __grp_src_toin_excl(pg, srcs, nsrcs); + break; + } + + return changed; +} + +/* State Msg type New state Actions + * INCLUDE (A) TO_EX (B) EXCLUDE (A*B,B-A) (B-A)=0 + * Delete (A-B) + * Send Q(G,A*B) + * Group Timer=GMI + */ +static void __grp_src_toex_incl(struct net_bridge_port_group *pg, + __be32 *srcs, u32 nsrcs) +{ + struct net_bridge_group_src *ent; + u32 src_idx, to_send = 0; + struct br_ip src_ip; + + __grp_src_modify_flags_all(pg, BR_SGRP_F_DELETE, BR_SGRP_F_SEND); + + memset(&src_ip, 0, sizeof(src_ip)); + src_ip.proto = htons(ETH_P_IP); + for (src_idx = 0; src_idx < nsrcs; src_idx++) { + src_ip.u.ip4 = srcs[src_idx]; + ent = br_multicast_find_group_src(pg, &src_ip); + if (ent) { + __grp_src_modify_flags(ent, BR_SGRP_F_SEND, + BR_SGRP_F_DELETE); + to_send++; + } else { + br_multicast_new_group_src(pg, &src_ip); + } + } + + __grp_src_delete_marked(pg); + if (to_send) + __grp_src_query_marked_and_rexmit(pg); +} + +/* State Msg type New state Actions + * EXCLUDE (X,Y) TO_EX (A) EXCLUDE (A-Y,Y*A) (A-X-Y)=Group Timer + * Delete (X-A) + * Delete (Y-A) + * Send Q(G,A-Y) + * Group Timer=GMI + */ +static bool __grp_src_toex_excl(struct net_bridge_port_group *pg, + __be32 *srcs, u32 nsrcs) +{ + struct net_bridge_group_src *ent; + u32 src_idx, to_send = 0; + bool changed = false; + struct br_ip src_ip; + + __grp_src_modify_flags_all(pg, BR_SGRP_F_DELETE, BR_SGRP_F_SEND); + + memset(&src_ip, 0, sizeof(src_ip)); + src_ip.proto = htons(ETH_P_IP); + for (src_idx = 0; src_idx < nsrcs; src_idx++) { + src_ip.u.ip4 = srcs[src_idx]; + ent = br_multicast_find_group_src(pg, &src_ip); + if (ent) { + __grp_src_modify_flags(ent, 0, BR_SGRP_F_DELETE); + } else { + ent = br_multicast_new_group_src(pg, &src_ip); + if (ent) 
{ + mod_timer(&ent->timer, pg->timer.expires); + changed = true; + } + } + if (ent && timer_pending(&ent->timer)) { + __grp_src_modify_flags(ent, BR_SGRP_F_SEND, 0); + to_send++; + } + } + + if (__grp_src_delete_marked(pg)) + changed = true; + if (to_send) + __grp_src_query_marked_and_rexmit(pg); + + return changed; +} + +static bool br_multicast_toex(struct net_bridge_port_group *pg, + __be32 *srcs, u32 nsrcs) +{ + struct net_bridge *br = pg->port->br; + bool changed = false; + + switch (pg->filter_mode) { + case MCAST_INCLUDE: + __grp_src_toex_incl(pg, srcs, nsrcs); + changed = true; + break; + case MCAST_EXCLUDE: + __grp_src_toex_excl(pg, srcs, nsrcs); + break; + } + + pg->filter_mode = MCAST_EXCLUDE; + mod_timer(&pg->timer, jiffies + br_multicast_gmi(br)); + + return changed; +} + static struct net_bridge_port_group * br_multicast_find_port(struct net_bridge_mdb_entry *mp, struct net_bridge_port *p, @@ -1413,6 +1680,12 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br, case IGMPV3_MODE_IS_EXCLUDE: changed = br_multicast_isexc(pg, grec->grec_src, nsrcs); break; + case IGMPV3_CHANGE_TO_INCLUDE: + changed = br_multicast_toin(pg, grec->grec_src, nsrcs); + break; + case IGMPV3_CHANGE_TO_EXCLUDE: + changed = br_multicast_toex(pg, grec->grec_src, nsrcs); + break; } if (changed) br_mdb_notify(br->dev, mdst, pg, RTM_NEWMDB); From patchwork Mon Aug 31 15:08:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 261731 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.1 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B72C6C433E2 for ; Mon, 31 Aug 2020 15:11:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9266B20866 for ; Mon, 31 Aug 2020 15:11:51 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=cumulusnetworks.com header.i=@cumulusnetworks.com header.b="WLhTbdMt" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728441AbgHaPLs (ORCPT ); Mon, 31 Aug 2020 11:11:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52134 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728320AbgHaPKP (ORCPT ); Mon, 31 Aug 2020 11:10:15 -0400 Received: from mail-wm1-x342.google.com (mail-wm1-x342.google.com [IPv6:2a00:1450:4864:20::342]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 498CBC06123D for ; Mon, 31 Aug 2020 08:10:13 -0700 (PDT) Received: by mail-wm1-x342.google.com with SMTP id s13so5729840wmh.4 for ; Mon, 31 Aug 2020 08:10:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cumulusnetworks.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=5rHaHeEW27aQeKn7DgkSF8lCaC4zYOPlmbEcK5oKJ8E=; b=WLhTbdMtV0EhI2mcciSJy6Eh9ashl1aJ7f4sQM4a25KHqlpcCCiLXhGiIeocun9A2+ nIhczss47B88cfHmYJ9R9pV3zL5604w5hWNdxaEvJ8uLr2OW9hp6B2GkdWjxwvCLkNKk U/MkvB+BRRwmxWAY9NrO06Wvywin3iVI56ST8= X-Google-DKIM-Signature: 
From patchwork Mon Aug 31 15:08:44 2020
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 261731
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, davem@davemloft.net, Nikolay Aleksandrov
Subject: [PATCH net-next 14/15] net: bridge: mcast: improve v3 query processing
Date: Mon, 31 Aug 2020 18:08:44 +0300
Message-Id: <20200831150845.1062447-15-nikolay@cumulusnetworks.com>
In-Reply-To: <20200831150845.1062447-1-nikolay@cumulusnetworks.com>
References: <20200831150845.1062447-1-nikolay@cumulusnetworks.com>

When an IGMPv3 query is received and we're operating in IGMPv3 mode, we need to avoid updating group timers if the query's suppress flag is set. We should also update timers only for groups in EXCLUDE filter mode.

Signed-off-by: Nikolay Aleksandrov
---
net/bridge/br_multicast.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 55c2729c61f4..3b1d9ef25723 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -2052,7 +2052,8 @@ static void br_ip4_multicast_query(struct net_bridge *br, } } else if (transport_len >= sizeof(*ih3)) { ih3 = igmpv3_query_hdr(skb); - if (ih3->nsrcs) + if (ih3->nsrcs || + (br->multicast_igmp_version == 3 && group && ih3->suppress)) goto out; max_delay = ih3->code ? @@ -2087,7 +2088,9 @@ static void br_ip4_multicast_query(struct net_bridge *br, pp = &p->next) { if (timer_pending(&p->timer) ?
time_after(p->timer.expires, now + max_delay) : - try_to_del_timer_sync(&p->timer) >= 0 + try_to_del_timer_sync(&p->timer) >= 0 && + (br->multicast_igmp_version == 2 || + p->filter_mode == MCAST_EXCLUDE)) mod_timer(&p->timer, now + max_delay); }
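For reference, the suppress flag tested through ih3->suppress sits in the byte that follows the group address in an IGMPv3 query, laid out as Resv(4 bits), S(1 bit), QRV(3 bits) per RFC 3376 sections 4.1.5 and 4.1.6. The sketch below is a userspace illustration only; the toy_igmpv3_query struct and helper are assumptions made for the example, while the kernel reads the same bit through the bitfields of its struct igmpv3_query.

/*
 * Userspace sketch of the IGMPv3 query byte that carries the S
 * ("suppress router-side processing") flag; not the kernel's
 * struct igmpv3_query, which expresses the same byte as bitfields.
 * Compile with: gcc -Wall -o suppress suppress.c
 */
#include <stdint.h>
#include <stdio.h>

struct toy_igmpv3_query {
	uint8_t type;		/* 0x11, membership query */
	uint8_t code;		/* max resp code */
	uint16_t csum;
	uint32_t group;		/* group address (0 for a general query) */
	uint8_t misc;		/* Resv(4) | S(1) | QRV(3), RFC 3376 4.1 */
	uint8_t qqic;
	uint16_t nsrcs;
	/* nsrcs 32-bit source addresses follow on the wire */
};

static int toy_query_suppress(const struct toy_igmpv3_query *q)
{
	return (q->misc >> 3) & 0x1;
}

int main(void)
{
	/* group-specific query with S set and QRV = 2 */
	struct toy_igmpv3_query q = { .type = 0x11, .group = 1, .misc = 0x0a };

	/* when S is set, a bridge in IGMPv3 mode leaves the group and
	 * port-group timers alone, as the first hunk above does */
	printf("suppress=%d qrv=%d\n", toy_query_suppress(&q), q.misc & 0x7);
	return 0;
}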
From patchwork Mon Aug 31 15:08:45 2020
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 261732
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org, davem@davemloft.net, Nikolay Aleksandrov
Subject: [PATCH net-next 15/15] net: bridge: mcast: destroy all entries via gc
Date: Mon, 31 Aug 2020 18:08:45 +0300
Message-Id: <20200831150845.1062447-16-nikolay@cumulusnetworks.com>
In-Reply-To: <20200831150845.1062447-1-nikolay@cumulusnetworks.com>
References: <20200831150845.1062447-1-nikolay@cumulusnetworks.com>

Since each entry type has timers that can be running simultaneously, we need to make sure that entries are not freed before their timers have finished. In order to do that, generalize the src gc work into a common mcast gc work and use a callback to free the entries (mdb, port group or src).

Signed-off-by: Nikolay Aleksandrov
---
net/bridge/br_multicast.c | 103 ++++++++++++++++++++++++++------------ net/bridge/br_private.h | 13 +++-- 2 files changed, 80 insertions(+), 36 deletions(-) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 3b1d9ef25723..92b206167f4f 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -140,6 +140,29 @@ struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, return br_mdb_ip_get_rcu(br, &ip); } +static void br_multicast_destroy_mdb_entry(struct net_bridge_mcast_gc *gc) +{ + struct net_bridge_mdb_entry *mp; + + mp = container_of(gc, struct net_bridge_mdb_entry, mcast_gc); + WARN_ON(!hlist_unhashed(&mp->mdb_node)); + WARN_ON(mp->ports); + + del_timer_sync(&mp->timer); + kfree_rcu(mp, rcu); +} + +static void br_multicast_del_mdb_entry(struct net_bridge_mdb_entry *mp) +{ + struct net_bridge *br = mp->br; + + rhashtable_remove_fast(&br->mdb_hash_tbl, &mp->rhnode, + br_mdb_rht_params); + hlist_del_init_rcu(&mp->mdb_node); + hlist_add_head(&mp->mcast_gc.gc_node, &br->mcast_gc_list); + queue_work(system_long_wq, &br->mcast_gc_work); +} + static void br_multicast_group_expired(struct timer_list *t) { struct net_bridge_mdb_entry *mp = from_timer(mp, t, timer); @@ -153,15 +176,20 @@ static void br_multicast_group_expired(struct timer_list *t) if (mp->ports) goto out; + br_multicast_del_mdb_entry(mp); +out: + spin_unlock(&br->multicast_lock); +} - rhashtable_remove_fast(&br->mdb_hash_tbl, &mp->rhnode, - br_mdb_rht_params); - hlist_del_rcu(&mp->mdb_node); +static void br_multicast_destroy_group_src(struct net_bridge_mcast_gc *gc) +{ + struct net_bridge_group_src *src; - kfree_rcu(mp, rcu); + src = container_of(gc, struct net_bridge_group_src, mcast_gc); + WARN_ON(!hlist_unhashed(&src->node)); -out: - spin_unlock(&br->multicast_lock); + del_timer_sync(&src->timer); + kfree(src); } static void br_multicast_del_group_src(struct net_bridge_group_src *src) @@ -170,8 +198,21 @@ static void br_multicast_del_group_src(struct net_bridge_group_src *src) hlist_del_init(&src->node); src->pg->src_ents--; - hlist_add_head(&src->del_node, &br->src_gc_list); - queue_work(system_long_wq, &br->src_gc_work); + hlist_add_head(&src->mcast_gc.gc_node, &br->mcast_gc_list); + queue_work(system_long_wq, &br->mcast_gc_work); +} + +static void br_multicast_destroy_port_group(struct net_bridge_mcast_gc *gc) +{ + struct net_bridge_port_group *pg; + + pg = 
container_of(gc, struct net_bridge_port_group, mcast_gc); + WARN_ON(!hlist_unhashed(&pg->mglist)); + WARN_ON(!hlist_empty(&pg->src_list)); + + del_timer_sync(&pg->rexmit_timer); + del_timer_sync(&pg->timer); + kfree_rcu(pg, rcu); } void br_multicast_del_pg(struct net_bridge_mdb_entry *mp, @@ -184,12 +225,11 @@ void br_multicast_del_pg(struct net_bridge_mdb_entry *mp, rcu_assign_pointer(*pp, pg->next); hlist_del_init(&pg->mglist); - del_timer(&pg->timer); - del_timer(&pg->rexmit_timer); hlist_for_each_entry_safe(ent, tmp, &pg->src_list, node) br_multicast_del_group_src(ent); br_mdb_notify(br->dev, mp, pg, RTM_DELMDB); - kfree_rcu(pg, rcu); + hlist_add_head(&pg->mcast_gc.gc_node, &br->mcast_gc_list); + queue_work(system_long_wq, &br->mcast_gc_work); if (!mp->ports && !mp->host_joined && netif_running(br->dev)) mod_timer(&mp->timer, jiffies); @@ -560,6 +600,7 @@ struct net_bridge_mdb_entry *br_multicast_new_group(struct net_bridge *br, mp->br = br; mp->addr = *group; + mp->mcast_gc.destroy = br_multicast_destroy_mdb_entry; timer_setup(&mp->timer, br_multicast_group_expired, 0); err = rhashtable_lookup_insert_fast(&br->mdb_hash_tbl, &mp->rhnode, br_mdb_rht_params); @@ -642,6 +683,7 @@ br_multicast_new_group_src(struct net_bridge_port_group *pg, struct br_ip *src_i grp_src->pg = pg; grp_src->br = pg->port->br; grp_src->addr = *src_ip; + grp_src->mcast_gc.destroy = br_multicast_destroy_group_src; timer_setup(&grp_src->timer, br_multicast_group_src_expired, 0); hlist_add_head(&grp_src->node, &pg->src_list); @@ -671,6 +713,7 @@ struct net_bridge_port_group *br_multicast_new_port_group( p->filter_mode = MCAST_INCLUDE; else p->filter_mode = MCAST_EXCLUDE; + p->mcast_gc.destroy = br_multicast_destroy_port_group; INIT_HLIST_HEAD(&p->src_list); rcu_assign_pointer(p->next, next); timer_setup(&p->timer, br_multicast_port_group_expired, 0); @@ -2584,29 +2627,28 @@ static void br_ip6_multicast_query_expired(struct timer_list *t) } #endif -static void __grp_src_gc(struct hlist_head *head) +static void br_multicast_do_gc(struct hlist_head *head) { - struct net_bridge_group_src *ent; + struct net_bridge_mcast_gc *gcent; struct hlist_node *tmp; - hlist_for_each_entry_safe(ent, tmp, head, del_node) { - hlist_del_init(&ent->del_node); - del_timer_sync(&ent->timer); - kfree(ent); + hlist_for_each_entry_safe(gcent, tmp, head, gc_node) { + hlist_del_init(&gcent->gc_node); + gcent->destroy(gcent); } } -static void br_multicast_src_gc(struct work_struct *work) +static void br_multicast_gc(struct work_struct *work) { struct net_bridge *br = container_of(work, struct net_bridge, - src_gc_work); + mcast_gc_work); HLIST_HEAD(deleted_head); spin_lock_bh(&br->multicast_lock); - hlist_move_list(&br->src_gc_list, &deleted_head); + hlist_move_list(&br->mcast_gc_list, &deleted_head); spin_unlock_bh(&br->multicast_lock); - __grp_src_gc(&deleted_head); + br_multicast_do_gc(&deleted_head); } void br_multicast_init(struct net_bridge *br) @@ -2649,8 +2691,8 @@ void br_multicast_init(struct net_bridge *br) br_ip6_multicast_query_expired, 0); #endif INIT_HLIST_HEAD(&br->mdb_list); - INIT_HLIST_HEAD(&br->src_gc_list); - INIT_WORK(&br->src_gc_work, br_multicast_src_gc); + INIT_HLIST_HEAD(&br->mcast_gc_list); + INIT_WORK(&br->mcast_gc_work, br_multicast_gc); } static void br_ip4_multicast_join_snoopers(struct net_bridge *br) @@ -2758,18 +2800,13 @@ void br_multicast_dev_del(struct net_bridge *br) struct hlist_node *tmp; spin_lock_bh(&br->multicast_lock); - hlist_for_each_entry_safe(mp, tmp, &br->mdb_list, mdb_node) { - 
del_timer(&mp->timer); - rhashtable_remove_fast(&br->mdb_hash_tbl, &mp->rhnode, - br_mdb_rht_params); - hlist_del_rcu(&mp->mdb_node); - kfree_rcu(mp, rcu); - } - hlist_move_list(&br->src_gc_list, &deleted_head); + hlist_for_each_entry_safe(mp, tmp, &br->mdb_list, mdb_node) + br_multicast_del_mdb_entry(mp); + hlist_move_list(&br->mcast_gc_list, &deleted_head); spin_unlock_bh(&br->multicast_lock); - __grp_src_gc(&deleted_head); - cancel_work_sync(&br->src_gc_work); + br_multicast_do_gc(&deleted_head); + cancel_work_sync(&br->mcast_gc_work); rcu_barrier(); } diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index cbcec3bf28ea..4225c72cec8b 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -219,6 +219,11 @@ struct net_bridge_fdb_entry { #define BR_SGRP_F_DELETE BIT(0) #define BR_SGRP_F_SEND BIT(1) +struct net_bridge_mcast_gc { + struct hlist_node gc_node; + void (*destroy)(struct net_bridge_mcast_gc *gc); +}; + struct net_bridge_group_src { struct hlist_node node; @@ -229,7 +234,7 @@ struct net_bridge_group_src { struct timer_list timer; struct net_bridge *br; - struct hlist_node del_node; + struct net_bridge_mcast_gc mcast_gc; }; struct net_bridge_port_group { @@ -247,6 +252,7 @@ struct net_bridge_port_group { struct timer_list rexmit_timer; struct hlist_node mglist; + struct net_bridge_mcast_gc mcast_gc; struct rcu_head rcu; }; @@ -260,6 +266,7 @@ struct net_bridge_mdb_entry { struct timer_list timer; struct hlist_node mdb_node; + struct net_bridge_mcast_gc mcast_gc; struct rcu_head rcu; }; @@ -433,7 +440,7 @@ struct net_bridge { struct rhashtable mdb_hash_tbl; - struct hlist_head src_gc_list; + struct hlist_head mcast_gc_list; struct hlist_head mdb_list; struct hlist_head router_list; @@ -447,7 +454,7 @@ struct net_bridge { struct bridge_mcast_own_query ip6_own_query; struct bridge_mcast_querier ip6_querier; #endif /* IS_ENABLED(CONFIG_IPV6) */ - struct work_struct src_gc_work; + struct work_struct mcast_gc_work; #endif struct timer_list hello_timer;
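The scheme this last patch introduces, an embedded gc node whose destroy callback knows how to tear down its containing object, can be pictured with a short single-threaded userspace sketch. The toy_* names below are illustrative, a plain singly linked list stands in for the hlist, and a direct function call stands in for the system_long_wq work item; the control flow mirrors br_multicast_del_mdb_entry() queueing onto br->mcast_gc_list and br_multicast_gc() invoking each entry's destroy callback.

/*
 * Single-threaded userspace sketch of the deferred destruction scheme:
 * every object embeds a gc node with a destroy callback, so one gc
 * list can free mdb entries, port groups and group sources alike.
 * Compile with: gcc -Wall -o gc gc.c
 */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define toy_container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct toy_gc {
	struct toy_gc *next;
	void (*destroy)(struct toy_gc *gc);
};

struct toy_mdb_entry {
	int group;
	struct toy_gc gc;
};

static struct toy_gc *toy_gc_list;

static void toy_destroy_mdb_entry(struct toy_gc *gc)
{
	struct toy_mdb_entry *mp =
		toy_container_of(gc, struct toy_mdb_entry, gc);

	/* the kernel would del_timer_sync() its timers before freeing */
	printf("freeing mdb entry for group %d\n", mp->group);
	free(mp);
}

/* analogue of hlist_add_head(&x->mcast_gc.gc_node, &br->mcast_gc_list) */
static void toy_gc_queue(struct toy_gc *gc)
{
	gc->next = toy_gc_list;
	toy_gc_list = gc;
}

/* analogue of the gc work draining the list and calling each destroy() */
static void toy_gc_run(void)
{
	struct toy_gc *gc = toy_gc_list, *next;

	toy_gc_list = NULL;
	for (; gc; gc = next) {
		next = gc->next;
		gc->destroy(gc);	/* type-specific teardown */
	}
}

int main(void)
{
	struct toy_mdb_entry *mp = malloc(sizeof(*mp));

	if (!mp)
		return 1;
	mp->group = 1;
	mp->gc.destroy = toy_destroy_mdb_entry;
	toy_gc_queue(&mp->gc);	/* unlink now ... */
	toy_gc_run();		/* ... free later, via the callback */
	return 0;
}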