From patchwork Fri May 29 00:25:34 2020
X-Patchwork-Submitter: Pablo Neira Ayuso
X-Patchwork-Id: 218252
From: Pablo Neira Ayuso
To: netfilter-devel@vger.kernel.org
Cc: davem@davemloft.net, netdev@vger.kernel.org, paulb@mellanox.com,
    ozsh@mellanox.com, vladbu@mellanox.com, jiri@resnulli.us,
    kuba@kernel.org, saeedm@mellanox.com, michael.chan@broadcom.com,
    sriharsha.basavapatna@broadcom.com
Subject: [PATCH net-next 1/8] netfilter: nf_flowtable: expose nf_flow_table_gc_cleanup()
Date: Fri, 29 May 2020 02:25:34 +0200
Message-Id: <20200529002541.19743-2-pablo@netfilter.org>
In-Reply-To: <20200529002541.19743-1-pablo@netfilter.org>

This function schedules the flow teardown state and forces a gc run.
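A minimal usage sketch of the symbol being exposed (the caller name is
hypothetical and not part of this patch; only nf_flow_table_gc_cleanup()
and its signature come from the diff below):

/* Tear down the flows in @flowtable that are bound to @dev and force an
 * immediate gc run, so the entries are released before the device goes away.
 */
static void example_flowtable_dev_down(struct nf_flowtable *flowtable,
                                       struct net_device *dev)
{
        nf_flow_table_gc_cleanup(flowtable, dev);
}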
Signed-off-by: Pablo Neira Ayuso
---
 include/net/netfilter/nf_flow_table.h | 2 ++
 net/netfilter/nf_flow_table_core.c    | 6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index c54a7f707e50..d7338bfd7b0f 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -175,6 +175,8 @@ void flow_offload_refresh(struct nf_flowtable *flow_table,
 struct flow_offload_tuple_rhash *flow_offload_lookup(struct nf_flowtable *flow_table,
                                                      struct flow_offload_tuple *tuple);
 
+void nf_flow_table_gc_cleanup(struct nf_flowtable *flowtable,
+                              struct net_device *dev);
 void nf_flow_table_cleanup(struct net_device *dev);
 
 int nf_flow_table_init(struct nf_flowtable *flow_table);
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index 42da6e337276..6a3034f84ab6 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -588,8 +588,8 @@ static void nf_flow_table_do_cleanup(struct flow_offload *flow, void *data)
         flow_offload_teardown(flow);
 }
 
-static void nf_flow_table_iterate_cleanup(struct nf_flowtable *flowtable,
-                                          struct net_device *dev)
+void nf_flow_table_gc_cleanup(struct nf_flowtable *flowtable,
+                              struct net_device *dev)
 {
         nf_flow_table_iterate(flowtable, nf_flow_table_do_cleanup, dev);
         flush_delayed_work(&flowtable->gc_work);
@@ -602,7 +602,7 @@ void nf_flow_table_cleanup(struct net_device *dev)
 
         mutex_lock(&flowtable_lock);
         list_for_each_entry(flowtable, &flowtables, list)
-                nf_flow_table_iterate_cleanup(flowtable, dev);
+                nf_flow_table_gc_cleanup(flowtable, dev);
         mutex_unlock(&flowtable_lock);
 }
 EXPORT_SYMBOL_GPL(nf_flow_table_cleanup);

From patchwork Fri May 29 00:25:35 2020
X-Patchwork-Id: 218255
From: Pablo Neira Ayuso
Subject: [PATCH net-next 2/8] net: flow_offload: consolidate indirect flow_block infrastructure
Date: Fri, 29 May 2020 02:25:35 +0200
Message-Id: <20200529002541.19743-3-pablo@netfilter.org>
In-Reply-To: <20200529002541.19743-1-pablo@netfilter.org>

Tunnel devices provide no dev->netdev_ops->ndo_setup_tc(...) interface,
and the tunnel device and route control plane does not provide an obvious
way to relate tunnel and physical devices.

This patch allows drivers to register a tunnel device offload handler for
the tc and netfilter frontends through flow_indr_dev_register() and
flow_indr_dev_unregister().

The frontend calls flow_indr_dev_setup_offload(), which iterates over the
list of drivers offering tunnel device hardware offload support and sets
up the flow block for this tunnel device.

If the driver module is removed, the indirect flow_block ends up with a
stale callback reference. The module removal path triggers the
dev_shutdown() path to remove the qdisc and the flow_blocks for the
physical devices. However, this is not useful for tunnel devices, where
the relation between the physical and the tunnel device is not explicit.

This patch introduces a cleanup callback that is invoked when the driver
module is removed to clean up the tunnel device flow_block. It defines
struct flow_block_indr and uses it from flow_block_cb to store the
information that the frontend requires to perform the flow_block_cb
cleanup on module removal.
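A sketch of how the pieces fit together (functions prefixed with example_
are illustrative and not part of this patch; only the flow_indr_dev_*()
calls, flow_indr_block_bind_cb_t and flow_setup_cb_t come from the diff
below):

#include <net/flow_offload.h>

/* Driver-side bind callback: the core invokes this for every request
 * issued through flow_indr_dev_setup_offload(), and the driver decides
 * whether it can offload the given tunnel device.
 */
static int example_indr_setup_tc_cb(struct net_device *dev, void *cb_priv,
                                    enum tc_setup_type type, void *type_data)
{
        if (type != TC_SETUP_BLOCK)
                return -EOPNOTSUPP;
        /* ... populate flow_block_cb's on the offload request ... */
        return 0;
}

/* Driver module init/exit pairing. */
static int example_driver_init(void *priv)
{
        return flow_indr_dev_register(example_indr_setup_tc_cb, priv);
}

static void example_driver_exit(void *priv, flow_setup_cb_t *setup_cb)
{
        /* Runs the recorded cleanup callback for any indirect flow_block
         * this driver still holds.
         */
        flow_indr_dev_unregister(example_indr_setup_tc_cb, priv, setup_cb);
}

/* Frontend side (tc/netfilter): offer @dev to every registered driver and
 * record @cleanup so the block can be released on module removal.
 */
static int example_frontend_bind(struct net_device *dev,
                                 struct flow_block_offload *bo,
                                 void (*cleanup)(struct flow_block_cb *block_cb))
{
        return flow_indr_dev_setup_offload(dev, TC_SETUP_BLOCK, NULL,
                                           bo, cleanup);
}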
Signed-off-by: Pablo Neira Ayuso
---
 include/net/flow_offload.h |  19 +++++
 net/core/flow_offload.c    | 157 +++++++++++++++++++++++++++++++++++++
 2 files changed, 176 insertions(+)

diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index 95d633785ef9..5493282348fa 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -443,6 +443,16 @@ enum tc_setup_type;
 typedef int flow_setup_cb_t(enum tc_setup_type type, void *type_data,
                             void *cb_priv);
 
+struct flow_block_cb;
+
+struct flow_block_indr {
+        struct list_head                list;
+        struct net_device               *dev;
+        enum flow_block_binder_type     binder_type;
+        void                            *data;
+        void                            (*cleanup)(struct flow_block_cb *block_cb);
+};
+
 struct flow_block_cb {
         struct list_head        driver_list;
         struct list_head        list;
@@ -450,6 +460,7 @@ struct flow_block_cb {
         void                    *cb_ident;
         void                    *cb_priv;
         void                    (*release)(void *cb_priv);
+        struct flow_block_indr  indr;
         unsigned int            refcnt;
 };
 
@@ -523,6 +534,14 @@ static inline void flow_block_init(struct flow_block *flow_block)
 typedef int flow_indr_block_bind_cb_t(struct net_device *dev, void *cb_priv,
                                       enum tc_setup_type type, void *type_data);
 
+int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv);
+void flow_indr_dev_unregister(flow_indr_block_bind_cb_t *cb, void *cb_priv,
+                              flow_setup_cb_t *setup_cb);
+int flow_indr_dev_setup_offload(struct net_device *dev,
+                                enum tc_setup_type type, void *data,
+                                struct flow_block_offload *bo,
+                                void (*cleanup)(struct flow_block_cb *block_cb));
+
 typedef void flow_indr_block_cmd_t(struct net_device *dev,
                                    flow_indr_block_bind_cb_t *cb, void *cb_priv,
                                    enum flow_block_command command);
diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
index e64941c526b1..8cd7da2586ae 100644
--- a/net/core/flow_offload.c
+++ b/net/core/flow_offload.c
@@ -317,6 +317,163 @@ int flow_block_cb_setup_simple(struct flow_block_offload *f,
 }
 EXPORT_SYMBOL(flow_block_cb_setup_simple);
 
+static DEFINE_MUTEX(flow_indr_block_lock);
+static LIST_HEAD(flow_block_indr_list);
+static LIST_HEAD(flow_block_indr_dev_list);
+
+struct flow_indr_dev {
+        struct list_head                list;
+        flow_indr_block_bind_cb_t       *cb;
+        void                            *cb_priv;
+        refcount_t                      refcnt;
+        struct rcu_head                 rcu;
+};
+
+static struct flow_indr_dev *flow_indr_dev_alloc(flow_indr_block_bind_cb_t *cb,
+                                                 void *cb_priv)
+{
+        struct flow_indr_dev *indr_dev;
+
+        indr_dev = kmalloc(sizeof(*indr_dev), GFP_KERNEL);
+        if (!indr_dev)
+                return NULL;
+
+        indr_dev->cb = cb;
+        indr_dev->cb_priv = cb_priv;
+        refcount_set(&indr_dev->refcnt, 1);
+
+        return indr_dev;
+}
+
+int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv)
+{
+        struct flow_indr_dev *indr_dev;
+
+        mutex_lock(&flow_indr_block_lock);
+        list_for_each_entry(indr_dev, &flow_block_indr_dev_list, list) {
+                if (indr_dev->cb == cb &&
+                    indr_dev->cb_priv == cb_priv) {
+                        refcount_inc(&indr_dev->refcnt);
+                        mutex_unlock(&flow_indr_block_lock);
+                        return 0;
+                }
+        }
+
+        indr_dev = flow_indr_dev_alloc(cb, cb_priv);
+        if (!indr_dev) {
+                mutex_unlock(&flow_indr_block_lock);
+                return -ENOMEM;
+        }
+
+        list_add(&indr_dev->list, &flow_block_indr_dev_list);
+        mutex_unlock(&flow_indr_block_lock);
+
+        return 0;
+}
+EXPORT_SYMBOL(flow_indr_dev_register);
+
+static void __flow_block_indr_cleanup(flow_setup_cb_t *setup_cb, void *cb_priv,
+                                      struct list_head *cleanup_list)
+{
+        struct flow_block_cb *this, *next;
+
+        list_for_each_entry_safe(this, next, &flow_block_indr_list, indr.list) {
+                if (this->cb == setup_cb &&
+                    this->cb_priv == cb_priv) {
+                        list_move(&this->indr.list, cleanup_list);
+                        return;
+                }
+        }
+}
+
+static void flow_block_indr_notify(struct list_head *cleanup_list)
+{
+        struct flow_block_cb *this, *next;
+
+        list_for_each_entry_safe(this, next, cleanup_list, indr.list) {
+                list_del(&this->indr.list);
+                this->indr.cleanup(this);
+        }
+}
+
+void flow_indr_dev_unregister(flow_indr_block_bind_cb_t *cb, void *cb_priv,
+                              flow_setup_cb_t *setup_cb)
+{
+        struct flow_indr_dev *this, *next, *indr_dev = NULL;
+        LIST_HEAD(cleanup_list);
+
+        mutex_lock(&flow_indr_block_lock);
+        list_for_each_entry_safe(this, next, &flow_block_indr_dev_list, list) {
+                if (this->cb == cb &&
+                    this->cb_priv == cb_priv &&
+                    refcount_dec_and_test(&this->refcnt)) {
+                        indr_dev = this;
+                        list_del(&indr_dev->list);
+                        break;
+                }
+        }
+
+        if (!indr_dev) {
+                mutex_unlock(&flow_indr_block_lock);
+                return;
+        }
+
+        __flow_block_indr_cleanup(setup_cb, cb_priv, &cleanup_list);
+        mutex_unlock(&flow_indr_block_lock);
+
+        flow_block_indr_notify(&cleanup_list);
+        kfree(indr_dev);
+}
+EXPORT_SYMBOL(flow_indr_dev_unregister);
+
+static void flow_block_indr_init(struct flow_block_cb *flow_block,
+                                 struct flow_block_offload *bo,
+                                 struct net_device *dev, void *data,
+                                 void (*cleanup)(struct flow_block_cb *block_cb))
+{
+        flow_block->indr.binder_type = bo->binder_type;
+        flow_block->indr.data = data;
+        flow_block->indr.dev = dev;
+        flow_block->indr.cleanup = cleanup;
+}
+
+static void __flow_block_indr_binding(struct flow_block_offload *bo,
+                                      struct net_device *dev, void *data,
+                                      void (*cleanup)(struct flow_block_cb *block_cb))
+{
+        struct flow_block_cb *block_cb;
+
+        list_for_each_entry(block_cb, &bo->cb_list, list) {
+                switch (bo->command) {
+                case FLOW_BLOCK_BIND:
+                        flow_block_indr_init(block_cb, bo, dev, data, cleanup);
+                        list_add(&block_cb->indr.list, &flow_block_indr_list);
+                        break;
+                case FLOW_BLOCK_UNBIND:
+                        list_del(&block_cb->indr.list);
+                        break;
+                }
+        }
+}
+
+int flow_indr_dev_setup_offload(struct net_device *dev,
+                                enum tc_setup_type type, void *data,
+                                struct flow_block_offload *bo,
+                                void (*cleanup)(struct flow_block_cb *block_cb))
+{
+        struct flow_indr_dev *this;
+
+        mutex_lock(&flow_indr_block_lock);
+        list_for_each_entry(this, &flow_block_indr_dev_list, list)
+                this->cb(dev, this->cb_priv, type, bo);
+
+        __flow_block_indr_binding(bo, dev, data, cleanup);
+        mutex_unlock(&flow_indr_block_lock);
+
+        return list_empty(&bo->cb_list) ? -EOPNOTSUPP : 0;
+}
+EXPORT_SYMBOL(flow_indr_dev_setup_offload);
+
 static LIST_HEAD(block_cb_list);
 
 static struct rhashtable indr_setup_block_ht;

From patchwork Fri May 29 00:25:39 2020
X-Patchwork-Id: 218253
From: Pablo Neira Ayuso
Subject: [PATCH net-next 6/8] nfp: update indirect block support
Date: Fri, 29 May 2020 02:25:39 +0200
Message-Id: <20200529002541.19743-7-pablo@netfilter.org>
In-Reply-To: <20200529002541.19743-1-pablo@netfilter.org>

Register the indirect block setup callback via flow_indr_dev_register()
and unregister it via flow_indr_dev_unregister().
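The conversion pattern shared by the driver updates in this series, as a
sketch (illustrative only, not the nfp code): the netdev eligibility check
moves into the indirect setup callback itself, since there is no longer a
NETDEV_REGISTER/NETDEV_UNREGISTER notifier deciding which devices to track.

static int example_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv,
                                    enum tc_setup_type type, void *type_data)
{
        /* Reject devices this driver cannot offload. */
        if (!netif_is_vxlan(netdev))    /* driver-specific check */
                return -EOPNOTSUPP;

        switch (type) {
        case TC_SETUP_BLOCK:
                /* bind/unbind the indirect flow_block here */
                return 0;
        default:
                return -EOPNOTSUPP;
        }
}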
Signed-off-by: Pablo Neira Ayuso
---
 .../net/ethernet/netronome/nfp/flower/main.c | 11 +++---
 .../net/ethernet/netronome/nfp/flower/main.h |  7 ++--
 .../ethernet/netronome/nfp/flower/offload.c  | 35 ++++---------------
 3 files changed, 17 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.c b/drivers/net/ethernet/netronome/nfp/flower/main.c
index d054553c75e0..0410c793a488 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.c
@@ -830,6 +830,10 @@ static int nfp_flower_init(struct nfp_app *app)
         if (err)
                 goto err_cleanup;
 
+        err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
+        if (err)
+                goto err_cleanup;
+
         if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM)
                 nfp_flower_qos_init(app);
 
@@ -856,6 +860,9 @@ static void nfp_flower_clean(struct nfp_app *app)
         skb_queue_purge(&app_priv->cmsg_skbs_low);
         flush_work(&app_priv->cmsg_work);
 
+        flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app,
+                                 nfp_flower_setup_indr_block_cb);
+
         if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM)
                 nfp_flower_qos_cleanup(app);
 
@@ -959,10 +966,6 @@ nfp_flower_netdev_event(struct nfp_app *app, struct net_device *netdev,
                 return ret;
         }
 
-        ret = nfp_flower_reg_indir_block_handler(app, netdev, event);
-        if (ret & NOTIFY_STOP_MASK)
-                return ret;
-
         ret = nfp_flower_internal_port_event_handler(app, netdev, event);
         if (ret & NOTIFY_STOP_MASK)
                 return ret;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index 59abea2a39ad..6c3dc3baf387 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -458,9 +458,10 @@ void nfp_flower_qos_cleanup(struct nfp_app *app);
 int nfp_flower_setup_qos_offload(struct nfp_app *app, struct net_device *netdev,
                                  struct tc_cls_matchall_offload *flow);
 void nfp_flower_stats_rlim_reply(struct nfp_app *app, struct sk_buff *skb);
-int nfp_flower_reg_indir_block_handler(struct nfp_app *app,
-                                       struct net_device *netdev,
-                                       unsigned long event);
+int nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv,
+                                enum tc_setup_type type, void *type_data);
+int nfp_flower_setup_indr_block_cb(enum tc_setup_type type, void *type_data,
+                                   void *cb_priv);
 void
 __nfp_flower_non_repr_priv_get(struct nfp_flower_non_repr_priv *non_repr_priv);
 
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index c694dbc239d0..10a4e86f5371 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -1618,8 +1618,8 @@ nfp_flower_indr_block_cb_priv_lookup(struct nfp_app *app,
         return NULL;
 }
 
-static int nfp_flower_setup_indr_block_cb(enum tc_setup_type type,
-                                          void *type_data, void *cb_priv)
+int nfp_flower_setup_indr_block_cb(enum tc_setup_type type,
+                                   void *type_data, void *cb_priv)
 {
         struct nfp_flower_indr_block_cb_priv *priv = cb_priv;
         struct flow_cls_offload *flower = type_data;
@@ -1707,10 +1707,13 @@ nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
         return 0;
 }
 
-static int
+int
 nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv,
                             enum tc_setup_type type, void *type_data)
 {
+        if (!nfp_fl_is_netdev_to_offload(netdev))
+                return -EOPNOTSUPP;
+
         switch (type) {
         case TC_SETUP_BLOCK:
                 return nfp_flower_setup_indr_tc_block(netdev, cb_priv,
@@ -1719,29 +1722,3 @@ nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv,
                 return -EOPNOTSUPP;
         }
 }
-
-int nfp_flower_reg_indir_block_handler(struct nfp_app *app,
-                                       struct net_device *netdev,
-                                       unsigned long event)
-{
-        int err;
-
-        if (!nfp_fl_is_netdev_to_offload(netdev))
-                return NOTIFY_OK;
-
-        if (event == NETDEV_REGISTER) {
-                err = __flow_indr_block_cb_register(netdev, app,
-                                                    nfp_flower_indr_setup_tc_cb,
-                                                    app);
-                if (err)
-                        nfp_flower_cmsg_warn(app,
-                                             "Indirect block reg failed - %s\n",
-                                             netdev->name);
-        } else if (event == NETDEV_UNREGISTER) {
-                __flow_indr_block_cb_unregister(netdev,
-                                                nfp_flower_indr_setup_tc_cb,
-                                                app);
-        }
-
-        return NOTIFY_OK;
-}

From patchwork Fri May 29 00:25:40 2020
X-Patchwork-Id: 218254
From: Pablo Neira Ayuso
Subject: [PATCH net-next 7/8] bnxt_tc: update indirect block support
Date: Fri, 29 May 2020 02:25:40 +0200
Message-Id: <20200529002541.19743-8-pablo@netfilter.org>
In-Reply-To: <20200529002541.19743-1-pablo@netfilter.org>
Register the indirect block setup callback via flow_indr_dev_register()
and unregister it via flow_indr_dev_unregister().

Signed-off-by: Pablo Neira Ayuso
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.h    |  1 -
 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c | 51 +++++---------------
 2 files changed, 12 insertions(+), 40 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index c04ac4a36005..f7fab249480d 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1875,7 +1875,6 @@ struct bnxt {
         u8                      dsn[8];
         struct bnxt_tc_info     *tc_info;
         struct list_head        tc_indr_block_list;
-        struct notifier_block   tc_netdev_nb;
         struct dentry           *debugfs_pdev;
         struct device           *hwmon_dev;
 };
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
index 782ea0771221..0eef4f5e4a46 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
@@ -1939,53 +1939,25 @@ static int bnxt_tc_setup_indr_block(struct net_device *netdev, struct bnxt *bp,
         return 0;
 }
 
-static int bnxt_tc_setup_indr_cb(struct net_device *netdev, void *cb_priv,
-                                 enum tc_setup_type type, void *type_data)
-{
-        switch (type) {
-        case TC_SETUP_BLOCK:
-                return bnxt_tc_setup_indr_block(netdev, cb_priv, type_data);
-        default:
-                return -EOPNOTSUPP;
-        }
-}
-
 static bool bnxt_is_netdev_indr_offload(struct net_device *netdev)
 {
         return netif_is_vxlan(netdev);
 }
 
-static int bnxt_tc_indr_block_event(struct notifier_block *nb,
-                                    unsigned long event, void *ptr)
+static int bnxt_tc_setup_indr_cb(struct net_device *netdev, void *cb_priv,
+                                 enum tc_setup_type type, void *type_data)
 {
-        struct net_device *netdev;
-        struct bnxt *bp;
-        int rc;
-
-        netdev = netdev_notifier_info_to_dev(ptr);
         if (!bnxt_is_netdev_indr_offload(netdev))
-                return NOTIFY_OK;
-
-        bp = container_of(nb, struct bnxt, tc_netdev_nb);
+                return -EOPNOTSUPP;
 
-        switch (event) {
-        case NETDEV_REGISTER:
-                rc = __flow_indr_block_cb_register(netdev, bp,
-                                                   bnxt_tc_setup_indr_cb,
-                                                   bp);
-                if (rc)
-                        netdev_info(bp->dev,
-                                    "Failed to register indirect blk: dev: %s\n",
-                                    netdev->name);
-                break;
-        case NETDEV_UNREGISTER:
-                __flow_indr_block_cb_unregister(netdev,
-                                                bnxt_tc_setup_indr_cb,
-                                                bp);
+        switch (type) {
+        case TC_SETUP_BLOCK:
+                return bnxt_tc_setup_indr_block(netdev, cb_priv, type_data);
+        default:
                 break;
         }
 
-        return NOTIFY_DONE;
+        return -EOPNOTSUPP;
 }
 
 static const struct rhashtable_params bnxt_tc_flow_ht_params = {
@@ -2074,8 +2046,8 @@ int bnxt_init_tc(struct bnxt *bp)
 
         /* init indirect block notifications */
         INIT_LIST_HEAD(&bp->tc_indr_block_list);
-        bp->tc_netdev_nb.notifier_call = bnxt_tc_indr_block_event;
-        rc = register_netdevice_notifier(&bp->tc_netdev_nb);
+
+        rc = flow_indr_dev_register(bnxt_tc_setup_indr_cb, bp);
         if (!rc)
                 return 0;
 
@@ -2101,7 +2073,8 @@ void bnxt_shutdown_tc(struct bnxt *bp)
         if (!bnxt_tc_flower_enabled(bp))
                 return;
 
-        unregister_netdevice_notifier(&bp->tc_netdev_nb);
+        flow_indr_dev_unregister(bnxt_tc_setup_indr_cb, bp,
+                                 bnxt_tc_setup_indr_block_cb);
         rhashtable_destroy(&tc_info->flow_table);
         rhashtable_destroy(&tc_info->l2_table);
         rhashtable_destroy(&tc_info->decap_l2_table);