From patchwork Tue May 26 14:05:38 2020
X-Patchwork-Submitter: Hangbin Liu
X-Patchwork-Id: 218538
From: Hangbin Liu <liuhangbin@gmail.com>
To: bpf@vger.kernel.org
Cc: netdev@vger.kernel.org, Toke Høiland-Jørgensen, Jiri Benc,
    Jesper Dangaard Brouer, Eelco Chaudron, ast@kernel.org,
    Daniel Borkmann, Lorenzo Bianconi, Hangbin Liu
Subject: [PATCHv4 bpf-next 1/2] xdp: add a new helper for dev map multicast support
Date: Tue, 26 May 2020 22:05:38 +0800
Message-Id: <20200526140539.4103528-2-liuhangbin@gmail.com>
In-Reply-To: <20200526140539.4103528-1-liuhangbin@gmail.com>
References: <20200415085437.23028-1-liuhangbin@gmail.com> <20200526140539.4103528-1-liuhangbin@gmail.com>

This patch adds XDP multicast support. In this implementation we add a new
helper that accepts two maps: a forward map and an exclude map. The packet is
redirected to all the interfaces in the *forward map*, excluding the
interfaces that are in the *exclude map*.

To achieve this I add a new ex_map field to struct bpf_redirect_info. In the
helper I set tgt_value to NULL to distinguish this case from
bpf_xdp_redirect_map().

We also add a flag *BPF_F_EXCLUDE_INGRESS* in case you don't want to create an
exclude map for each interface and just want to exclude the ingress interface.

The generic data path is kept in net/core/filter.c. The native data path is in
kernel/bpf/devmap.c so we can use direct calls to get better performance.

v4: Fix bpf_xdp_redirect_map_multi_proto arg2_type typo.
v3: Based on Toke's suggestion, do the following updates:
    a) Update bpf_redirect_map_multi() description in bpf.h.
    b) Fix exclude_ifindex checking order in dev_in_exclude_map().
    c) Fix one more xdpf clone in dev_map_enqueue_multi().
    d) In dev_map_enqueue_multi(), move on to the next interface if the
       current one is not able to forward, instead of aborting the whole loop.
    e) Remove READ_ONCE/WRITE_ONCE for ex_map.
v2: Add new syscall bpf_xdp_redirect_map_multi() which could accept
    include/exclude maps directly.
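For reference, a minimal sketch (not part of the patch) of how an XDP program
could call the new helper. The map and section names are only illustrative,
and it assumes the headers updated by this series declare
bpf_redirect_map_multi(); patch 2/2 adds a complete sample.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Redirect every packet to all devices in forward_map, skipping the devices
 * in exclude_map as well as the ingress interface itself.
 */
struct bpf_map_def SEC("maps") forward_map = {
	.type		= BPF_MAP_TYPE_DEVMAP_HASH,
	.key_size	= sizeof(__u32),
	.value_size	= sizeof(int),
	.max_entries	= 128,
};

struct bpf_map_def SEC("maps") exclude_map = {
	.type		= BPF_MAP_TYPE_DEVMAP_HASH,
	.key_size	= sizeof(__u32),
	.value_size	= sizeof(int),
	.max_entries	= 128,
};

SEC("xdp_multicast")
int xdp_multicast_prog(struct xdp_md *ctx)
{
	return bpf_redirect_map_multi(&forward_map, &exclude_map,
				      BPF_F_EXCLUDE_INGRESS);
}

char _license[] SEC("license") = "GPL";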
Signed-off-by: Hangbin Liu --- include/linux/bpf.h | 20 ++++++ include/linux/filter.h | 1 + include/net/xdp.h | 1 + include/uapi/linux/bpf.h | 22 +++++- kernel/bpf/devmap.c | 124 +++++++++++++++++++++++++++++++++ kernel/bpf/verifier.c | 6 ++ net/core/filter.c | 101 +++++++++++++++++++++++++-- net/core/xdp.c | 26 +++++++ tools/include/uapi/linux/bpf.h | 22 +++++- 9 files changed, 316 insertions(+), 7 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index efe8836b5c48..d1c169bec6b5 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1240,6 +1240,11 @@ int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp, struct net_device *dev_rx); int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, struct net_device *dev_rx); +bool dev_in_exclude_map(struct bpf_dtab_netdev *obj, struct bpf_map *map, + int exclude_ifindex); +int dev_map_enqueue_multi(struct xdp_buff *xdp, struct net_device *dev_rx, + struct bpf_map *map, struct bpf_map *ex_map, + bool exclude_ingress); int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, struct bpf_prog *xdp_prog); @@ -1377,6 +1382,21 @@ int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, return 0; } +static inline +bool dev_in_exclude_map(struct bpf_dtab_netdev *obj, struct bpf_map *map, + int exclude_ifindex) +{ + return false; +} + +static inline +int dev_map_enqueue_multi(struct xdp_buff *xdp, struct net_device *dev_rx, + struct bpf_map *map, struct bpf_map *ex_map, + bool exclude_ingress) +{ + return 0; +} + struct sk_buff; static inline int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, diff --git a/include/linux/filter.h b/include/linux/filter.h index 73d06a39e2d6..5d9c6ac6ade3 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -612,6 +612,7 @@ struct bpf_redirect_info { u32 tgt_index; void *tgt_value; struct bpf_map *map; + struct bpf_map *ex_map; u32 kern_flags; }; diff --git a/include/net/xdp.h b/include/net/xdp.h index 90f11760bd12..967684aa096a 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -105,6 +105,7 @@ void xdp_warn(const char *msg, const char *func, const int line); #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__) struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp); +struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf); /* Convert xdp_buff to xdp_frame */ static inline diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 97e1fd19ff58..000b0cf961ea 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -3157,6 +3157,20 @@ union bpf_attr { * **bpf_sk_cgroup_id**\ (). * Return * The id is returned or 0 in case the id could not be retrieved. + * + * int bpf_redirect_map_multi(struct bpf_map *map, struct bpf_map *ex_map, u64 flags) + * Description + * Redirect the packet to ALL the interfaces in *map*, but + * exclude the interfaces in *ex_map* (which may be NULL). + * + * Currently the *flags* only supports *BPF_F_EXCLUDE_INGRESS*, + * which additionally excludes the current ingress device. + * + * See also bpf_redirect_map(), which supports redirecting + * packet to a specific ifindex in the map. + * Return + * **XDP_REDIRECT** on success, or **XDP_ABORTED** on error. 
+ * */ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -3288,7 +3302,8 @@ union bpf_attr { FN(seq_printf), \ FN(seq_write), \ FN(sk_cgroup_id), \ - FN(sk_ancestor_cgroup_id), + FN(sk_ancestor_cgroup_id), \ + FN(redirect_map_multi), /* integer value in 'imm' field of BPF_CALL instruction selects which helper * function eBPF program intends to call @@ -3417,6 +3432,11 @@ enum bpf_lwt_encap_mode { BPF_LWT_ENCAP_IP, }; +/* BPF_FUNC_redirect_map_multi flags. */ +enum { + BPF_F_EXCLUDE_INGRESS = (1ULL << 0), +}; + #define __bpf_md_ptr(type, name) \ union { \ type name; \ diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index a51d9fb7a359..ecc5c44a5bab 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -455,6 +455,130 @@ int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, return __xdp_enqueue(dev, xdp, dev_rx); } +/* Use direct call in fast path instead of map->ops->map_get_next_key() */ +static int devmap_get_next_key(struct bpf_map *map, void *key, void *next_key) +{ + + switch (map->map_type) { + case BPF_MAP_TYPE_DEVMAP: + return dev_map_get_next_key(map, key, next_key); + case BPF_MAP_TYPE_DEVMAP_HASH: + return dev_map_hash_get_next_key(map, key, next_key); + default: + break; + } + + return -ENOENT; +} + +bool dev_in_exclude_map(struct bpf_dtab_netdev *obj, struct bpf_map *map, + int exclude_ifindex) +{ + struct bpf_dtab_netdev *in_obj = NULL; + u32 key, next_key; + int err; + + if (obj->dev->ifindex == exclude_ifindex) + return true; + + if (!map) + return false; + + devmap_get_next_key(map, NULL, &key); + + for (;;) { + switch (map->map_type) { + case BPF_MAP_TYPE_DEVMAP: + in_obj = __dev_map_lookup_elem(map, key); + break; + case BPF_MAP_TYPE_DEVMAP_HASH: + in_obj = __dev_map_hash_lookup_elem(map, key); + break; + default: + break; + } + + if (in_obj && in_obj->dev->ifindex == obj->dev->ifindex) + return true; + + err = devmap_get_next_key(map, &key, &next_key); + + if (err) + break; + + key = next_key; + } + + return false; +} + +int dev_map_enqueue_multi(struct xdp_buff *xdp, struct net_device *dev_rx, + struct bpf_map *map, struct bpf_map *ex_map, + bool exclude_ingress) +{ + struct bpf_dtab_netdev *obj = NULL; + struct xdp_frame *xdpf, *nxdpf; + struct net_device *dev; + bool first = true; + u32 key, next_key; + int err; + + devmap_get_next_key(map, NULL, &key); + + xdpf = convert_to_xdp_frame(xdp); + if (unlikely(!xdpf)) + return -EOVERFLOW; + + for (;;) { + switch (map->map_type) { + case BPF_MAP_TYPE_DEVMAP: + obj = __dev_map_lookup_elem(map, key); + break; + case BPF_MAP_TYPE_DEVMAP_HASH: + obj = __dev_map_hash_lookup_elem(map, key); + break; + default: + break; + } + + if (!obj || dev_in_exclude_map(obj, ex_map, + exclude_ingress ? 
dev_rx->ifindex : 0)) + goto find_next; + + dev = obj->dev; + + if (!dev->netdev_ops->ndo_xdp_xmit) + goto find_next; + + err = xdp_ok_fwd_dev(dev, xdp->data_end - xdp->data); + if (unlikely(err)) + goto find_next; + + if (!first) { + nxdpf = xdpf_clone(xdpf); + if (unlikely(!nxdpf)) + return -ENOMEM; + + bq_enqueue(dev, nxdpf, dev_rx); + } else { + bq_enqueue(dev, xdpf, dev_rx); + first = false; + } + +find_next: + err = devmap_get_next_key(map, &key, &next_key); + if (err) + break; + key = next_key; + } + + /* didn't find anywhere to forward to, free buf */ + if (first) + xdp_return_frame_rx_napi(xdpf); + + return 0; +} + int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, struct bpf_prog *xdp_prog) { diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d2e27dba4ac6..a5857953248d 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3946,6 +3946,7 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env, case BPF_MAP_TYPE_DEVMAP: case BPF_MAP_TYPE_DEVMAP_HASH: if (func_id != BPF_FUNC_redirect_map && + func_id != BPF_FUNC_redirect_map_multi && func_id != BPF_FUNC_map_lookup_elem) goto error; break; @@ -4038,6 +4039,11 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env, map->map_type != BPF_MAP_TYPE_XSKMAP) goto error; break; + case BPF_FUNC_redirect_map_multi: + if (map->map_type != BPF_MAP_TYPE_DEVMAP && + map->map_type != BPF_MAP_TYPE_DEVMAP_HASH) + goto error; + break; case BPF_FUNC_sk_redirect_map: case BPF_FUNC_msg_redirect_map: case BPF_FUNC_sock_map_update: diff --git a/net/core/filter.c b/net/core/filter.c index bd2853d23b50..f07eb1408f70 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -3473,12 +3473,17 @@ static const struct bpf_func_proto bpf_xdp_adjust_meta_proto = { }; static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd, - struct bpf_map *map, struct xdp_buff *xdp) + struct bpf_map *map, struct xdp_buff *xdp, + struct bpf_map *ex_map, bool exclude_ingress) { switch (map->map_type) { case BPF_MAP_TYPE_DEVMAP: case BPF_MAP_TYPE_DEVMAP_HASH: - return dev_map_enqueue(fwd, xdp, dev_rx); + if (fwd) + return dev_map_enqueue(fwd, xdp, dev_rx); + else + return dev_map_enqueue_multi(xdp, dev_rx, map, ex_map, + exclude_ingress); case BPF_MAP_TYPE_CPUMAP: return cpu_map_enqueue(fwd, xdp, dev_rx); case BPF_MAP_TYPE_XSKMAP: @@ -3534,6 +3539,8 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, struct bpf_prog *xdp_prog) { struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + bool exclude_ingress = !!(ri->flags & BPF_F_EXCLUDE_INGRESS); + struct bpf_map *ex_map = ri->ex_map; struct bpf_map *map = READ_ONCE(ri->map); u32 index = ri->tgt_index; void *fwd = ri->tgt_value; @@ -3541,6 +3548,7 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, ri->tgt_index = 0; ri->tgt_value = NULL; + ri->ex_map = NULL; WRITE_ONCE(ri->map, NULL); if (unlikely(!map)) { @@ -3552,7 +3560,7 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, err = dev_xdp_enqueue(fwd, xdp, dev); } else { - err = __bpf_tx_xdp_map(dev, fwd, map, xdp); + err = __bpf_tx_xdp_map(dev, fwd, map, xdp, ex_map, exclude_ingress); } if (unlikely(err)) @@ -3566,6 +3574,50 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, } EXPORT_SYMBOL_GPL(xdp_do_redirect); +static int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb, + struct bpf_prog *xdp_prog, + struct bpf_map *map, struct bpf_map *ex_map, + bool exclude_ingress) + +{ + struct 
bpf_dtab_netdev *dst; + struct sk_buff *nskb; + u32 key, next_key; + int err; + void *fwd; + + /* Get first key from forward map */ + map->ops->map_get_next_key(map, NULL, &key); + + for (;;) { + fwd = __xdp_map_lookup_elem(map, key); + if (fwd) { + dst = (struct bpf_dtab_netdev *)fwd; + if (dev_in_exclude_map(dst, ex_map, + exclude_ingress ? dev->ifindex : 0)) + goto find_next; + + nskb = skb_clone(skb, GFP_ATOMIC); + if (!nskb) + return -ENOMEM; + + err = dev_map_generic_redirect(dst, nskb, xdp_prog); + if (unlikely(err)) + return err; + } + +find_next: + err = map->ops->map_get_next_key(map, &key, &next_key); + if (err) + break; + + key = next_key; + } + + consume_skb(skb); + return 0; +} + static int xdp_do_generic_redirect_map(struct net_device *dev, struct sk_buff *skb, struct xdp_buff *xdp, @@ -3573,19 +3625,29 @@ static int xdp_do_generic_redirect_map(struct net_device *dev, struct bpf_map *map) { struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + bool exclude_ingress = !!(ri->flags & BPF_F_EXCLUDE_INGRESS); + struct bpf_map *ex_map = ri->ex_map; u32 index = ri->tgt_index; void *fwd = ri->tgt_value; int err = 0; ri->tgt_index = 0; ri->tgt_value = NULL; + ri->ex_map = NULL; WRITE_ONCE(ri->map, NULL); if (map->map_type == BPF_MAP_TYPE_DEVMAP || map->map_type == BPF_MAP_TYPE_DEVMAP_HASH) { - struct bpf_dtab_netdev *dst = fwd; + if (fwd) { + struct bpf_dtab_netdev *dst = fwd; + + err = dev_map_generic_redirect(dst, skb, xdp_prog); + } else { + /* Deal with multicast maps */ + err = dev_map_redirect_multi(dev, skb, xdp_prog, map, + ex_map, exclude_ingress); + } - err = dev_map_generic_redirect(dst, skb, xdp_prog); if (unlikely(err)) goto err; } else if (map->map_type == BPF_MAP_TYPE_XSKMAP) { @@ -3699,6 +3761,33 @@ static const struct bpf_func_proto bpf_xdp_redirect_map_proto = { .arg3_type = ARG_ANYTHING, }; +BPF_CALL_3(bpf_xdp_redirect_map_multi, struct bpf_map *, map, + struct bpf_map *, ex_map, u64, flags) +{ + struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + + if (unlikely(!map || flags > BPF_F_EXCLUDE_INGRESS)) + return XDP_ABORTED; + + ri->tgt_index = 0; + ri->tgt_value = NULL; + ri->flags = flags; + ri->ex_map = ex_map; + + WRITE_ONCE(ri->map, map); + + return XDP_REDIRECT; +} + +static const struct bpf_func_proto bpf_xdp_redirect_map_multi_proto = { + .func = bpf_xdp_redirect_map_multi, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_CONST_MAP_PTR, + .arg2_type = ARG_CONST_MAP_PTR, + .arg3_type = ARG_ANYTHING, +}; + static unsigned long bpf_skb_copy(void *dst_buff, const void *skb, unsigned long off, unsigned long len) { @@ -6363,6 +6452,8 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return &bpf_xdp_redirect_proto; case BPF_FUNC_redirect_map: return &bpf_xdp_redirect_map_proto; + case BPF_FUNC_redirect_map_multi: + return &bpf_xdp_redirect_map_multi_proto; case BPF_FUNC_xdp_adjust_tail: return &bpf_xdp_adjust_tail_proto; case BPF_FUNC_fib_lookup: diff --git a/net/core/xdp.c b/net/core/xdp.c index 90f44f382115..acdc63833b1f 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -475,3 +475,29 @@ void xdp_warn(const char *msg, const char *func, const int line) WARN(1, "XDP_WARN: %s(line:%d): %s\n", func, line, msg); }; EXPORT_SYMBOL_GPL(xdp_warn); + +struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf) +{ + unsigned int headroom, totalsize; + struct xdp_frame *nxdpf; + struct page *page; + void *addr; + + headroom = xdpf->headroom + sizeof(*xdpf); + totalsize = headroom + xdpf->len; + + if 
(unlikely(totalsize > PAGE_SIZE)) + return NULL; + page = dev_alloc_page(); + if (!page) + return NULL; + addr = page_to_virt(page); + + memcpy(addr, xdpf, totalsize); + + nxdpf = addr; + nxdpf->data = addr + headroom; + + return nxdpf; +} +EXPORT_SYMBOL_GPL(xdpf_clone); diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 97e1fd19ff58..000b0cf961ea 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -3157,6 +3157,20 @@ union bpf_attr { * **bpf_sk_cgroup_id**\ (). * Return * The id is returned or 0 in case the id could not be retrieved. + * + * int bpf_redirect_map_multi(struct bpf_map *map, struct bpf_map *ex_map, u64 flags) + * Description + * Redirect the packet to ALL the interfaces in *map*, but + * exclude the interfaces in *ex_map* (which may be NULL). + * + * Currently the *flags* only supports *BPF_F_EXCLUDE_INGRESS*, + * which additionally excludes the current ingress device. + * + * See also bpf_redirect_map(), which supports redirecting + * packet to a specific ifindex in the map. + * Return + * **XDP_REDIRECT** on success, or **XDP_ABORTED** on error. + * */ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -3288,7 +3302,8 @@ union bpf_attr { FN(seq_printf), \ FN(seq_write), \ FN(sk_cgroup_id), \ - FN(sk_ancestor_cgroup_id), + FN(sk_ancestor_cgroup_id), \ + FN(redirect_map_multi), /* integer value in 'imm' field of BPF_CALL instruction selects which helper * function eBPF program intends to call @@ -3417,6 +3432,11 @@ enum bpf_lwt_encap_mode { BPF_LWT_ENCAP_IP, }; +/* BPF_FUNC_redirect_map_multi flags. */ +enum { + BPF_F_EXCLUDE_INGRESS = (1ULL << 0), +}; + #define __bpf_md_ptr(type, name) \ union { \ type name; \
From patchwork Fri Apr 24 08:56:10 2020
X-Patchwork-Submitter: Hangbin Liu
X-Patchwork-Id: 220624
From: Hangbin Liu <liuhangbin@gmail.com>
To: bpf@vger.kernel.org
Cc: netdev@vger.kernel.org, Toke Høiland-Jørgensen, Jiri Benc,
    Jesper Dangaard Brouer, Eelco Chaudron, ast@kernel.org,
    Daniel Borkmann, Lorenzo Bianconi, Hangbin Liu
Subject: [RFC PATCHv2 bpf-next 2/2] sample/bpf: add xdp_redirect_map_multicast test
Date: Fri, 24 Apr 2020 16:56:10 +0800
Message-Id: <20200424085610.10047-3-liuhangbin@gmail.com>
In-Reply-To: <20200424085610.10047-1-liuhangbin@gmail.com>
References: <20200415085437.23028-1-liuhangbin@gmail.com> <20200424085610.10047-1-liuhangbin@gmail.com>

This is a sample for XDP multicast. The sample uses 3 forward groups and 1
exclude group. Each interface's packets are redirected to all the interfaces
in its forward group, excluding the interfaces in the exclude map.

For more testing details, please see the test description in
xdp_redirect_map_multi.sh.
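As a quick orientation before the diff: this is roughly how the loader wires a
device into one of the multicast groups. It is only a condensed sketch of what
xdp_redirect_map_multi_user.c below does (error handling trimmed, map name
passed in by the caller).

#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Add one interface to a DEVMAP/DEVMAP_HASH group: the ifindex is used as
 * both key and value, so the map entry points back at the device itself.
 */
static int add_iface_to_group(struct bpf_object *obj, const char *map_name,
			      int ifindex)
{
	int map_fd = bpf_object__find_map_fd_by_name(obj, map_name);

	if (map_fd < 0)
		return map_fd;

	return bpf_map_update_elem(map_fd, &ifindex, &ifindex, 0);
}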
Signed-off-by: Hangbin Liu --- samples/bpf/Makefile | 3 + samples/bpf/xdp_redirect_map_multi.sh | 124 ++++++++++++++++ samples/bpf/xdp_redirect_map_multi_kern.c | 100 +++++++++++++ samples/bpf/xdp_redirect_map_multi_user.c | 170 ++++++++++++++++++++++ 4 files changed, 397 insertions(+) create mode 100755 samples/bpf/xdp_redirect_map_multi.sh create mode 100644 samples/bpf/xdp_redirect_map_multi_kern.c create mode 100644 samples/bpf/xdp_redirect_map_multi_user.c diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile index 424f6fe7ce38..eb7306efe85e 100644 --- a/samples/bpf/Makefile +++ b/samples/bpf/Makefile @@ -41,6 +41,7 @@ tprogs-y += test_map_in_map tprogs-y += per_socket_stats_example tprogs-y += xdp_redirect tprogs-y += xdp_redirect_map +tprogs-y += xdp_redirect_map_multi tprogs-y += xdp_redirect_cpu tprogs-y += xdp_monitor tprogs-y += xdp_rxq_info @@ -97,6 +98,7 @@ test_map_in_map-objs := bpf_load.o test_map_in_map_user.o per_socket_stats_example-objs := cookie_uid_helper_example.o xdp_redirect-objs := xdp_redirect_user.o xdp_redirect_map-objs := xdp_redirect_map_user.o +xdp_redirect_map_multi-objs := xdp_redirect_map_multi_user.o xdp_redirect_cpu-objs := bpf_load.o xdp_redirect_cpu_user.o xdp_monitor-objs := bpf_load.o xdp_monitor_user.o xdp_rxq_info-objs := xdp_rxq_info_user.o @@ -156,6 +158,7 @@ always-y += tcp_tos_reflect_kern.o always-y += tcp_dumpstats_kern.o always-y += xdp_redirect_kern.o always-y += xdp_redirect_map_kern.o +always-y += xdp_redirect_map_multi_kern.o always-y += xdp_redirect_cpu_kern.o always-y += xdp_monitor_kern.o always-y += xdp_rxq_info_kern.o diff --git a/samples/bpf/xdp_redirect_map_multi.sh b/samples/bpf/xdp_redirect_map_multi.sh new file mode 100755 index 000000000000..1999f261a1e8 --- /dev/null +++ b/samples/bpf/xdp_redirect_map_multi.sh @@ -0,0 +1,124 @@ +#!/bin/bash +# Test topology: +# - - - - - - - - - - - - - - - - - - - - - - - - - +# | veth1 veth2 veth3 veth4 | init net +# - -| - - - - - - | - - - - - - | - - - - - - | - - +# --------- --------- --------- --------- +# | veth0 | | veth0 | | veth0 | | veth0 | +# --------- --------- --------- --------- +# ns1 ns2 ns3 ns4 +# +# Forward multicast groups: +# Forward group all has interfaces: veth1, veth2, veth3, veth4 (All traffic except IPv4, IPv6) +# Forward group v4 has interfaces: veth1, veth3, veth4 (For IPv4 traffic only) +# Forward group v6 has interfaces: veth2, veth3, veth4 (For IPv6 traffic only) +# Exclude Groups: +# Exclude group: veth3 (assume ns3 is in black list) +# +# Test modules: +# XDP modes: generic, native +# map types: group v4 use DEVMAP, others use DEVMAP_HASH +# +# Test cases: +# ARP(we didn't exclude ns3 in kern.c for ARP): +# ns1 -> gw: ns2, ns3, ns4 should receive the arp request +# IPv4: +# ns1 -> ns2 (fail), ns1 -> ns3 (fail), ns1 -> ns4 (pass) +# IPv6 +# ns2 -> ns1 (fail), ns2 -> ns3 (fail), ns2 -> ns4 (pass) +# + + +# netns numbers +NUM=4 +IFACES="" +DRV_MODE="drv generic" + +test_pass() +{ + echo "Pass: $@" +} + +test_fail() +{ + echo "fail: $@" +} + +clean_up() +{ + for i in $(seq $NUM); do + ip netns del ns$i + done +} + +setup_ns() +{ + local mode=$1 + + for i in $(seq $NUM); do + ip netns add ns$i + ip link add veth0 type veth peer name veth$i + ip link set veth0 netns ns$i + ip netns exec ns$i ip link set veth0 up + ip link set veth$i up + + ip netns exec ns$i ip addr add 192.0.2.$i/24 dev veth0 + ip netns exec ns$i ip addr add 2001:db8::$i/24 dev veth0 + ip netns exec ns$i ip link set veth0 xdp$mode obj \ + xdp_redirect_map_multi_kern.o sec xdp_redirect_dummy 
&> /dev/null || \ + { test_fail "Unable to load dummy xdp" && exit 1; } + IFACES="$IFACES veth$i" + done +} + +do_tests() +{ + local drv_mode=$1 + local drv_p + + [ ${drv_mode} == "drv" ] && drv_p="-N" || drv_p="-S" + + ./xdp_redirect_map_multi $drv_p $IFACES &> xdp_${drv_mode}.log & + xdp_pid=$! + sleep 10 + + # arp test + ip netns exec ns2 tcpdump -i veth0 -nn -l -e &> arp_ns1-2_${drv_mode}.log & + ip netns exec ns3 tcpdump -i veth0 -nn -l -e &> arp_ns1-3_${drv_mode}.log & + ip netns exec ns4 tcpdump -i veth0 -nn -l -e &> arp_ns1-4_${drv_mode}.log & + ip netns exec ns1 ping 192.0.2.254 -c 4 &> /dev/null + sleep 2 + pkill -9 tcpdump + grep -q "Request who-has 192.0.2.254 tell 192.0.2.1" arp_ns1-2_${drv_mode}.log && \ + test_pass "$drv_mode arp ns1-2" || test_fail "$drv_mode arp ns1-2" + grep -q "Request who-has 192.0.2.254 tell 192.0.2.1" arp_ns1-3_${drv_mode}.log && \ + test_pass "$drv_mode arp ns1-3" || test_fail "$drv_mode arp ns1-3" + grep -q "Request who-has 192.0.2.254 tell 192.0.2.1" arp_ns1-4_${drv_mode}.log && \ + test_pass "$drv_mode arp ns1-4" || test_fail "$drv_mode arp ns1-4" + + # ping test + ip netns exec ns1 ping 192.0.2.2 -c 4 &> /dev/null && \ + test_fail "$drv_mode ping ns1-2" || test_pass "$drv_mode ping ns1-2" + ip netns exec ns1 ping 192.0.2.3 -c 4 &> /dev/null && \ + test_fail "$drv_mode ping ns1-3" || test_pass "$drv_mode ping ns1-3" + ip netns exec ns1 ping 192.0.2.4 -c 4 &> /dev/null && \ + test_pass "$drv_mode ping ns1-4" || test_fail "$drv_mode ping ns1-4" + + # ping6 test + ip netns exec ns2 ping6 2001:db8::1 -c 4 &> /dev/null && \ + test_fail "$drv_mode ping6 ns2-1" || test_pass "$drv_mode ping6 ns2-1" + ip netns exec ns2 ping6 2001:db8::3 -c 4 &> /dev/null && \ + test_fail "$drv_mode ping6 ns2-3" || test_pass "$drv_mode ping6 ns2-3" + ip netns exec ns2 ping6 2001:db8::4 -c 4 &> /dev/null && \ + test_pass "$drv_mode ping6 ns2-4" || test_fail "$drv_mode ping6 ns2-4" + + kill $xdp_pid +} + +for mode in ${DRV_MODE}; do + sleep 2 + setup_ns $mode + do_tests $mode + sleep 20 + clean_up +done diff --git a/samples/bpf/xdp_redirect_map_multi_kern.c b/samples/bpf/xdp_redirect_map_multi_kern.c new file mode 100644 index 000000000000..c98985683ba2 --- /dev/null +++ b/samples/bpf/xdp_redirect_map_multi_kern.c @@ -0,0 +1,100 @@ +/* + * modify it under the terms of version 2 of the GNU General Public + * License as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#define KBUILD_MODNAME "foo" +#include +#include +#include +#include +#include +#include +#include + +/* In this sample we will use 3 forward maps and 1 exclude map to + * show how to use the helper bpf_redirect_map_multi(). + * + * In real world, there may have multi forward maps and exclude map. You can + * use map-in-map type to store the forward and exlude maps. e.g. 
+ * forward_map_in_map[group_a_index] = forward_group_a_map + * forward_map_in_map[group_b_index] = forward_group_b_map + * exclude_map_in_map[iface_1_index] = iface_1_exclude_map + * exclude_map_in_map[iface_2_index] = iface_2_exclude_map + * Then store the forward group indexes based on IP/MAC policy in another + * hash map, e.g.: + * mcast_route_map[hash(subnet_a)] = group_a_index + * mcast_route_map[hash(subnet_b)] = group_b_index + * + * You can init the maps in user.c, and find the forward group index from + * mcast_route_map bye key hash(subnet) in kern.c, Then you could find + * the forward group by the group index. You can also get the exclude map + * simply by iface index in exclude_map_in_map. + */ +struct bpf_map_def SEC("maps") forward_map_v4 = { + .type = BPF_MAP_TYPE_DEVMAP, + .key_size = sizeof(u32), + .value_size = sizeof(int), + .max_entries = 128, +}; + +struct bpf_map_def SEC("maps") forward_map_v6 = { + .type = BPF_MAP_TYPE_DEVMAP_HASH, + .key_size = sizeof(u32), + .value_size = sizeof(int), + .max_entries = 128, +}; + +struct bpf_map_def SEC("maps") forward_map_all = { + .type = BPF_MAP_TYPE_DEVMAP_HASH, + .key_size = sizeof(u32), + .value_size = sizeof(int), + .max_entries = 128, +}; + +struct bpf_map_def SEC("maps") exclude_map = { + .type = BPF_MAP_TYPE_DEVMAP_HASH, + .key_size = sizeof(u32), + .value_size = sizeof(int), + .max_entries = 128, +}; + +SEC("xdp_redirect_map_multi") +int xdp_redirect_map_multi_prog(struct xdp_md *ctx) +{ + u32 key, mcast_group_id, exclude_group_id; + void *data_end = (void *)(long)ctx->data_end; + void *data = (void *)(long)ctx->data; + struct ethhdr *eth = data; + int *inmap_id; + u16 h_proto; + u64 nh_off; + + nh_off = sizeof(*eth); + if (data + nh_off > data_end) + return XDP_DROP; + + h_proto = eth->h_proto; + + if (h_proto == htons(ETH_P_IP)) + return bpf_redirect_map_multi(&forward_map_v4, &exclude_map, + BPF_F_EXCLUDE_INGRESS); + else if (h_proto == htons(ETH_P_IPV6)) + return bpf_redirect_map_multi(&forward_map_v6, &exclude_map, + BPF_F_EXCLUDE_INGRESS); + else + return bpf_redirect_map_multi(&forward_map_all, NULL, + BPF_F_EXCLUDE_INGRESS); +} + +SEC("xdp_redirect_dummy") +int xdp_redirect_dummy_prog(struct xdp_md *ctx) +{ + return XDP_PASS; +} + +char _license[] SEC("license") = "GPL"; diff --git a/samples/bpf/xdp_redirect_map_multi_user.c b/samples/bpf/xdp_redirect_map_multi_user.c new file mode 100644 index 000000000000..2fcd15322201 --- /dev/null +++ b/samples/bpf/xdp_redirect_map_multi_user.c @@ -0,0 +1,170 @@ +/* SPDX-License-Identifier: GPL-2.0-only + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#define MAX_IFACE_NUM 32 + +static int ifaces[MAX_IFACE_NUM] = {}; +static __u32 xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST; + +static void int_exit(int sig) +{ + __u32 prog_id = 0; + int i; + + for (i = 0; ifaces[i] > 0; i++) { + if (bpf_get_link_xdp_id(ifaces[i], &prog_id, xdp_flags)) { + printf("bpf_get_link_xdp_id failed\n"); + exit(1); + } + if (prog_id) + bpf_set_link_xdp_fd(ifaces[i], -1, xdp_flags); + } + + exit(0); +} + +static void usage(const char *prog) +{ + fprintf(stderr, + "usage: %s [OPTS] ... 
\n" + "OPTS:\n" + " -S use skb-mode\n" + " -N enforce native mode\n" + " -F force loading prog\n", + prog); +} + +int main(int argc, char **argv) +{ + int prog_fd, group_all, group_v4, group_v6, exclude; + struct bpf_prog_load_attr prog_load_attr = { + .prog_type = BPF_PROG_TYPE_XDP, + }; + int i, ret, opt, ifindex; + char ifname[IF_NAMESIZE]; + struct bpf_object *obj; + char filename[256]; + + while ((opt = getopt(argc, argv, "SNF")) != -1) { + switch (opt) { + case 'S': + xdp_flags |= XDP_FLAGS_SKB_MODE; + break; + case 'N': + /* default, set below */ + break; + case 'F': + xdp_flags &= ~XDP_FLAGS_UPDATE_IF_NOEXIST; + break; + default: + usage(basename(argv[0])); + return 1; + } + } + + if (!(xdp_flags & XDP_FLAGS_SKB_MODE)) + xdp_flags |= XDP_FLAGS_DRV_MODE; + + if (optind == argc) { + printf("usage: %s ...\n", argv[0]); + return 1; + } + + printf("Get interfaces"); + for (i = 0; i < MAX_IFACE_NUM && argv[optind + i]; i ++) { + ifaces[i] = if_nametoindex(argv[optind + i]); + if (!ifaces[i]) + ifaces[i] = strtoul(argv[optind + i], NULL, 0); + if (!if_indextoname(ifaces[i], ifname)) { + perror("Invalid interface name or i"); + return 1; + } + printf(" %d", ifaces[i]); + } + printf("\n"); + + snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); + prog_load_attr.file = filename; + + if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd)) + return 1; + + group_all = bpf_object__find_map_fd_by_name(obj, "forward_map_all"); + group_v4 = bpf_object__find_map_fd_by_name(obj, "forward_map_v4"); + group_v6 = bpf_object__find_map_fd_by_name(obj, "forward_map_v6"); + exclude = bpf_object__find_map_fd_by_name(obj, "exclude_map"); + + if (group_all < 0 || group_v4 < 0 || group_v6 < 0 || exclude < 0) { + printf("bpf_object__find_map_fd_by_name failed\n"); + return 1; + } + + signal(SIGINT, int_exit); + signal(SIGTERM, int_exit); + + /* Init forward multicast groups and exclude group */ + for (i = 0; ifaces[i] > 0; i++) { + ifindex = ifaces[i]; + + /* Add all the interfaces to group all */ + ret = bpf_map_update_elem(group_all, &ifindex, &ifindex, 0); + if (ret) { + perror("bpf_map_update_elem"); + goto err_out; + } + + /* For testing: remove the 2nd interfaces from group v4 */ + if (i != 1) { + ret = bpf_map_update_elem(group_v4, &ifindex, &ifindex, 0); + if (ret) { + perror("bpf_map_update_elem"); + goto err_out; + } + } + + /* For testing: remove the 1st interfaces from group v6 */ + if (i != 0) { + ret = bpf_map_update_elem(group_v6, &ifindex, &ifindex, 0); + if (ret) { + perror("bpf_map_update_elem"); + goto err_out; + } + } + + /* For testing: add the 3rd interfaces to exclude map */ + if (i == 2) { + ret = bpf_map_update_elem(exclude, &ifindex, &ifindex, 0); + if (ret) { + perror("bpf_map_update_elem"); + goto err_out; + } + } + + /* bind prog_fd to each interface */ + ret = bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags); + if (ret) { + printf("Set xdp fd failed on %d\n", ifindex); + goto err_out; + } + + } + + sleep(600); + return 0; + +err_out: + return 1; +}