From patchwork Fri Sep 4 13:53:27 2020
X-Patchwork-Id: 261513
From: Björn Töpel <bjorn.topel@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, davem@davemloft.net, kuba@kernel.org, hawk@kernel.org, john.fastabend@gmail.com, intel-wired-lan@lists.osuosl.org
Subject: [PATCH bpf-next 2/6] xdp: introduce xdp_do_redirect_ext() function
Date: Fri, 4 Sep 2020 15:53:27 +0200
Message-Id: <20200904135332.60259-3-bjorn.topel@gmail.com>
In-Reply-To: <20200904135332.60259-1-bjorn.topel@gmail.com>
References: <20200904135332.60259-1-bjorn.topel@gmail.com>

Introduce xdp_do_redirect_ext(), which returns additional information to
the caller. For now, that is the type of map the packet was redirected
to. This enables the driver to have more fine-grained control, e.g. if
the redirect fails due to a full AF_XDP Rx queue (error code ENOBUFS
and the map is an XSKMAP), a zero-copy enabled driver should yield to
userland as soon as possible.

Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
---
 include/linux/filter.h |  2 ++
 net/core/filter.c      | 16 ++++++++++++++--
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 995625950cc1..0060c2c8abc3 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -942,6 +942,8 @@ static inline int xdp_ok_fwd_dev(const struct net_device *fwd,
  */
 int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
 			    struct xdp_buff *xdp, struct bpf_prog *prog);
+int xdp_do_redirect_ext(struct net_device *dev, struct xdp_buff *xdp,
+			struct bpf_prog *xdp_prog, enum bpf_map_type *map_type);
 int xdp_do_redirect(struct net_device *dev,
 		    struct xdp_buff *xdp,
 		    struct bpf_prog *prog);
diff --git a/net/core/filter.c b/net/core/filter.c
index 47eef9a0be6a..ce6098210a23 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3596,8 +3596,8 @@ void bpf_clear_redirect_map(struct bpf_map *map)
 	}
 }
 
-int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
-		    struct bpf_prog *xdp_prog)
+int xdp_do_redirect_ext(struct net_device *dev, struct xdp_buff *xdp,
+			struct bpf_prog *xdp_prog, enum bpf_map_type *map_type)
 {
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
 	struct bpf_map *map = READ_ONCE(ri->map);
@@ -3609,6 +3609,8 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
 	ri->tgt_value = NULL;
 	WRITE_ONCE(ri->map, NULL);
 
+	*map_type = BPF_MAP_TYPE_UNSPEC;
+
 	if (unlikely(!map)) {
 		fwd = dev_get_by_index_rcu(dev_net(dev), index);
 		if (unlikely(!fwd)) {
@@ -3618,6 +3620,7 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
 		err = dev_xdp_enqueue(fwd, xdp, dev);
 	} else {
+		*map_type = map->map_type;
 		err = __bpf_tx_xdp_map(dev, fwd, map, xdp);
 	}
@@ -3630,6 +3633,15 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
 	_trace_xdp_redirect_map_err(dev, xdp_prog, fwd, map, index, err);
 	return err;
 }
+EXPORT_SYMBOL_GPL(xdp_do_redirect_ext);
+
+int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
+		    struct bpf_prog *xdp_prog)
+{
+	enum bpf_map_type dummy;
+
+	return xdp_do_redirect_ext(dev, xdp, xdp_prog, &dummy);
+}
 EXPORT_SYMBOL_GPL(xdp_do_redirect);
 
 static int xdp_do_generic_redirect_map(struct net_device *dev,
From patchwork Fri Sep 4 13:53:28 2020
X-Patchwork-Id: 261514
From: Björn Töpel <bjorn.topel@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, davem@davemloft.net, kuba@kernel.org, hawk@kernel.org, john.fastabend@gmail.com, intel-wired-lan@lists.osuosl.org
Subject: [PATCH bpf-next 3/6] xsk: introduce xsk_do_redirect_rx_full() helper
Date: Fri, 4 Sep 2020 15:53:28 +0200
Message-Id: <20200904135332.60259-4-bjorn.topel@gmail.com>
In-Reply-To: <20200904135332.60259-1-bjorn.topel@gmail.com>
References: <20200904135332.60259-1-bjorn.topel@gmail.com>

The xsk_do_redirect_rx_full() helper can be used to check whether a
failure of xdp_do_redirect() was due to the AF_XDP socket's Rx ring
being full.
Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
---
 include/net/xdp_sock_drv.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index 5b1ee8a9976d..34c58b5fbc28 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -116,6 +116,11 @@ static inline void xsk_buff_raw_dma_sync_for_device(struct xsk_buff_pool *pool,
 	xp_dma_sync_for_device(pool, dma, size);
 }
 
+static inline bool xsk_do_redirect_rx_full(int err, enum bpf_map_type map_type)
+{
+	return err == -ENOBUFS && map_type == BPF_MAP_TYPE_XSKMAP;
+}
+
 #else
 
 static inline void xsk_tx_completed(struct xsk_buff_pool *pool, u32 nb_entries)
@@ -235,6 +240,10 @@ static inline void xsk_buff_raw_dma_sync_for_device(struct xsk_buff_pool *pool,
 {
 }
 
+static inline bool xsk_do_redirect_rx_full(int err, enum bpf_map_type map_type)
+{
+	return false;
+}
 #endif /* CONFIG_XDP_SOCKETS */
 
 #endif /* _LINUX_XDP_SOCK_DRV_H */
From patchwork Fri Sep 4 13:53:30 2020
X-Patchwork-Id: 261516
From: Björn Töpel <bjorn.topel@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, davem@davemloft.net, kuba@kernel.org, hawk@kernel.org, john.fastabend@gmail.com, intel-wired-lan@lists.osuosl.org
Subject: [PATCH bpf-next 5/6] ice, xsk: finish napi loop if AF_XDP Rx queue is full
Date: Fri, 4 Sep 2020 15:53:30 +0200
Message-Id: <20200904135332.60259-6-bjorn.topel@gmail.com>
In-Reply-To: <20200904135332.60259-1-bjorn.topel@gmail.com>
References: <20200904135332.60259-1-bjorn.topel@gmail.com>

Make the AF_XDP zero-copy path aware when a redirect failure was due to
a full Rx queue. If so, exit the napi loop as soon as possible (exit
the softirq processing), so that the userspace AF_XDP process can
hopefully empty the Rx queue. This mainly helps the "one core
scenario", where the userland process and the Rx softirq processing
run on the same core.

Note that the early exit can only be performed if the "need wakeup"
feature is enabled; otherwise there is no notification mechanism
available from the kernel side.

This requires that the driver start using the newly introduced
xdp_do_redirect_ext() and xsk_do_redirect_rx_full() functions.
Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 797886524054..f698d0199b0a 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -502,13 +502,15 @@ ice_construct_skb_zc(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
  * ice_run_xdp_zc - Executes an XDP program in zero-copy path
  * @rx_ring: Rx ring
  * @xdp: xdp_buff used as input to the XDP program
+ * @early_exit: true means that the napi loop should exit early
  *
  * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
  */
 static int
-ice_run_xdp_zc(struct ice_ring *rx_ring, struct xdp_buff *xdp)
+ice_run_xdp_zc(struct ice_ring *rx_ring, struct xdp_buff *xdp, bool *early_exit)
 {
 	int err, result = ICE_XDP_PASS;
+	enum bpf_map_type map_type;
 	struct bpf_prog *xdp_prog;
 	struct ice_ring *xdp_ring;
 	u32 act;
@@ -529,8 +531,13 @@ ice_run_xdp_zc(struct ice_ring *rx_ring, struct xdp_buff *xdp)
 		result = ice_xmit_xdp_buff(xdp, xdp_ring);
 		break;
 	case XDP_REDIRECT:
-		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
-		result = !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;
+		err = xdp_do_redirect_ext(rx_ring->netdev, xdp, xdp_prog, &map_type);
+		if (err) {
+			*early_exit = xsk_do_redirect_rx_full(err, map_type);
+			result = ICE_XDP_CONSUMED;
+		} else {
+			result = ICE_XDP_REDIR;
+		}
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -558,8 +565,8 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
 {
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);
+	bool early_exit = false, failure = false;
 	unsigned int xdp_xmit = 0;
-	bool failure = false;
 
 	while (likely(total_rx_packets < (unsigned int)budget)) {
 		union ice_32b_rx_flex_desc *rx_desc;
@@ -597,7 +604,7 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
 		rx_buf->xdp->data_end = rx_buf->xdp->data + size;
 		xsk_buff_dma_sync_for_cpu(rx_buf->xdp, rx_ring->xsk_pool);
 
-		xdp_res = ice_run_xdp_zc(rx_ring, rx_buf->xdp);
+		xdp_res = ice_run_xdp_zc(rx_ring, rx_buf->xdp, &early_exit);
 		if (xdp_res) {
 			if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))
 				xdp_xmit |= xdp_res;
@@ -610,6 +617,8 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
 			cleaned_count++;
 			ice_bump_ntc(rx_ring);
+			if (early_exit)
+				break;
 			continue;
 		}
@@ -646,12 +655,12 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
 	ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
 
 	if (xsk_uses_need_wakeup(rx_ring->xsk_pool)) {
-		if (failure || rx_ring->next_to_clean == rx_ring->next_to_use)
+		if (early_exit || failure || rx_ring->next_to_clean == rx_ring->next_to_use)
 			xsk_set_rx_need_wakeup(rx_ring->xsk_pool);
 		else
 			xsk_clear_rx_need_wakeup(rx_ring->xsk_pool);
 
-		return (int)total_rx_packets;
+		return early_exit ? 0 : (int)total_rx_packets;
 	}
 
 	return failure ? budget : (int)total_rx_packets;
From patchwork Fri Sep 4 13:53:31 2020
X-Patchwork-Id: 261515
From: Björn Töpel <bjorn.topel@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, davem@davemloft.net, kuba@kernel.org, hawk@kernel.org, john.fastabend@gmail.com, intel-wired-lan@lists.osuosl.org
Subject: [PATCH bpf-next 6/6] ixgbe, xsk: finish napi loop if AF_XDP Rx queue is full
Date: Fri, 4 Sep 2020 15:53:31 +0200
Message-Id: <20200904135332.60259-7-bjorn.topel@gmail.com>
In-Reply-To: <20200904135332.60259-1-bjorn.topel@gmail.com>
References: <20200904135332.60259-1-bjorn.topel@gmail.com>

Make the AF_XDP zero-copy path aware when a redirect failure was due to
a full Rx queue. If so, exit the napi loop as soon as possible (exit
the softirq processing), so that the userspace AF_XDP process can
hopefully empty the Rx queue. This mainly helps the "one core
scenario", where the userland process and the Rx softirq processing
run on the same core.

Note that the early exit can only be performed if the "need wakeup"
feature is enabled; otherwise there is no notification mechanism
available from the kernel side.

This requires that the driver start using the newly introduced
xdp_do_redirect_ext() and xsk_do_redirect_rx_full() functions.

Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 23 ++++++++++++++------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
index 3771857cf887..a4aebfd986b3 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
@@ -93,9 +93,11 @@ int ixgbe_xsk_pool_setup(struct ixgbe_adapter *adapter,
 
 static int ixgbe_run_xdp_zc(struct ixgbe_adapter *adapter,
 			    struct ixgbe_ring *rx_ring,
-			    struct xdp_buff *xdp)
+			    struct xdp_buff *xdp,
+			    bool *early_exit)
 {
 	int err, result = IXGBE_XDP_PASS;
+	enum bpf_map_type map_type;
 	struct bpf_prog *xdp_prog;
 	struct xdp_frame *xdpf;
 	u32 act;
@@ -116,8 +118,13 @@ static int ixgbe_run_xdp_zc(struct ixgbe_adapter *adapter,
 		result = ixgbe_xmit_xdp_ring(adapter, xdpf);
 		break;
 	case XDP_REDIRECT:
-		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
-		result = !err ? IXGBE_XDP_REDIR : IXGBE_XDP_CONSUMED;
+		err = xdp_do_redirect_ext(rx_ring->netdev, xdp, xdp_prog, &map_type);
+		if (err) {
+			*early_exit = xsk_do_redirect_rx_full(err, map_type);
+			result = IXGBE_XDP_CONSUMED;
+		} else {
+			result = IXGBE_XDP_REDIR;
+		}
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -235,8 +242,8 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	struct ixgbe_adapter *adapter = q_vector->adapter;
 	u16 cleaned_count = ixgbe_desc_unused(rx_ring);
+	bool early_exit = false, failure = false;
 	unsigned int xdp_res, xdp_xmit = 0;
-	bool failure = false;
 	struct sk_buff *skb;
 
 	while (likely(total_rx_packets < budget)) {
@@ -288,7 +295,7 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
 		bi->xdp->data_end = bi->xdp->data + size;
 		xsk_buff_dma_sync_for_cpu(bi->xdp, rx_ring->xsk_pool);
 
-		xdp_res = ixgbe_run_xdp_zc(adapter, rx_ring, bi->xdp);
+		xdp_res = ixgbe_run_xdp_zc(adapter, rx_ring, bi->xdp, &early_exit);
 		if (xdp_res) {
 			if (xdp_res & (IXGBE_XDP_TX | IXGBE_XDP_REDIR))
 				xdp_xmit |= xdp_res;
@@ -302,6 +309,8 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
 			cleaned_count++;
 			ixgbe_inc_ntc(rx_ring);
+			if (early_exit)
+				break;
 			continue;
 		}
@@ -346,12 +355,12 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
 	q_vector->rx.total_bytes += total_rx_bytes;
 
 	if (xsk_uses_need_wakeup(rx_ring->xsk_pool)) {
-		if (failure || rx_ring->next_to_clean == rx_ring->next_to_use)
+		if (early_exit || failure || rx_ring->next_to_clean == rx_ring->next_to_use)
			xsk_set_rx_need_wakeup(rx_ring->xsk_pool);
 		else
 			xsk_clear_rx_need_wakeup(rx_ring->xsk_pool);
 
-		return (int)total_rx_packets;
+		return early_exit ? 0 : (int)total_rx_packets;
 	}
 
 	return failure ? budget : (int)total_rx_packets;
 }