From patchwork Mon Feb 22 16:25:43 2021
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 385809
From: "Jason A. Donenfeld"
To: netdev@vger.kernel.org, davem@davemloft.net
Subject: [PATCH net 1/7] wireguard: avoid double unlikely() notation when using IS_ERR()
Date: Mon, 22 Feb 2021 17:25:43 +0100
Message-Id: <20210222162549.3252778-2-Jason@zx2c4.com>
In-Reply-To: <20210222162549.3252778-1-Jason@zx2c4.com>
References: <20210222162549.3252778-1-Jason@zx2c4.com>
X-Mailing-List: netdev@vger.kernel.org

From: Antonio Quartulli

The definition of IS_ERR() already applies the unlikely() notation when checking the error status of the passed pointer. For this reason there is no need to have the same notation outside of IS_ERR() itself.

Clean up code by removing redundant notation.

Signed-off-by: Antonio Quartulli
Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld
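For reference, the relevant definitions look roughly like this - a compilable paraphrase of include/linux/err.h with the kernel's __force/__must_check annotations dropped - which is why an extra unlikely() around IS_ERR() adds nothing:

#include <stdbool.h>

#define unlikely(x)     __builtin_expect(!!(x), 0)
#define MAX_ERRNO       4095

/* The branch hint already lives inside IS_ERR_VALUE(). */
#define IS_ERR_VALUE(x) unlikely((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)

static inline bool IS_ERR(const void *ptr)
{
    return IS_ERR_VALUE((unsigned long)ptr);
}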
---
 drivers/net/wireguard/device.c | 2 +-
 drivers/net/wireguard/socket.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

--
2.30.1

diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
index a3ed49cd95c3..cd51a2afa28e 100644
--- a/drivers/net/wireguard/device.c
+++ b/drivers/net/wireguard/device.c
@@ -157,7 +157,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
     } else {
         struct sk_buff *segs = skb_gso_segment(skb, 0);

-        if (unlikely(IS_ERR(segs))) {
+        if (IS_ERR(segs)) {
             ret = PTR_ERR(segs);
             goto err_peer;
         }
diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
index 410b318e57fb..41430c0e465a 100644
--- a/drivers/net/wireguard/socket.c
+++ b/drivers/net/wireguard/socket.c
@@ -71,7 +71,7 @@ static int send4(struct wg_device *wg, struct sk_buff *skb,
             ip_rt_put(rt);
             rt = ip_route_output_flow(sock_net(sock), &fl, sock);
         }
-        if (unlikely(IS_ERR(rt))) {
+        if (IS_ERR(rt)) {
             ret = PTR_ERR(rt);
             net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n",
                                 wg->dev->name, &endpoint->addr, ret);
@@ -138,7 +138,7 @@ static int send6(struct wg_device *wg, struct sk_buff *skb,
         }
         dst = ipv6_stub->ipv6_dst_lookup_flow(sock_net(sock), sock, &fl,
                                               NULL);
-        if (unlikely(IS_ERR(dst))) {
+        if (IS_ERR(dst)) {
             ret = PTR_ERR(dst);
             net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n",
                                 wg->dev->name, &endpoint->addr, ret);

From patchwork Mon Feb 22 16:25:44 2021
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 385810
From: "Jason A. Donenfeld"
To: netdev@vger.kernel.org, davem@davemloft.net
Subject: [PATCH net 2/7] wireguard: socket: remove bogus __be32 annotation
Date: Mon, 22 Feb 2021 17:25:44 +0100
Message-Id: <20210222162549.3252778-3-Jason@zx2c4.com>
In-Reply-To: <20210222162549.3252778-1-Jason@zx2c4.com>
References: <20210222162549.3252778-1-Jason@zx2c4.com>
X-Mailing-List: netdev@vger.kernel.org

From: Jann Horn

The endpoint->src_if4 has nothing to do with fixed-endian numbers; remove the bogus annotation.

This was introduced in https://git.zx2c4.com/wireguard-monolithic-historical/commit?id=14e7d0a499a676ec55176c0de2f9fcbd34074a82 in the historical WireGuard repo, because the old code used to zero-initialize multiple members as follows:

    endpoint->src4.s_addr = endpoint->src_if4 = fl.saddr = 0;

Because fl.saddr is fixed-endian and an assignment returns a value with the type of its left operand, sparse detected an assignment between values of different endianness. Since then, the assignment has been split up into separate statements; only the cast survived.

Signed-off-by: Jann Horn
Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld
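A minimal stand-alone sketch of the sparse complaint being described; the struct and field names below are simplified stand-ins, not the real kernel types:

#include <stdint.h>

typedef uint32_t be32;              /* sparse would tag this __be32 */

struct flowi4_ex   { be32 saddr; };
struct endpoint_ex {
    struct { be32 s_addr; } src4;
    int src_if4;                    /* plain host-endian ifindex */
};

static void clear_src(struct endpoint_ex *endpoint, struct flowi4_ex *fl)
{
    /* Old one-liner: endpoint->src4.s_addr = endpoint->src_if4 = fl->saddr = 0;
     * An assignment expression has the type of its left operand, so the __be32
     * result of (fl->saddr = 0) flowed into the host-endian src_if4, and sparse
     * flagged the endianness mix - hence the historical __force cast. */

    /* Current code: three independent statements, nothing to warn about. */
    endpoint->src4.s_addr = 0;
    endpoint->src_if4 = 0;
    fl->saddr = 0;
}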
---
 drivers/net/wireguard/socket.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--
2.30.1

diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
index 41430c0e465a..d9ad850daa79 100644
--- a/drivers/net/wireguard/socket.c
+++ b/drivers/net/wireguard/socket.c
@@ -53,7 +53,7 @@ static int send4(struct wg_device *wg, struct sk_buff *skb,
         if (unlikely(!inet_confirm_addr(sock_net(sock), NULL, 0,
                                         fl.saddr, RT_SCOPE_HOST))) {
             endpoint->src4.s_addr = 0;
-            *(__force __be32 *)&endpoint->src_if4 = 0;
+            endpoint->src_if4 = 0;
             fl.saddr = 0;
             if (cache)
                 dst_cache_reset(cache);
@@ -63,7 +63,7 @@ static int send4(struct wg_device *wg, struct sk_buff *skb,
                  PTR_ERR(rt) == -EINVAL) || (!IS_ERR(rt) &&
                  rt->dst.dev->ifindex != endpoint->src_if4)))) {
             endpoint->src4.s_addr = 0;
-            *(__force __be32 *)&endpoint->src_if4 = 0;
+            endpoint->src_if4 = 0;
             fl.saddr = 0;
             if (cache)
                 dst_cache_reset(cache);

From patchwork Mon Feb 22 16:25:45 2021
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 385811
From: "Jason A. Donenfeld"
To: netdev@vger.kernel.org, davem@davemloft.net
Subject: [PATCH net 3/7] wireguard: selftests: test multiple parallel streams
Date: Mon, 22 Feb 2021 17:25:45 +0100
Message-Id: <20210222162549.3252778-4-Jason@zx2c4.com>
In-Reply-To: <20210222162549.3252778-1-Jason@zx2c4.com>
References: <20210222162549.3252778-1-Jason@zx2c4.com>
X-Mailing-List: netdev@vger.kernel.org

In order to test ndo_start_xmit being called in parallel, explicitly add separate tests, which should all run on different cores. This should help tease out bugs associated with queueing up packets from different cores in parallel. Currently it hasn't found those types of bugs, but given future planned work, this is a useful regression test to have.

Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld
---
 tools/testing/selftests/wireguard/netns.sh | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

--
2.30.1

diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
index 74c69b75f6f5..7ed7cd95e58f 100755
--- a/tools/testing/selftests/wireguard/netns.sh
+++ b/tools/testing/selftests/wireguard/netns.sh
@@ -39,7 +39,7 @@ ip0() { pretty 0 "ip $*"; ip -n $netns0 "$@"; }
 ip1() { pretty 1 "ip $*"; ip -n $netns1 "$@"; }
 ip2() { pretty 2 "ip $*"; ip -n $netns2 "$@"; }
 sleep() { read -t "$1" -N 1 || true; }
-waitiperf() { pretty "${1//*-}" "wait for iperf:5201 pid $2"; while [[ $(ss -N "$1" -tlpH 'sport = 5201') != *\"iperf3\",pid=$2,fd=* ]]; do sleep 0.1; done; }
+waitiperf() { pretty "${1//*-}" "wait for iperf:${3:-5201} pid $2"; while [[ $(ss -N "$1" -tlpH "sport = ${3:-5201}") != *\"iperf3\",pid=$2,fd=* ]]; do sleep 0.1; done; }
 waitncatudp() { pretty "${1//*-}" "wait for udp:1111 pid $2"; while [[ $(ss -N "$1" -ulpH 'sport = 1111') != *\"ncat\",pid=$2,fd=* ]]; do sleep 0.1; done; }
 waitiface() { pretty "${1//*-}" "wait for $2 to come up"; ip netns exec "$1" bash -c "while [[ \$(< \"/sys/class/net/$2/operstate\") != up ]]; do read -t .1 -N 0 || true; done;"; }

@@ -141,6 +141,19 @@ tests() {
     n2 iperf3 -s -1 -B fd00::2 &
     waitiperf $netns2 $!
     n1 iperf3 -Z -t 3 -b 0 -u -c fd00::2
+
+    # TCP over IPv4, in parallel
+    for max in 4 5 50; do
+        local pids=( )
+        for ((i=0; i < max; ++i)) do
+            n2 iperf3 -p $(( 5200 + i )) -s -1 -B 192.168.241.2 &
+            pids+=( $! ); waitiperf $netns2 $! $(( 5200 + i ))
+        done
+        for ((i=0; i < max; ++i)) do
+            n1 iperf3 -Z -t 3 -p $(( 5200 + i )) -c 192.168.241.2 &
+        done
+        wait "${pids[@]}"
+    done
 }

 [[ $(ip1 link show dev wg0) =~ mtu\ ([0-9]+) ]] && orig_mtu="${BASH_REMATCH[1]}"

From patchwork Mon Feb 22 16:25:46 2021
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 385812
From: "Jason A. Donenfeld"
Donenfeld" To: netdev@vger.kernel.org, davem@davemloft.net Subject: [PATCH net 4/7] wireguard: peer: put frequently used members above cache lines Date: Mon, 22 Feb 2021 17:25:46 +0100 Message-Id: <20210222162549.3252778-5-Jason@zx2c4.com> In-Reply-To: <20210222162549.3252778-1-Jason@zx2c4.com> References: <20210222162549.3252778-1-Jason@zx2c4.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org The is_dead boolean is checked for every single packet, while the internal_id member is used basically only for pr_debug messages. So it makes sense to hoist up is_dead into some space formerly unused by a struct hole, while demoting internal_api to below the lowest struct cache line. Fixes: e7096c131e51 ("net: WireGuard secure network tunnel") Signed-off-by: Jason A. Donenfeld --- drivers/net/wireguard/peer.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) -- 2.30.1 diff --git a/drivers/net/wireguard/peer.h b/drivers/net/wireguard/peer.h index 23af40922997..aaff8de6e34b 100644 --- a/drivers/net/wireguard/peer.h +++ b/drivers/net/wireguard/peer.h @@ -39,6 +39,7 @@ struct wg_peer { struct crypt_queue tx_queue, rx_queue; struct sk_buff_head staged_packet_queue; int serial_work_cpu; + bool is_dead; struct noise_keypairs keypairs; struct endpoint endpoint; struct dst_cache endpoint_cache; @@ -61,9 +62,8 @@ struct wg_peer { struct rcu_head rcu; struct list_head peer_list; struct list_head allowedips_list; - u64 internal_id; struct napi_struct napi; - bool is_dead; + u64 internal_id; }; struct wg_peer *wg_peer_create(struct wg_device *wg, From patchwork Mon Feb 22 16:25:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jason A. Donenfeld" X-Patchwork-Id: 385813 Delivered-To: patch@linaro.org Received: by 2002:a02:290e:0:0:0:0:0 with SMTP id p14csp1419695jap; Mon, 22 Feb 2021 08:31:20 -0800 (PST) X-Google-Smtp-Source: ABdhPJxw/enguDJxT8H28Hq5IB6gJeGxyNDIUKTNgRDTDcIjzgRiF4OvKjxDZ/xJD8/7W9vbkvZU X-Received: by 2002:aa7:cc98:: with SMTP id p24mr24211239edt.126.1614011480757; Mon, 22 Feb 2021 08:31:20 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1614011480; cv=none; d=google.com; s=arc-20160816; b=dFuyrJabc85saaaLhY++7tVR2iIxEwMcKuf8uzkAiULTrC6G1R1R3WHz6b/G8ytTQB xXhetxvbLOpgrjj50uAk892K/OzEoA+rztefwPT40xcdmMrK4GlMATERpMEib0vvSGjq Jb4O+/mgx+8xa0iEfp+JkLEaY27jMZ8dz9U5sjf7jbzKYhYbLGlrL5rwGPblLKpDEL40 V/YhzE4yaTKwzzM63JCeKa51OKHvNt89rC4N/owTdqFCgo1LXgLUu67TfFBU/yfH7255 pOV+bI6DMQQ5zdACJVSkV5Z0/ejmDEh3OvGlh96juWQuCdG/lEp2GhyYKDWTLm6JADg2 gy7Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=iys7Zy8lEPOpQwY9fqMEdYkFEKZRM2Rey0ir2dZq0gU=; b=NNSpxaJG01P39ZnasL7DK00ZTC8sD7s7yrskvhyQQCVxSHgt4sv0nBfNyORiINPXIA XBtOoSzPKqqdWT4YOp8SeyJIx0cwY/RxmTh4fL/9rFFiOpCHnjxIZ+anGfCna/OMlLQ3 IivN8vrjtz29pEM8jpgPuAHuQ4G6IBMbAzYowhzQ4YEcFXoWkEwaH5+JpFa1gqZl4s5P hbJOlI/PUm0LY2Haz0k4KaHrY+deWoDsOYFCrWjmi99EvnaAUoxc2GXx7VkYp+xai3Y3 QKdcZ7uzOKEMpdrMTxzZbDzWyCbkCvre0GYX8I87tz0o1qGy3//eDlGyV33elCrRdGJ5 LSxw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@zx2c4.com header.s=20210105 header.b=S+HX8SB7; spf=pass (google.com: domain of netdev-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=netdev-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=zx2c4.com Return-Path: Received: 
From patchwork Mon Feb 22 16:25:47 2021
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 385813
From: "Jason A. Donenfeld"
To: netdev@vger.kernel.org, davem@davemloft.net
Subject: [PATCH net 5/7] wireguard: device: do not generate ICMP for non-IP packets
Date: Mon, 22 Feb 2021 17:25:47 +0100
Message-Id: <20210222162549.3252778-6-Jason@zx2c4.com>
In-Reply-To: <20210222162549.3252778-1-Jason@zx2c4.com>
References: <20210222162549.3252778-1-Jason@zx2c4.com>
X-Mailing-List: netdev@vger.kernel.org

If skb->protocol doesn't match the actual skb->data header, it's probably not a good idea to pass it off to icmp{,v6}_ndo_send, which is expecting to reply to a valid IP packet. So this commit has that early mismatch case jump to a later error label.

Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld
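A self-contained sketch of the invariant behind the fix: only a packet whose declared skb->protocol agrees with the version nibble of the actual header should be answered with an ICMP "unreachable". The constants mirror ETH_P_IP/ETH_P_IPV6; the helper name and main() are invented for illustration:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ETH_P_IP   0x0800
#define ETH_P_IPV6 0x86DD

/* True only when the declared protocol and the actual layer-3 header agree,
 * i.e. the cases in which replying with ICMP makes sense. */
static bool icmp_reply_allowed(uint16_t declared_proto, const uint8_t *l3, size_t len)
{
    if (len < 1)
        return false;

    uint8_t version = l3[0] >> 4;   /* version nibble of an IPv4/IPv6 header */

    return (declared_proto == ETH_P_IP   && version == 4) ||
           (declared_proto == ETH_P_IPV6 && version == 6);
}

int main(void)
{
    uint8_t v6_header[40] = { 0x60 };   /* IPv6: version nibble = 6 */

    /* Declared IPv4 but carrying an IPv6 header: no ICMP reply (prints 0). */
    printf("%d\n", icmp_reply_allowed(ETH_P_IP, v6_header, sizeof(v6_header)));
    /* Declared and actual protocol agree: reply is fine (prints 1). */
    printf("%d\n", icmp_reply_allowed(ETH_P_IPV6, v6_header, sizeof(v6_header)));
    return 0;
}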
---
 drivers/net/wireguard/device.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

--
2.30.1

diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
index cd51a2afa28e..8502e1b083ff 100644
--- a/drivers/net/wireguard/device.c
+++ b/drivers/net/wireguard/device.c
@@ -138,7 +138,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
         else if (skb->protocol == htons(ETH_P_IPV6))
             net_dbg_ratelimited("%s: No peer has allowed IPs matching %pI6\n",
                                 dev->name, &ipv6_hdr(skb)->daddr);
-        goto err;
+        goto err_icmp;
     }

     family = READ_ONCE(peer->endpoint.addr.sa_family);
@@ -201,12 +201,13 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)

 err_peer:
     wg_peer_put(peer);
-err:
-    ++dev->stats.tx_errors;
+err_icmp:
     if (skb->protocol == htons(ETH_P_IP))
         icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
     else if (skb->protocol == htons(ETH_P_IPV6))
         icmpv6_ndo_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0);
+err:
+    ++dev->stats.tx_errors;
     kfree_skb(skb);
     return ret;
 }

From patchwork Mon Feb 22 16:25:48 2021
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 385814
From: "Jason A. Donenfeld"
To: netdev@vger.kernel.org, davem@davemloft.net
Subject: [PATCH net 6/7] wireguard: queueing: get rid of per-peer ring buffers
Date: Mon, 22 Feb 2021 17:25:48 +0100
Message-Id: <20210222162549.3252778-7-Jason@zx2c4.com>
In-Reply-To: <20210222162549.3252778-1-Jason@zx2c4.com>
References: <20210222162549.3252778-1-Jason@zx2c4.com>
X-Mailing-List: netdev@vger.kernel.org

Having two ring buffers per-peer means that every peer results in two massive ring allocations. On an 8-core x86_64 machine, this commit reduces the per-peer allocation from 18,688 bytes to 1,856 bytes, which is a 90% reduction. Ninety percent! With some single-machine deployments approaching 500,000 peers, we're talking about a reduction from 7 gigs of memory down to 700 megs of memory.

In order to get rid of these per-peer allocations, this commit switches to using a list-based queueing approach. Currently GSO fragments are chained together using the skb->next pointer (the skb_list_* singly linked list approach), so we form the per-peer queue around the unused skb->prev pointer (which sort of makes sense because the links are pointing backwards). Use of skb_queue_* is not possible here, because that is based on doubly linked lists and spinlocks. Multiple cores can write into the queue at any given time, because its writes occur in the start_xmit path or in the udp_recv path. But reads happen in a single workqueue item per-peer, amounting to a multi-producer, single-consumer paradigm.

The MPSC queue is implemented locklessly and never blocks. However, it is not linearizable (though it is serializable), with a very tight and unlikely race on writes, which, when hit (some tiny fraction of the 0.15% of partial adds on a fully loaded 16-core x86_64 system), causes the queue reader to terminate early. However, because every packet sent queues up the same workqueue item after it is fully added, the worker resumes again, and stopping early isn't actually a problem, since at that point the packet wouldn't have yet been added to the encryption queue. These properties allow us to avoid disabling interrupts or spinning. The design is based on Dmitry Vyukov's algorithm [1].

Performance-wise, ordinarily list-based queues aren't preferable to ring buffers, because of cache misses when following pointers around. However, we *already* have to follow the adjacent pointers when working through fragments, so there shouldn't actually be any change there. A potential downside is that dequeueing is a bit more complicated, but the ptr_ring structure used prior had a spinlock when dequeueing, so all in all the difference appears to be a wash.

Actually, from profiling, the biggest performance hit, by far, of this commit winds up being atomic_add_unless(count, 1, max) and atomic_dec(count), which account for the majority of CPU time, according to perf. In that sense, the previous ring buffer was superior in that it could check whether it was full by comparing head==tail, which the list-based approach cannot do. But all in all, this enables us to get massive memory savings, allowing WireGuard to scale for real-world deployments, without taking much of a performance hit.

[1] http://www.1024cores.net/home/lock-free-algorithms/queues/intrusive-mpsc-node-based-queue

Reviewed-by: Dmitry Vyukov
Reviewed-by: Toke Høiland-Jørgensen
Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld
---
 drivers/net/wireguard/device.c   | 12 ++---
 drivers/net/wireguard/device.h   | 15 +++---
 drivers/net/wireguard/peer.c     | 28 ++++------
 drivers/net/wireguard/peer.h     | 4 +-
 drivers/net/wireguard/queueing.c | 86 +++++++++++++++++++++++++-------
 drivers/net/wireguard/queueing.h | 45 ++++++++++++-----
 drivers/net/wireguard/receive.c  | 16 +++---
 drivers/net/wireguard/send.c     | 31 ++++--------
 8 files changed, 144 insertions(+), 93 deletions(-)

--
2.30.1

diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
index 8502e1b083ff..551ddaaaf540 100644
--- a/drivers/net/wireguard/device.c
+++ b/drivers/net/wireguard/device.c
@@ -235,8 +235,8 @@ static void wg_destruct(struct net_device *dev)
     destroy_workqueue(wg->handshake_receive_wq);
     destroy_workqueue(wg->handshake_send_wq);
     destroy_workqueue(wg->packet_crypt_wq);
-    wg_packet_queue_free(&wg->decrypt_queue, true);
-    wg_packet_queue_free(&wg->encrypt_queue, true);
+    wg_packet_queue_free(&wg->decrypt_queue);
+    wg_packet_queue_free(&wg->encrypt_queue);
     rcu_barrier(); /* Wait for all the peers to be actually freed. */
     wg_ratelimiter_uninit();
     memzero_explicit(&wg->static_identity, sizeof(wg->static_identity));
@@ -338,12 +338,12 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
         goto err_destroy_handshake_send;

     ret = wg_packet_queue_init(&wg->encrypt_queue, wg_packet_encrypt_worker,
-                               true, MAX_QUEUED_PACKETS);
+                               MAX_QUEUED_PACKETS);
     if (ret < 0)
         goto err_destroy_packet_crypt;

     ret = wg_packet_queue_init(&wg->decrypt_queue, wg_packet_decrypt_worker,
-                               true, MAX_QUEUED_PACKETS);
+                               MAX_QUEUED_PACKETS);
     if (ret < 0)
         goto err_free_encrypt_queue;

@@ -368,9 +368,9 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
 err_uninit_ratelimiter:
     wg_ratelimiter_uninit();
 err_free_decrypt_queue:
-    wg_packet_queue_free(&wg->decrypt_queue, true);
+    wg_packet_queue_free(&wg->decrypt_queue);
 err_free_encrypt_queue:
-    wg_packet_queue_free(&wg->encrypt_queue, true);
+    wg_packet_queue_free(&wg->encrypt_queue);
 err_destroy_packet_crypt:
     destroy_workqueue(wg->packet_crypt_wq);
 err_destroy_handshake_send:
diff --git a/drivers/net/wireguard/device.h b/drivers/net/wireguard/device.h
index 4d0144e16947..854bc3d97150 100644
--- a/drivers/net/wireguard/device.h
+++ b/drivers/net/wireguard/device.h
@@ -27,13 +27,14 @@ struct multicore_worker {

 struct crypt_queue {
     struct ptr_ring ring;
-    union {
-        struct {
-            struct multicore_worker __percpu *worker;
-            int last_cpu;
-        };
-        struct work_struct work;
-    };
+    struct multicore_worker __percpu *worker;
+    int last_cpu;
+};
+
+struct prev_queue {
+    struct sk_buff *head, *tail, *peeked;
+    struct { struct sk_buff *next, *prev; } empty; // Match first 2 members of struct sk_buff.
+    atomic_t count;
 };

 struct wg_device {
diff --git a/drivers/net/wireguard/peer.c b/drivers/net/wireguard/peer.c
index b3b6370e6b95..cd5cb0292cb6 100644
--- a/drivers/net/wireguard/peer.c
+++ b/drivers/net/wireguard/peer.c
@@ -32,27 +32,22 @@ struct wg_peer *wg_peer_create(struct wg_device *wg,
     peer = kzalloc(sizeof(*peer), GFP_KERNEL);
     if (unlikely(!peer))
         return ERR_PTR(ret);
-    peer->device = wg;
+    if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))
+        goto err;

+    peer->device = wg;
     wg_noise_handshake_init(&peer->handshake, &wg->static_identity,
                             public_key, preshared_key, peer);
-    if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))
-        goto err_1;
-    if (wg_packet_queue_init(&peer->tx_queue, wg_packet_tx_worker, false,
-                             MAX_QUEUED_PACKETS))
-        goto err_2;
-    if (wg_packet_queue_init(&peer->rx_queue, NULL, false,
-                             MAX_QUEUED_PACKETS))
-        goto err_3;
-
     peer->internal_id = atomic64_inc_return(&peer_counter);
     peer->serial_work_cpu = nr_cpumask_bits;
     wg_cookie_init(&peer->latest_cookie);
     wg_timers_init(peer);
     wg_cookie_checker_precompute_peer_keys(peer);
     spin_lock_init(&peer->keypairs.keypair_update_lock);
-    INIT_WORK(&peer->transmit_handshake_work,
-              wg_packet_handshake_send_worker);
+    INIT_WORK(&peer->transmit_handshake_work, wg_packet_handshake_send_worker);
+    INIT_WORK(&peer->transmit_packet_work, wg_packet_tx_worker);
+    wg_prev_queue_init(&peer->tx_queue);
+    wg_prev_queue_init(&peer->rx_queue);
     rwlock_init(&peer->endpoint_lock);
     kref_init(&peer->refcount);
     skb_queue_head_init(&peer->staged_packet_queue);
@@ -68,11 +63,7 @@ struct wg_peer *wg_peer_create(struct wg_device *wg,
     pr_debug("%s: Peer %llu created\n", wg->dev->name, peer->internal_id);
     return peer;

-err_3:
-    wg_packet_queue_free(&peer->tx_queue, false);
-err_2:
-    dst_cache_destroy(&peer->endpoint_cache);
-err_1:
+err:
     kfree(peer);
     return ERR_PTR(ret);
 }
@@ -197,8 +188,7 @@ static void rcu_release(struct rcu_head *rcu)
     struct wg_peer *peer = container_of(rcu, struct wg_peer, rcu);

     dst_cache_destroy(&peer->endpoint_cache);
-    wg_packet_queue_free(&peer->rx_queue, false);
-    wg_packet_queue_free(&peer->tx_queue, false);
+    WARN_ON(wg_prev_queue_peek(&peer->tx_queue) || wg_prev_queue_peek(&peer->rx_queue));

     /* The final zeroing takes care of clearing any remaining handshake key
      * material and other potentially sensitive information.
diff --git a/drivers/net/wireguard/peer.h b/drivers/net/wireguard/peer.h
index aaff8de6e34b..8d53b687a1d1 100644
--- a/drivers/net/wireguard/peer.h
+++ b/drivers/net/wireguard/peer.h
@@ -36,7 +36,7 @@ struct endpoint {

 struct wg_peer {
     struct wg_device *device;
-    struct crypt_queue tx_queue, rx_queue;
+    struct prev_queue tx_queue, rx_queue;
     struct sk_buff_head staged_packet_queue;
     int serial_work_cpu;
     bool is_dead;
@@ -46,7 +46,7 @@ struct wg_peer {
     rwlock_t endpoint_lock;
     struct noise_handshake handshake;
     atomic64_t last_sent_handshake;
-    struct work_struct transmit_handshake_work, clear_peer_work;
+    struct work_struct transmit_handshake_work, clear_peer_work, transmit_packet_work;
     struct cookie latest_cookie;
     struct hlist_node pubkey_hash;
     u64 rx_bytes, tx_bytes;
diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c
index 71b8e80b58e1..48e7b982a307 100644
--- a/drivers/net/wireguard/queueing.c
+++ b/drivers/net/wireguard/queueing.c
@@ -9,8 +9,7 @@ struct multicore_worker __percpu *
 wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
 {
     int cpu;
-    struct multicore_worker __percpu *worker =
-        alloc_percpu(struct multicore_worker);
+    struct multicore_worker __percpu *worker = alloc_percpu(struct multicore_worker);

     if (!worker)
         return NULL;
@@ -23,7 +22,7 @@ wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
 }

 int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
-                         bool multicore, unsigned int len)
+                         unsigned int len)
 {
     int ret;

@@ -31,25 +30,78 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
     ret = ptr_ring_init(&queue->ring, len, GFP_KERNEL);
     if (ret)
         return ret;
-    if (function) {
-        if (multicore) {
-            queue->worker = wg_packet_percpu_multicore_worker_alloc(
-                function, queue);
-            if (!queue->worker) {
-                ptr_ring_cleanup(&queue->ring, NULL);
-                return -ENOMEM;
-            }
-        } else {
-            INIT_WORK(&queue->work, function);
-        }
+    queue->worker = wg_packet_percpu_multicore_worker_alloc(function, queue);
+    if (!queue->worker) {
+        ptr_ring_cleanup(&queue->ring, NULL);
+        return -ENOMEM;
     }
     return 0;
 }

-void wg_packet_queue_free(struct crypt_queue *queue, bool multicore)
+void wg_packet_queue_free(struct crypt_queue *queue)
 {
-    if (multicore)
-        free_percpu(queue->worker);
+    free_percpu(queue->worker);
     WARN_ON(!__ptr_ring_empty(&queue->ring));
     ptr_ring_cleanup(&queue->ring, NULL);
 }
+
+#define NEXT(skb) ((skb)->prev)
+#define STUB(queue) ((struct sk_buff *)&queue->empty)
+
+void wg_prev_queue_init(struct prev_queue *queue)
+{
+    NEXT(STUB(queue)) = NULL;
+    queue->head = queue->tail = STUB(queue);
+    queue->peeked = NULL;
+    atomic_set(&queue->count, 0);
+    BUILD_BUG_ON(
+        offsetof(struct sk_buff, next) != offsetof(struct prev_queue, empty.next) -
+                                          offsetof(struct prev_queue, empty) ||
+        offsetof(struct sk_buff, prev) != offsetof(struct prev_queue, empty.prev) -
+                                          offsetof(struct prev_queue, empty));
+}
+
+static void __wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb)
+{
+    WRITE_ONCE(NEXT(skb), NULL);
+    WRITE_ONCE(NEXT(xchg_release(&queue->head, skb)), skb);
+}
+
+bool wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb)
+{
+    if (!atomic_add_unless(&queue->count, 1, MAX_QUEUED_PACKETS))
+        return false;
+    __wg_prev_queue_enqueue(queue, skb);
+    return true;
+}
+
+struct sk_buff *wg_prev_queue_dequeue(struct prev_queue *queue)
+{
+    struct sk_buff *tail = queue->tail, *next = smp_load_acquire(&NEXT(tail));
+
+    if (tail == STUB(queue)) {
+        if (!next)
+            return NULL;
+        queue->tail = next;
+        tail = next;
+        next = smp_load_acquire(&NEXT(next));
+    }
+    if (next) {
+        queue->tail = next;
+        atomic_dec(&queue->count);
+        return tail;
+    }
+    if (tail != READ_ONCE(queue->head))
+        return NULL;
+    __wg_prev_queue_enqueue(queue, STUB(queue));
+    next = smp_load_acquire(&NEXT(tail));
+    if (next) {
+        queue->tail = next;
+        atomic_dec(&queue->count);
+        return tail;
+    }
+    return NULL;
+}
+
+#undef NEXT
+#undef STUB
diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h
index dfb674e03076..4ef2944a68bc 100644
--- a/drivers/net/wireguard/queueing.h
+++ b/drivers/net/wireguard/queueing.h
@@ -17,12 +17,13 @@ struct wg_device;
 struct wg_peer;
 struct multicore_worker;
 struct crypt_queue;
+struct prev_queue;
 struct sk_buff;

 /* queueing.c APIs: */
 int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
-                         bool multicore, unsigned int len);
-void wg_packet_queue_free(struct crypt_queue *queue, bool multicore);
+                         unsigned int len);
+void wg_packet_queue_free(struct crypt_queue *queue);
 struct multicore_worker __percpu *
 wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr);

@@ -135,8 +136,31 @@ static inline int wg_cpumask_next_online(int *next)
     return cpu;
 }

+void wg_prev_queue_init(struct prev_queue *queue);
+
+/* Multi producer */
+bool wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb);
+
+/* Single consumer */
+struct sk_buff *wg_prev_queue_dequeue(struct prev_queue *queue);
+
+/* Single consumer */
+static inline struct sk_buff *wg_prev_queue_peek(struct prev_queue *queue)
+{
+    if (queue->peeked)
+        return queue->peeked;
+    queue->peeked = wg_prev_queue_dequeue(queue);
+    return queue->peeked;
+}
+
+/* Single consumer */
+static inline void wg_prev_queue_drop_peeked(struct prev_queue *queue)
+{
+    queue->peeked = NULL;
+}
+
 static inline int wg_queue_enqueue_per_device_and_peer(
-    struct crypt_queue *device_queue, struct crypt_queue *peer_queue,
+    struct crypt_queue *device_queue, struct prev_queue *peer_queue,
     struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu)
 {
     int cpu;
@@ -145,8 +169,9 @@ static inline int wg_queue_enqueue_per_device_and_peer(
     /* We first queue this up for the peer ingestion, but the consumer
      * will wait for the state to change to CRYPTED or DEAD before.
      */
-    if (unlikely(ptr_ring_produce_bh(&peer_queue->ring, skb)))
+    if (unlikely(!wg_prev_queue_enqueue(peer_queue, skb)))
         return -ENOSPC;
+
     /* Then we queue it up in the device queue, which consumes the
      * packet as soon as it can.
      */
@@ -157,9 +182,7 @@ static inline int wg_queue_enqueue_per_device_and_peer(
     return 0;
 }

-static inline void wg_queue_enqueue_per_peer(struct crypt_queue *queue,
-                                             struct sk_buff *skb,
-                                             enum packet_state state)
+static inline void wg_queue_enqueue_per_peer_tx(struct sk_buff *skb, enum packet_state state)
 {
     /* We take a reference, because as soon as we call atomic_set, the
      * peer can be freed from below us.
@@ -167,14 +190,12 @@ static inline void wg_queue_enqueue_per_peer(struct crypt_queue *queue,
     struct wg_peer *peer = wg_peer_get(PACKET_PEER(skb));

     atomic_set_release(&PACKET_CB(skb)->state, state);
-    queue_work_on(wg_cpumask_choose_online(&peer->serial_work_cpu,
-                                           peer->internal_id),
-                  peer->device->packet_crypt_wq, &queue->work);
+    queue_work_on(wg_cpumask_choose_online(&peer->serial_work_cpu, peer->internal_id),
+                  peer->device->packet_crypt_wq, &peer->transmit_packet_work);
     wg_peer_put(peer);
 }

-static inline void wg_queue_enqueue_per_peer_napi(struct sk_buff *skb,
-                                                  enum packet_state state)
+static inline void wg_queue_enqueue_per_peer_rx(struct sk_buff *skb, enum packet_state state)
 {
     /* We take a reference, because as soon as we call atomic_set, the
      * peer can be freed from below us.
diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
index 2c9551ea6dc7..7dc84bcca261 100644
--- a/drivers/net/wireguard/receive.c
+++ b/drivers/net/wireguard/receive.c
@@ -444,7 +444,6 @@ static void wg_packet_consume_data_done(struct wg_peer *peer,
 int wg_packet_rx_poll(struct napi_struct *napi, int budget)
 {
     struct wg_peer *peer = container_of(napi, struct wg_peer, napi);
-    struct crypt_queue *queue = &peer->rx_queue;
     struct noise_keypair *keypair;
     struct endpoint endpoint;
     enum packet_state state;
@@ -455,11 +454,10 @@ int wg_packet_rx_poll(struct napi_struct *napi, int budget)
     if (unlikely(budget <= 0))
         return 0;

-    while ((skb = __ptr_ring_peek(&queue->ring)) != NULL &&
+    while ((skb = wg_prev_queue_peek(&peer->rx_queue)) != NULL &&
            (state = atomic_read_acquire(&PACKET_CB(skb)->state)) !=
                PACKET_STATE_UNCRYPTED) {
-        __ptr_ring_discard_one(&queue->ring);
-        peer = PACKET_PEER(skb);
+        wg_prev_queue_drop_peeked(&peer->rx_queue);
         keypair = PACKET_CB(skb)->keypair;
         free = true;

@@ -508,7 +506,7 @@ void wg_packet_decrypt_worker(struct work_struct *work)
         enum packet_state state =
             likely(decrypt_packet(skb, PACKET_CB(skb)->keypair)) ?
                 PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
-        wg_queue_enqueue_per_peer_napi(skb, state);
+        wg_queue_enqueue_per_peer_rx(skb, state);
         if (need_resched())
             cond_resched();
     }
@@ -531,12 +529,10 @@ static void wg_packet_consume_data(struct wg_device *wg, struct sk_buff *skb)
     if (unlikely(READ_ONCE(peer->is_dead)))
         goto err;

-    ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue,
-                                               &peer->rx_queue, skb,
-                                               wg->packet_crypt_wq,
-                                               &wg->decrypt_queue.last_cpu);
+    ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue, &peer->rx_queue, skb,
+                                               wg->packet_crypt_wq, &wg->decrypt_queue.last_cpu);
     if (unlikely(ret == -EPIPE))
-        wg_queue_enqueue_per_peer_napi(skb, PACKET_STATE_DEAD);
+        wg_queue_enqueue_per_peer_rx(skb, PACKET_STATE_DEAD);
     if (likely(!ret || ret == -EPIPE)) {
         rcu_read_unlock_bh();
         return;
diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
index f74b9341ab0f..5368f7c35b4b 100644
--- a/drivers/net/wireguard/send.c
+++ b/drivers/net/wireguard/send.c
@@ -239,8 +239,7 @@ void wg_packet_send_keepalive(struct wg_peer *peer)
     wg_packet_send_staged_packets(peer);
 }

-static void wg_packet_create_data_done(struct sk_buff *first,
-                                       struct wg_peer *peer)
+static void wg_packet_create_data_done(struct wg_peer *peer, struct sk_buff *first)
 {
     struct sk_buff *skb, *next;
     bool is_keepalive, data_sent = false;
@@ -262,22 +261,19 @@ static void wg_packet_create_data_done(struct sk_buff *first,

 void wg_packet_tx_worker(struct work_struct *work)
 {
-    struct crypt_queue *queue = container_of(work, struct crypt_queue,
-                                             work);
+    struct wg_peer *peer = container_of(work, struct wg_peer, transmit_packet_work);
     struct noise_keypair *keypair;
     enum packet_state state;
     struct sk_buff *first;
-    struct wg_peer *peer;

-    while ((first = __ptr_ring_peek(&queue->ring)) != NULL &&
+    while ((first = wg_prev_queue_peek(&peer->tx_queue)) != NULL &&
            (state = atomic_read_acquire(&PACKET_CB(first)->state)) !=
                PACKET_STATE_UNCRYPTED) {
-        __ptr_ring_discard_one(&queue->ring);
-        peer = PACKET_PEER(first);
+        wg_prev_queue_drop_peeked(&peer->tx_queue);
         keypair = PACKET_CB(first)->keypair;

         if (likely(state == PACKET_STATE_CRYPTED))
-            wg_packet_create_data_done(first, peer);
+            wg_packet_create_data_done(peer, first);
         else
             kfree_skb_list(first);

@@ -306,16 +302,14 @@ void wg_packet_encrypt_worker(struct work_struct *work)
                 break;
             }
         }
-        wg_queue_enqueue_per_peer(&PACKET_PEER(first)->tx_queue, first,
-                                  state);
+        wg_queue_enqueue_per_peer_tx(first, state);
         if (need_resched())
             cond_resched();
     }
 }

-static void wg_packet_create_data(struct sk_buff *first)
+static void wg_packet_create_data(struct wg_peer *peer, struct sk_buff *first)
 {
-    struct wg_peer *peer = PACKET_PEER(first);
     struct wg_device *wg = peer->device;
     int ret = -EINVAL;

@@ -323,13 +317,10 @@ static void wg_packet_create_data(struct sk_buff *first)
     if (unlikely(READ_ONCE(peer->is_dead)))
         goto err;

-    ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue,
-                                               &peer->tx_queue, first,
-                                               wg->packet_crypt_wq,
-                                               &wg->encrypt_queue.last_cpu);
+    ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue, &peer->tx_queue, first,
+                                               wg->packet_crypt_wq, &wg->encrypt_queue.last_cpu);
     if (unlikely(ret == -EPIPE))
-        wg_queue_enqueue_per_peer(&peer->tx_queue, first,
-                                  PACKET_STATE_DEAD);
+        wg_queue_enqueue_per_peer_tx(first, PACKET_STATE_DEAD);
 err:
     rcu_read_unlock_bh();
     if (likely(!ret || ret == -EPIPE))
@@ -393,7 +384,7 @@ void wg_packet_send_staged_packets(struct wg_peer *peer)
     packets.prev->next = NULL;
     wg_peer_get(keypair->entry.peer);
     PACKET_CB(packets.next)->keypair = keypair;
-    wg_packet_create_data(packets.next);
+    wg_packet_create_data(peer, packets.next);
     return;

 out_invalid:
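For readers unfamiliar with Vyukov's algorithm [1], here is a minimal userspace sketch (C11 atomics) of the same intrusive MPSC queue shape. It is illustrative only: the kernel version above hangs the links off skb->prev, uses xchg_release()/smp_load_acquire(), and bounds the queue with an atomic count, none of which is reproduced here; the mpsc_* names are invented.

#include <stdatomic.h>
#include <stddef.h>

struct mpsc_node {
    _Atomic(struct mpsc_node *) next;
};

struct mpsc_queue {
    _Atomic(struct mpsc_node *) head;   /* producers push here */
    struct mpsc_node *tail;             /* single consumer pops here */
    struct mpsc_node stub;              /* plays the role of prev_queue.empty */
};

static void mpsc_init(struct mpsc_queue *q)
{
    atomic_store(&q->stub.next, NULL);
    atomic_store(&q->head, &q->stub);
    q->tail = &q->stub;
}

/* Multi producer: lock-free, never blocks. */
static void mpsc_enqueue(struct mpsc_queue *q, struct mpsc_node *n)
{
    atomic_store_explicit(&n->next, NULL, memory_order_relaxed);
    struct mpsc_node *prev = atomic_exchange_explicit(&q->head, n, memory_order_acq_rel);
    /* The tight race mentioned in the commit message lives between the
     * exchange above and the store below: the consumer can briefly see a
     * broken chain and stop early; re-queueing the worker afterwards
     * papers over that. */
    atomic_store_explicit(&prev->next, n, memory_order_release);
}

/* Single consumer: returns NULL if empty or if it raced with a producer. */
static struct mpsc_node *mpsc_dequeue(struct mpsc_queue *q)
{
    struct mpsc_node *tail = q->tail;
    struct mpsc_node *next = atomic_load_explicit(&tail->next, memory_order_acquire);

    if (tail == &q->stub) {             /* skip over the stub node */
        if (!next)
            return NULL;
        q->tail = next;
        tail = next;
        next = atomic_load_explicit(&tail->next, memory_order_acquire);
    }
    if (next) {
        q->tail = next;
        return tail;
    }
    if (tail != atomic_load_explicit(&q->head, memory_order_acquire))
        return NULL;                    /* producer mid-enqueue: try again later */
    mpsc_enqueue(q, &q->stub);          /* re-insert stub so tail never dangles */
    next = atomic_load_explicit(&tail->next, memory_order_acquire);
    if (next) {
        q->tail = next;
        return tail;
    }
    return NULL;
}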
From patchwork Mon Feb 22 16:25:49 2021
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 385815
From: "Jason A. Donenfeld"
To: netdev@vger.kernel.org, davem@davemloft.net
Subject: [PATCH net 7/7] wireguard: kconfig: use arm chacha even with no neon
Date: Mon, 22 Feb 2021 17:25:49 +0100
Message-Id: <20210222162549.3252778-8-Jason@zx2c4.com>
In-Reply-To: <20210222162549.3252778-1-Jason@zx2c4.com>
References: <20210222162549.3252778-1-Jason@zx2c4.com>
X-Mailing-List: netdev@vger.kernel.org

The condition here was incorrect: a non-neon fallback implementation is available on arm32 when NEON is not supported.

Reported-by: Ilya Lipnitskiy
Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld
---
 drivers/net/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--
2.30.1

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 260f9f46668b..63339d29be90 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -87,7 +87,7 @@ config WIREGUARD
     select CRYPTO_CURVE25519_X86 if X86 && 64BIT
     select ARM_CRYPTO if ARM
     select ARM64_CRYPTO if ARM64
-    select CRYPTO_CHACHA20_NEON if (ARM || ARM64) && KERNEL_MODE_NEON
+    select CRYPTO_CHACHA20_NEON if ARM || (ARM64 && KERNEL_MODE_NEON)
     select CRYPTO_POLY1305_NEON if ARM64 && KERNEL_MODE_NEON
     select CRYPTO_POLY1305_ARM if ARM
     select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
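The select condition, re-expressed as plain boolean logic for illustration (the function names are invented; the point, per the commit message, is that arm32 no longer needs KERNEL_MODE_NEON to pick up the ChaCha20 implementation, since a non-NEON fallback exists there):

#include <stdbool.h>
#include <stdio.h>

static bool old_cond(bool arm, bool arm64, bool neon) { return (arm || arm64) && neon; }
static bool new_cond(bool arm, bool arm64, bool neon) { return arm || (arm64 && neon); }

int main(void)
{
    /* arm32 without NEON: the old condition wrongly skipped CRYPTO_CHACHA20_NEON. */
    printf("arm32, no NEON -> old: %d, new: %d\n",
           old_cond(true, false, false), new_cond(true, false, false));
    return 0;
}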