From patchwork Sun Jun 30 17:23:43 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 168184
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
	xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
	netdev@vger.kernel.org, daniel@iogearbox.net,
	jakub.kicinski@netronome.com, john.fastabend@gmail.com,
	Ivan Khoronzhuk
Subject: [PATCH v5 net-next 1/6] xdp: allow same allocator usage
Date: Sun, 30 Jun 2019 20:23:43 +0300
Message-Id: <20190630172348.5692-2-ivan.khoronzhuk@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190630172348.5692-1-ivan.khoronzhuk@linaro.org>
References: <20190630172348.5692-1-ivan.khoronzhuk@linaro.org>

XDP rxqs can be the same for ndevs running under the same rx NAPI softirq,
but there is no way to register the same allocator for both rxqs; in fact
it can be the same rxq, merely referenced by a different ndev. Due to
recent changes, allocator destruction can be deferred until the moment all
packets are recycled by the destination interface, after which the
allocator is freed.
In order to schedule allocator destruction only after all users are
unregistered, add a refcnt to the allocator object and schedule the
destruction only when it reaches 0.

Signed-off-by: Ivan Khoronzhuk
---
 include/net/xdp_priv.h |  1 +
 net/core/xdp.c         | 46 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+)

-- 
2.17.1

diff --git a/include/net/xdp_priv.h b/include/net/xdp_priv.h
index 6a8cba6ea79a..995b21da2f27 100644
--- a/include/net/xdp_priv.h
+++ b/include/net/xdp_priv.h
@@ -18,6 +18,7 @@ struct xdp_mem_allocator {
 	struct rcu_head rcu;
 	struct delayed_work defer_wq;
 	unsigned long defer_warn;
+	unsigned long refcnt;
 };
 
 #endif /* __LINUX_NET_XDP_PRIV_H__ */
diff --git a/net/core/xdp.c b/net/core/xdp.c
index b29d7b513a18..a44621190fdc 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -98,6 +98,18 @@ bool __mem_id_disconnect(int id, bool force)
 		WARN(1, "Request remove non-existing id(%d), driver bug?", id);
 		return true;
 	}
+
+	/* to avoid calling hash lookup twice, decrement refcnt here till it
+	 * reaches zero, then it can be called from workqueue afterwards.
+	 */
+	if (xa->refcnt)
+		xa->refcnt--;
+
+	if (xa->refcnt) {
+		mutex_unlock(&mem_id_lock);
+		return true;
+	}
+
 	xa->disconnect_cnt++;
 
 	/* Detects in-flight packet-pages for page_pool */
@@ -312,6 +324,33 @@ static bool __is_supported_mem_type(enum xdp_mem_type type)
 	return true;
 }
 
+static struct xdp_mem_allocator *xdp_allocator_get(void *allocator)
+{
+	struct xdp_mem_allocator *xae, *xa = NULL;
+	struct rhashtable_iter iter;
+
+	mutex_lock(&mem_id_lock);
+	rhashtable_walk_enter(mem_id_ht, &iter);
+	do {
+		rhashtable_walk_start(&iter);
+
+		while ((xae = rhashtable_walk_next(&iter)) && !IS_ERR(xae)) {
+			if (xae->allocator == allocator) {
+				xae->refcnt++;
+				xa = xae;
+				break;
+			}
+		}
+
+		rhashtable_walk_stop(&iter);
+
+	} while (xae == ERR_PTR(-EAGAIN));
+	rhashtable_walk_exit(&iter);
+	mutex_unlock(&mem_id_lock);
+
+	return xa;
+}
+
 int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 			       enum xdp_mem_type type, void *allocator)
 {
@@ -347,6 +386,12 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 		}
 	}
 
+	xdp_alloc = xdp_allocator_get(allocator);
+	if (xdp_alloc) {
+		xdp_rxq->mem.id = xdp_alloc->mem.id;
+		return 0;
+	}
+
 	xdp_alloc = kzalloc(sizeof(*xdp_alloc), gfp);
 	if (!xdp_alloc)
 		return -ENOMEM;
@@ -360,6 +405,7 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 	xdp_rxq->mem.id = id;
 	xdp_alloc->mem  = xdp_rxq->mem;
 	xdp_alloc->allocator = allocator;
+	xdp_alloc->refcnt = 1;
 
 	/* Insert allocator into ID lookup table */
 	ptr = rhashtable_insert_slow(mem_id_ht, &id, &xdp_alloc->node);