From patchwork Sun Jun 30 17:23:43 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 168184
X-Mailing-List: linux-omap@vger.kernel.org
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v5 net-next 1/6] xdp: allow same allocator usage
Date: Sun, 30 Jun 2019 20:23:43 +0300
Message-Id: <20190630172348.5692-2-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190630172348.5692-1-ivan.khoronzhuk@linaro.org>
References: <20190630172348.5692-1-ivan.khoronzhuk@linaro.org>

XDP rxqs can be the same for ndevs running under the same rx NAPI softirq,
but there is no way to register the same allocator for both rxqs; in effect
it is the same rxq, it just has a different ndev as its reference. Due to
recent changes, allocator destruction can be deferred until the moment all
packets are recycled by the destination interface, after which it is freed.
In order to schedule allocator destruction only after all users are
unregistered, add a refcnt to the allocator object and schedule the destroy
work only when it reaches 0.

Signed-off-by: Ivan Khoronzhuk
---
 include/net/xdp_priv.h |  1 +
 net/core/xdp.c         | 46 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+)

-- 
2.17.1

diff --git a/include/net/xdp_priv.h b/include/net/xdp_priv.h
index 6a8cba6ea79a..995b21da2f27 100644
--- a/include/net/xdp_priv.h
+++ b/include/net/xdp_priv.h
@@ -18,6 +18,7 @@ struct xdp_mem_allocator {
 	struct rcu_head rcu;
 	struct delayed_work defer_wq;
 	unsigned long defer_warn;
+	unsigned long refcnt;
 };
 
 #endif /* __LINUX_NET_XDP_PRIV_H__ */
diff --git a/net/core/xdp.c b/net/core/xdp.c
index b29d7b513a18..a44621190fdc 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -98,6 +98,18 @@ bool __mem_id_disconnect(int id, bool force)
 		WARN(1, "Request remove non-existing id(%d), driver bug?", id);
 		return true;
 	}
+
+	/* to avoid calling hash lookup twice, decrement refcnt here till it
+	 * reaches zero, then it can be called from workqueue afterwards.
+	 */
+	if (xa->refcnt)
+		xa->refcnt--;
+
+	if (xa->refcnt) {
+		mutex_unlock(&mem_id_lock);
+		return true;
+	}
+
 	xa->disconnect_cnt++;
 
 	/* Detects in-flight packet-pages for page_pool */
@@ -312,6 +324,33 @@ static bool __is_supported_mem_type(enum xdp_mem_type type)
 	return true;
 }
 
+static struct xdp_mem_allocator *xdp_allocator_get(void *allocator)
+{
+	struct xdp_mem_allocator *xae, *xa = NULL;
+	struct rhashtable_iter iter;
+
+	mutex_lock(&mem_id_lock);
+	rhashtable_walk_enter(mem_id_ht, &iter);
+	do {
+		rhashtable_walk_start(&iter);
+
+		while ((xae = rhashtable_walk_next(&iter)) && !IS_ERR(xae)) {
+			if (xae->allocator == allocator) {
+				xae->refcnt++;
+				xa = xae;
+				break;
+			}
+		}
+
+		rhashtable_walk_stop(&iter);
+
+	} while (xae == ERR_PTR(-EAGAIN));
+	rhashtable_walk_exit(&iter);
+	mutex_unlock(&mem_id_lock);
+
+	return xa;
+}
+
 int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 			       enum xdp_mem_type type, void *allocator)
 {
@@ -347,6 +386,12 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 		}
 	}
 
+	xdp_alloc = xdp_allocator_get(allocator);
+	if (xdp_alloc) {
+		xdp_rxq->mem.id = xdp_alloc->mem.id;
+		return 0;
+	}
+
 	xdp_alloc = kzalloc(sizeof(*xdp_alloc), gfp);
 	if (!xdp_alloc)
 		return -ENOMEM;
@@ -360,6 +405,7 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 	xdp_rxq->mem.id = id;
 	xdp_alloc->mem = xdp_rxq->mem;
 	xdp_alloc->allocator = allocator;
+	xdp_alloc->refcnt = 1;
 
 	/* Insert allocator into ID lookup table */
 	ptr = rhashtable_insert_slow(mem_id_ht, &id, &xdp_alloc->node);
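The refcounting scheme this patch adds can be illustrated with a small self-contained C model. This is a sketch, not kernel code: the rhashtable walk is replaced by a linear array scan, and the struct and function names are simplified stand-ins for `xdp_mem_allocator`, `xdp_allocator_get()` and the `__mem_id_disconnect()` change.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct xdp_mem_allocator: only the fields
 * relevant to the refcounting scheme added by the patch. */
struct mem_allocator {
	void *allocator;	/* page_pool instance shared by the rxqs */
	unsigned long refcnt;	/* number of registered users */
	int destroyed;		/* set when the deferred destroy would run */
};

/* Mirrors xdp_allocator_get(): if the allocator is already registered,
 * bump its refcnt and reuse the entry instead of allocating a new one. */
static struct mem_allocator *allocator_get(struct mem_allocator *table,
					   size_t n, void *allocator)
{
	for (size_t i = 0; i < n; i++) {
		if (table[i].allocator == allocator) {
			table[i].refcnt++;
			return &table[i];
		}
	}
	return NULL;	/* caller registers a fresh entry with refcnt = 1 */
}

/* Mirrors the __mem_id_disconnect() change: only the last user actually
 * schedules the destroy. Returns 1 if destroy was scheduled. */
static int allocator_put(struct mem_allocator *xa)
{
	if (xa->refcnt)
		xa->refcnt--;

	if (xa->refcnt)
		return 0;	/* other users remain, keep the allocator */

	xa->destroyed = 1;	/* models scheduling the deferred destroy */
	return 1;
}
```

Two rxqs registering the same allocator end up sharing one entry with refcnt 2; unregistering the first rxq leaves the allocator alive, and only the second unregister triggers destruction.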
From patchwork Sun Jun 30 17:23:44 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 168185
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v5 net-next 2/6] net: ethernet: ti: davinci_cpdma: add dma
 mapped submit
Date: Sun, 30 Jun 2019 20:23:44 +0300
Message-Id: <20190630172348.5692-3-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190630172348.5692-1-ivan.khoronzhuk@linaro.org>
References: <20190630172348.5692-1-ivan.khoronzhuk@linaro.org>

When an already DMA-mapped packet needs to be sent, as with the XDP page
pool, the "mapped" submit can be used. This patch adds a DMA-mapped submit
based on the regular one.
Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/davinci_cpdma.c | 89 ++++++++++++++++++++++---
 drivers/net/ethernet/ti/davinci_cpdma.h |  4 ++
 2 files changed, 83 insertions(+), 10 deletions(-)

-- 
2.17.1

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 5cf1758d425b..8da46394c0e7 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -139,6 +139,7 @@ struct submit_info {
 	int directed;
 	void *token;
 	void *data;
+	int flags;
 	int len;
 };
 
@@ -184,6 +185,8 @@ static struct cpdma_control_info controls[] = {
 			 (directed << CPDMA_TO_PORT_SHIFT));	\
 } while (0)
 
+#define CPDMA_DMA_EXT_MAP		BIT(16)
+
 static void cpdma_desc_pool_destroy(struct cpdma_ctlr *ctlr)
 {
 	struct cpdma_desc_pool *pool = ctlr->pool;
@@ -1015,6 +1018,7 @@ static int cpdma_chan_submit_si(struct submit_info *si)
 	struct cpdma_chan *chan = si->chan;
 	struct cpdma_ctlr *ctlr = chan->ctlr;
 	int len = si->len;
+	int swlen = len;
 	struct cpdma_desc __iomem *desc;
 	dma_addr_t buffer;
 	u32 mode;
@@ -1036,16 +1040,22 @@ static int cpdma_chan_submit_si(struct submit_info *si)
 		chan->stats.runt_transmit_buff++;
 	}
 
-	buffer = dma_map_single(ctlr->dev, si->data, len, chan->dir);
-	ret = dma_mapping_error(ctlr->dev, buffer);
-	if (ret) {
-		cpdma_desc_free(ctlr->pool, desc, 1);
-		return -EINVAL;
-	}
-
 	mode = CPDMA_DESC_OWNER | CPDMA_DESC_SOP | CPDMA_DESC_EOP;
 	cpdma_desc_to_port(chan, mode, si->directed);
 
+	if (si->flags & CPDMA_DMA_EXT_MAP) {
+		buffer = (u32)si->data;
+		dma_sync_single_for_device(ctlr->dev, buffer, len, chan->dir);
+		swlen |= CPDMA_DMA_EXT_MAP;
+	} else {
+		buffer = dma_map_single(ctlr->dev, si->data, len, chan->dir);
+		ret = dma_mapping_error(ctlr->dev, buffer);
+		if (ret) {
+			cpdma_desc_free(ctlr->pool, desc, 1);
+			return -EINVAL;
+		}
+	}
+
 	/* Relaxed IO accessors can be used here as there is read barrier
 	 * at the end of write sequence.
 	 */
@@ -1055,7 +1065,7 @@ static int cpdma_chan_submit_si(struct submit_info *si)
 	writel_relaxed(mode | len, &desc->hw_mode);
 	writel_relaxed((uintptr_t)si->token, &desc->sw_token);
 	writel_relaxed(buffer, &desc->sw_buffer);
-	writel_relaxed(len, &desc->sw_len);
+	writel_relaxed(swlen, &desc->sw_len);
 	desc_read(desc, sw_len);
 
 	__cpdma_chan_submit(chan, desc);
@@ -1079,6 +1089,32 @@ int cpdma_chan_idle_submit(struct cpdma_chan *chan, void *token, void *data,
 	si.data = data;
 	si.len = len;
 	si.directed = directed;
+	si.flags = 0;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	if (chan->state == CPDMA_STATE_TEARDOWN) {
+		spin_unlock_irqrestore(&chan->lock, flags);
+		return -EINVAL;
+	}
+
+	ret = cpdma_chan_submit_si(&si);
+	spin_unlock_irqrestore(&chan->lock, flags);
+	return ret;
+}
+
+int cpdma_chan_idle_submit_mapped(struct cpdma_chan *chan, void *token,
+				  dma_addr_t data, int len, int directed)
+{
+	struct submit_info si;
+	unsigned long flags;
+	int ret;
+
+	si.chan = chan;
+	si.token = token;
+	si.data = (void *)(u32)data;
+	si.len = len;
+	si.directed = directed;
+	si.flags = CPDMA_DMA_EXT_MAP;
 
 	spin_lock_irqsave(&chan->lock, flags);
 	if (chan->state == CPDMA_STATE_TEARDOWN) {
@@ -1103,6 +1139,32 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 	si.data = data;
 	si.len = len;
 	si.directed = directed;
+	si.flags = 0;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	if (chan->state != CPDMA_STATE_ACTIVE) {
+		spin_unlock_irqrestore(&chan->lock, flags);
+		return -EINVAL;
+	}
+
+	ret = cpdma_chan_submit_si(&si);
+	spin_unlock_irqrestore(&chan->lock, flags);
+	return ret;
+}
+
+int cpdma_chan_submit_mapped(struct cpdma_chan *chan, void *token,
+			     dma_addr_t data, int len, int directed)
+{
+	struct submit_info si;
+	unsigned long flags;
+	int ret;
+
+	si.chan = chan;
+	si.token = token;
+	si.data = (void *)(u32)data;
+	si.len = len;
+	si.directed = directed;
+	si.flags = CPDMA_DMA_EXT_MAP;
 
 	spin_lock_irqsave(&chan->lock, flags);
 	if (chan->state != CPDMA_STATE_ACTIVE) {
@@ -1140,10 +1202,17 @@ static void __cpdma_chan_free(struct cpdma_chan *chan,
 	uintptr_t			token;
 
 	token = desc_read(desc, sw_token);
-	buff_dma = desc_read(desc, sw_buffer);
 	origlen = desc_read(desc, sw_len);
-	dma_unmap_single(ctlr->dev, buff_dma, origlen, chan->dir);
 
+	buff_dma = desc_read(desc, sw_buffer);
+	if (origlen & CPDMA_DMA_EXT_MAP) {
+		origlen &= ~CPDMA_DMA_EXT_MAP;
+		dma_sync_single_for_cpu(ctlr->dev, buff_dma, origlen,
+					chan->dir);
+	} else {
+		dma_unmap_single(ctlr->dev, buff_dma, origlen, chan->dir);
+	}
+
 	cpdma_desc_free(pool, desc, 1);
 	(*chan->handler)((void *)token, outlen, status);
 }
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index 9343c8c73c1b..0271a20c2e09 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -77,8 +77,12 @@ int cpdma_chan_stop(struct cpdma_chan *chan);
 int cpdma_chan_get_stats(struct cpdma_chan *chan,
 			 struct cpdma_chan_stats *stats);
+int cpdma_chan_submit_mapped(struct cpdma_chan *chan, void *token,
+			     dma_addr_t data, int len, int directed);
 int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 		      int len, int directed);
+int cpdma_chan_idle_submit_mapped(struct cpdma_chan *chan, void *token,
+				  dma_addr_t data, int len, int directed);
 int cpdma_chan_idle_submit(struct cpdma_chan *chan, void *token, void *data,
 			   int len, int directed);
 int cpdma_chan_process(struct cpdma_chan *chan, int quota);
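The trick this patch uses can be sketched in plain C: because the CPDMA length only needs the low bits of the descriptor's `sw_len` word, bit 16 (`CPDMA_DMA_EXT_MAP`) is free to carry a software flag, so the completion path can tell an externally mapped buffer (sync only) from one the driver mapped itself (unmap). A minimal userspace model, reusing the flag value from the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Bit 16 is free in the descriptor's sw_len word: the hardware length
 * occupies only the low bits, so a software flag can be folded into
 * sw_len without disturbing the stored length. */
#define CPDMA_DMA_EXT_MAP	(1u << 16)

/* Submit side: for externally mapped buffers (e.g. XDP page pool),
 * tag the length word instead of keeping a separate flag field. */
static uint32_t encode_sw_len(uint32_t len, int ext_mapped)
{
	uint32_t swlen = len;

	if (ext_mapped)
		swlen |= CPDMA_DMA_EXT_MAP;
	return swlen;
}

/* Completion side (mirrors __cpdma_chan_free): recover the flag and the
 * original length, which decides sync-for-cpu vs. unmap. */
static uint32_t decode_sw_len(uint32_t swlen, int *ext_mapped)
{
	*ext_mapped = !!(swlen & CPDMA_DMA_EXT_MAP);
	return swlen & ~CPDMA_DMA_EXT_MAP;
}
```

The round trip is lossless for any length that fits below bit 16, which Ethernet frame sizes on this hardware do.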
From patchwork Sun Jun 30 17:23:45 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 168186
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v5 net-next 3/6] net: ethernet: ti: davinci_cpdma: return
 handler status
Date: Sun, 30 Jun 2019 20:23:45 +0300
Message-Id: <20190630172348.5692-4-ivan.khoronzhuk@linaro.org>
In-Reply-To: <20190630172348.5692-1-ivan.khoronzhuk@linaro.org>
References: <20190630172348.5692-1-ivan.khoronzhuk@linaro.org>

This change is needed to return the flush status of the rx handler, so that
redirected XDP frames can be flushed after the channel packets have been
processed. Do it as a separate patch for simplicity.
Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/cpsw.c          | 23 +++++++++++++---------
 drivers/net/ethernet/ti/cpsw_ethtool.c  |  2 +-
 drivers/net/ethernet/ti/cpsw_priv.h     |  2 +-
 drivers/net/ethernet/ti/davinci_cpdma.c | 26 ++++++++++++++-----------
 drivers/net/ethernet/ti/davinci_cpdma.h |  4 ++--
 drivers/net/ethernet/ti/davinci_emac.c  | 17 ++++++++++------
 6 files changed, 44 insertions(+), 30 deletions(-)

-- 
2.17.1

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 32b7b3b74a6b..4f72dbb5a428 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -337,7 +337,7 @@ void cpsw_intr_disable(struct cpsw_common *cpsw)
 	return;
 }
 
-void cpsw_tx_handler(void *token, int len, int status)
+int cpsw_tx_handler(void *token, int len, int status)
 {
 	struct netdev_queue	*txq;
 	struct sk_buff		*skb = token;
@@ -355,6 +355,7 @@ void cpsw_tx_handler(void *token, int len, int status)
 	ndev->stats.tx_packets++;
 	ndev->stats.tx_bytes += len;
 	dev_kfree_skb_any(skb);
+	return 0;
 }
 
 static void cpsw_rx_vlan_encap(struct sk_buff *skb)
@@ -400,7 +401,7 @@ static void cpsw_rx_vlan_encap(struct sk_buff *skb)
 	}
 }
 
-static void cpsw_rx_handler(void *token, int len, int status)
+static int cpsw_rx_handler(void *token, int len, int status)
 {
 	struct cpdma_chan	*ch;
 	struct sk_buff		*skb = token;
@@ -434,7 +435,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 
 		/* the interface is going down, skbs are purged */
 		dev_kfree_skb_any(skb);
-		return;
+		return 0;
 	}
 
 	new_skb = netdev_alloc_skb_ip_align(ndev, cpsw->rx_packet_max);
@@ -464,6 +465,8 @@ static void cpsw_rx_handler(void *token, int len, int status)
 		WARN_ON(ret == -ENOMEM);
 		dev_kfree_skb_any(new_skb);
 	}
+
+	return 0;
 }
 
 void cpsw_split_res(struct cpsw_common *cpsw)
@@ -588,6 +591,7 @@ static int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int budget)
 	u32			ch_map;
 	int			num_tx, cur_budget, ch;
 	struct cpsw_common	*cpsw = napi_to_cpsw(napi_tx);
+	int			flags;
 	struct cpsw_vector	*txv;
 
 	/* process every unprocessed channel */
@@ -602,7 +606,7 @@ static int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int budget)
 		else
 			cur_budget = txv->budget;
 
-		num_tx += cpdma_chan_process(txv->ch, cur_budget);
+		num_tx += cpdma_chan_process(txv->ch, cur_budget, &flags);
 		if (num_tx >= budget)
 			break;
 	}
@@ -618,9 +622,9 @@ static int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int budget)
 static int cpsw_tx_poll(struct napi_struct *napi_tx, int budget)
 {
 	struct cpsw_common *cpsw = napi_to_cpsw(napi_tx);
-	int num_tx;
+	int num_tx, flags;
 
-	num_tx = cpdma_chan_process(cpsw->txv[0].ch, budget);
+	num_tx = cpdma_chan_process(cpsw->txv[0].ch, budget, &flags);
 	if (num_tx < budget) {
 		napi_complete(napi_tx);
 		writel(0xff, &cpsw->wr_regs->tx_en);
@@ -638,6 +642,7 @@ static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget)
 	u32			ch_map;
 	int			num_rx, cur_budget, ch;
 	struct cpsw_common	*cpsw = napi_to_cpsw(napi_rx);
+	int			flags;
 	struct cpsw_vector	*rxv;
 
 	/* process every unprocessed channel */
@@ -652,7 +657,7 @@ static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget)
 		else
 			cur_budget = rxv->budget;
 
-		num_rx += cpdma_chan_process(rxv->ch, cur_budget);
+		num_rx += cpdma_chan_process(rxv->ch, cur_budget, &flags);
 		if (num_rx >= budget)
 			break;
 	}
@@ -668,9 +673,9 @@ static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget)
 static int cpsw_rx_poll(struct napi_struct *napi_rx, int budget)
 {
 	struct cpsw_common *cpsw = napi_to_cpsw(napi_rx);
-	int num_rx;
+	int num_rx, flags;
 
-	num_rx = cpdma_chan_process(cpsw->rxv[0].ch, budget);
+	num_rx = cpdma_chan_process(cpsw->rxv[0].ch, budget, &flags);
 	if (num_rx < budget) {
 		napi_complete_done(napi_rx, num_rx);
 		writel(0xff, &cpsw->wr_regs->rx_en);
diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
index f60dc1dfc443..7c19eebbabcc 100644
--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
+++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
@@ -532,8 +532,8 @@ static int cpsw_update_channels_res(struct cpsw_priv *priv, int ch_num, int rx,
 				    cpdma_handler_fn rx_handler)
 {
 	struct cpsw_common *cpsw = priv->cpsw;
-	void (*handler)(void *, int, int);
 	struct netdev_queue *queue;
+	cpdma_handler_fn handler;
 	struct cpsw_vector *vec;
 	int ret, *ch, vch;
 
diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h
index 04795b97ee71..2ecb3af59fe9 100644
--- a/drivers/net/ethernet/ti/cpsw_priv.h
+++ b/drivers/net/ethernet/ti/cpsw_priv.h
@@ -390,7 +390,7 @@ void cpsw_split_res(struct cpsw_common *cpsw);
 int cpsw_fill_rx_channels(struct cpsw_priv *priv);
 void cpsw_intr_enable(struct cpsw_common *cpsw);
 void cpsw_intr_disable(struct cpsw_common *cpsw);
-void cpsw_tx_handler(void *token, int len, int status);
+int cpsw_tx_handler(void *token, int len, int status);
 
 /* ethtool */
 u32 cpsw_get_msglevel(struct net_device *ndev);
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 8da46394c0e7..ea25b23c8058 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -1191,15 +1191,16 @@ bool cpdma_check_free_tx_desc(struct cpdma_chan *chan)
 	return free_tx_desc;
 }
 
-static void __cpdma_chan_free(struct cpdma_chan *chan,
-			      struct cpdma_desc __iomem *desc,
-			      int outlen, int status)
+static int __cpdma_chan_free(struct cpdma_chan *chan,
+			     struct cpdma_desc __iomem *desc, int outlen,
+			     int status)
 {
 	struct cpdma_ctlr		*ctlr = chan->ctlr;
 	struct cpdma_desc_pool		*pool = ctlr->pool;
 	dma_addr_t			buff_dma;
 	int				origlen;
 	uintptr_t			token;
+	int				ret;
 
 	token = desc_read(desc, sw_token);
 	origlen = desc_read(desc, sw_len);
@@ -1214,14 +1215,16 @@ static void __cpdma_chan_free(struct cpdma_chan *chan,
 	}
 
 	cpdma_desc_free(pool, desc, 1);
-	(*chan->handler)((void *)token, outlen, status);
+	ret = (*chan->handler)((void *)token, outlen, status);
+
+	return ret;
 }
 
 static int __cpdma_chan_process(struct cpdma_chan *chan)
 {
+	int			status, outlen, ret;
 	struct cpdma_ctlr	*ctlr = chan->ctlr;
 	struct cpdma_desc __iomem	*desc;
-	int			status, outlen;
 	int			cb_status = 0;
 	struct cpdma_desc_pool	*pool = ctlr->pool;
 	dma_addr_t		desc_dma;
@@ -1232,7 +1235,7 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
 	desc = chan->head;
 	if (!desc) {
 		chan->stats.empty_dequeue++;
-		status = -ENOENT;
+		ret = -ENOENT;
 		goto unlock_ret;
 	}
 	desc_dma = desc_phys(pool, desc);
@@ -1241,7 +1244,7 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
 	outlen	= status & 0x7ff;
 	if (status & CPDMA_DESC_OWNER) {
 		chan->stats.busy_dequeue++;
-		status = -EBUSY;
+		ret = -EBUSY;
 		goto unlock_ret;
 	}
 
@@ -1267,15 +1270,15 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
 	else
 		cb_status = status;
 
-	__cpdma_chan_free(chan, desc, outlen, cb_status);
-	return status;
+	ret = __cpdma_chan_free(chan, desc, outlen, cb_status);
+	return ret;
 
 unlock_ret:
 	spin_unlock_irqrestore(&chan->lock, flags);
-	return status;
+	return ret;
 }
 
-int cpdma_chan_process(struct cpdma_chan *chan, int quota)
+int cpdma_chan_process(struct cpdma_chan *chan, int quota, int *flags)
 {
 	int used = 0, ret = 0;
 
@@ -1286,6 +1289,7 @@ int cpdma_chan_process(struct cpdma_chan *chan, int quota)
 		ret = __cpdma_chan_process(chan);
 		if (ret < 0)
 			break;
+		*flags |= ret;
 		used++;
 	}
 	return used;
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index 0271a20c2e09..aafa8889c789 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -61,7 +61,7 @@ struct cpdma_chan_stats {
 struct cpdma_ctlr;
 struct cpdma_chan;
 
-typedef void (*cpdma_handler_fn)(void *token, int len, int status);
+typedef int (*cpdma_handler_fn)(void *token, int len, int status);
 
 struct cpdma_ctlr *cpdma_ctlr_create(struct cpdma_params *params);
 int cpdma_ctlr_destroy(struct cpdma_ctlr *ctlr);
@@ -85,7 +85,7 @@ int cpdma_chan_idle_submit_mapped(struct cpdma_chan *chan, void *token,
 				  dma_addr_t data, int len, int directed);
 int cpdma_chan_idle_submit(struct cpdma_chan *chan, void *token, void *data,
 			   int len, int directed);
-int cpdma_chan_process(struct cpdma_chan *chan, int quota);
+int cpdma_chan_process(struct cpdma_chan *chan, int quota, int *flags);
 
 int cpdma_ctlr_int_ctrl(struct cpdma_ctlr *ctlr, bool enable);
 void cpdma_ctlr_eoi(struct cpdma_ctlr *ctlr, u32 value);
diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
index 5f4ece0d5a73..06756471d586 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -860,7 +860,7 @@ static struct sk_buff *emac_rx_alloc(struct emac_priv *priv)
 	return skb;
 }
 
-static void emac_rx_handler(void *token, int len, int status)
+static int emac_rx_handler(void *token, int len, int status)
 {
 	struct sk_buff		*skb = token;
 	struct net_device	*ndev = skb->dev;
@@ -871,7 +871,7 @@ static void emac_rx_handler(void *token, int len, int status)
 	/* free and bail if we are shutting down */
 	if (unlikely(!netif_running(ndev))) {
 		dev_kfree_skb_any(skb);
-		return;
+		return 0;
 	}
 
 	/* recycle on receive error */
@@ -892,7 +892,7 @@ static void emac_rx_handler(void *token, int len, int status)
 	if (!skb) {
 		if (netif_msg_rx_err(priv) && net_ratelimit())
 			dev_err(emac_dev, "failed rx buffer alloc\n");
-		return;
+		return 0;
 	}
 
 recycle:
@@ -902,9 +902,11 @@ static void emac_rx_handler(void *token, int len, int status)
 	WARN_ON(ret == -ENOMEM);
 	if (unlikely(ret < 0))
 		dev_kfree_skb_any(skb);
+
+	return 0;
 }
 
-static void emac_tx_handler(void *token, int len, int status)
+static int emac_tx_handler(void *token, int len, int status)
 {
 	struct sk_buff		*skb = token;
 	struct net_device	*ndev = skb->dev;
@@ -917,6 +919,7 @@ static void emac_tx_handler(void *token, int len, int status)
 	ndev->stats.tx_packets++;
 	ndev->stats.tx_bytes += len;
 	dev_kfree_skb_any(skb);
+	return 0;
 }
 
 /**
@@ -1227,6 +1230,7 @@ static int emac_poll(struct napi_struct *napi, int budget)
 	struct device *emac_dev = &ndev->dev;
 	u32 status = 0;
 	u32 num_tx_pkts = 0, num_rx_pkts = 0;
+	int flags;
 
 	/* Check interrupt vectors and call packet processing */
 	status = emac_read(EMAC_MACINVECTOR);
@@ -1238,7 +1242,8 @@ static int emac_poll(struct napi_struct *napi, int budget)
 
 	if (status & mask) {
 		num_tx_pkts = cpdma_chan_process(priv->txchan,
-						 EMAC_DEF_TX_MAX_SERVICE);
+						 EMAC_DEF_TX_MAX_SERVICE,
+						 &flags);
 	} /* TX processing */
 
 	mask = EMAC_DM644X_MAC_IN_VECTOR_RX_INT_VEC;
@@ -1247,7 +1252,7 @@ static int emac_poll(struct napi_struct *napi, int budget)
 		mask = EMAC_DM646X_MAC_IN_VECTOR_RX_INT_VEC;
 
 	if (status & mask) {
-		num_rx_pkts = cpdma_chan_process(priv->rxchan, budget);
+		num_rx_pkts = cpdma_chan_process(priv->rxchan, budget, &flags);
 	} /* RX processing */
 
 	mask = EMAC_DM644X_MAC_IN_VECTOR_HOST_INT;
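The plumbing added above can be modeled in a few lines of self-contained C: handlers now return an int instead of void, and `cpdma_chan_process()` ORs each non-negative handler return into a caller-supplied flags word, so the NAPI poll loop can act on the accumulated status (in the later patches of this series, flushing redirected XDP frames) once per poll rather than per packet. `FLUSH_NEEDED` below is a hypothetical flag value for illustration, and the descriptor queue is replaced by a plain token array:

```c
#include <assert.h>

/* Stand-in for cpdma_handler_fn after this patch: handlers return a
 * status instead of void. An rx handler would return a nonzero flag
 * when, e.g., XDP frames were redirected and a flush is needed. */
typedef int (*handler_fn)(void *token, int len, int status);

#define FLUSH_NEEDED 1	/* hypothetical flag value, for illustration */

/* Models the cpdma_chan_process() change: process up to 'quota'
 * completions, OR every non-negative handler return into *flags, and
 * return the number of packets processed (as before). */
static int chan_process(handler_fn handler, void **tokens, int n, int quota,
			int *flags)
{
	int used = 0;

	while (used < n && used < quota) {
		int ret = handler(tokens[used], 0, 0);

		if (ret < 0)	/* mirrors the -ENOENT/-EBUSY break */
			break;
		*flags |= ret;
		used++;
	}
	return used;
}
```

Returning the count unchanged while aggregating status through a pointer is what lets the existing `num_rx < budget` NAPI accounting in the poll functions stay untouched.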