From patchwork Mon Aug 15 15:16:18 2016
X-Patchwork-Submitter: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
X-Patchwork-Id: 73920
From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
To: davem@davemloft.net, netdev@vger.kernel.org, mugunthanvnm@ti.com,
	grygorii.strashko@ti.com
Cc: linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org, nsekhar@ti.com,
	Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Subject: [PATCH v2 1/4] net: ethernet: ti: davinci_cpdma: split descs num
	between all channels
Date: Mon, 15 Aug 2016 18:16:18 +0300
Message-Id: <1471274181-780-2-git-send-email-ivan.khoronzhuk@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1471274181-780-1-git-send-email-ivan.khoronzhuk@linaro.org>
References: <1471274181-780-1-git-send-email-ivan.khoronzhuk@linaro.org>
Currently the tx channels share the same pool of descriptors, so one
channel can block another if it empties the pool, whereas it is the
shaper that should decide which channel is allowed to send packets.
To avoid such an impact of one channel on another, let every channel
have its own piece of the pool.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c          | 61 ++++++++++++++++++++-------------
 drivers/net/ethernet/ti/davinci_cpdma.c | 47 +++++++++++++++++++++++--
 drivers/net/ethernet/ti/davinci_cpdma.h |  2 +-
 3 files changed, 84 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index b4d3b41..a4c1538 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1212,6 +1212,40 @@ static void cpsw_init_host_port(struct cpsw_priv *priv)
 	}
 }
 
+static int cpsw_fill_rx_channels(struct net_device *ndev)
+{
+	struct cpsw_priv *priv = netdev_priv(ndev);
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct sk_buff *skb;
+	int ch_buf_num;
+	int i, ret;
+
+	ch_buf_num = cpdma_chan_get_rx_buf_num(cpsw->rxch);
+	for (i = 0; i < ch_buf_num; i++) {
+		skb = __netdev_alloc_skb_ip_align(ndev,
+						  cpsw->rx_packet_max,
+						  GFP_KERNEL);
+		if (!skb) {
+			cpsw_err(priv, ifup, "cannot allocate skb\n");
+			return -ENOMEM;
+		}
+
+		ret = cpdma_chan_submit(cpsw->rxch, skb, skb->data,
+					skb_tailroom(skb), 0);
+		if (ret < 0) {
+			cpsw_err(priv, ifup,
+				 "cannot submit skb to rx channel, error %d\n",
+				 ret);
+			kfree_skb(skb);
+			return ret;
+		}
+	}
+
+	cpsw_info(priv, ifup, "submitted %d rx descriptors\n", ch_buf_num);
+
+	return ch_buf_num;
+}
+
 static void cpsw_slave_stop(struct cpsw_slave *slave, struct cpsw_common *cpsw)
 {
 	u32 slave_port;
@@ -1232,7 +1266,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
 {
 	struct cpsw_priv *priv = netdev_priv(ndev);
 	struct cpsw_common *cpsw = priv->cpsw;
-	int i, ret;
+	int ret;
 	u32 reg;
 
 	ret = pm_runtime_get_sync(cpsw->dev);
@@ -1264,8 +1298,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
 			   ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
 
 	if (!cpsw_common_res_usage_state(cpsw)) {
-		int buf_num;
-
 		/* setup tx dma to fixed prio and zero offset */
 		cpdma_control_set(cpsw->dma, CPDMA_TX_PRIO_FIXED, 1);
 		cpdma_control_set(cpsw->dma, CPDMA_RX_BUFFER_OFFSET, 0);
@@ -1292,26 +1324,9 @@ static int cpsw_ndo_open(struct net_device *ndev)
 		enable_irq(cpsw->irqs_table[0]);
 	}
 
-	buf_num = cpdma_chan_get_rx_buf_num(cpsw->dma);
-	for (i = 0; i < buf_num; i++) {
-		struct sk_buff *skb;
-
-		ret = -ENOMEM;
-		skb = __netdev_alloc_skb_ip_align(priv->ndev,
-						  cpsw->rx_packet_max, GFP_KERNEL);
-		if (!skb)
-			goto err_cleanup;
-		ret = cpdma_chan_submit(cpsw->rxch, skb, skb->data,
-					skb_tailroom(skb), 0);
-		if (ret < 0) {
-			kfree_skb(skb);
-			goto err_cleanup;
-		}
-	}
-	/* continue even if we didn't manage to submit all
-	 * receive descs
-	 */
-	cpsw_info(priv, ifup, "submitted %d rx descriptors\n", i);
+	ret = cpsw_fill_rx_channels(ndev);
+	if (ret < 0)
+		goto err_cleanup;
 
 	if (cpts_register(cpsw->dev, cpsw->cpts,
 			  cpsw->data.cpts_clock_mult,
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index cf72b33..ec560ab 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -104,6 +104,7 @@ struct cpdma_ctlr {
 	struct cpdma_desc_pool *pool;
 	spinlock_t lock;
 	struct cpdma_chan *channels[2 * CPDMA_MAX_CHANNELS];
+	int chan_num;
 };
 
 struct cpdma_chan {
@@ -256,6 +257,7 @@ struct cpdma_ctlr *cpdma_ctlr_create(struct cpdma_params *params)
 	ctlr->state = CPDMA_STATE_IDLE;
 	ctlr->params = *params;
 	ctlr->dev = params->dev;
+	ctlr->chan_num = 0;
 	spin_lock_init(&ctlr->lock);
 
 	ctlr->pool = cpdma_desc_pool_create(ctlr->dev,
@@ -399,6 +401,32 @@ void cpdma_ctlr_eoi(struct cpdma_ctlr *ctlr, u32 value)
 }
 EXPORT_SYMBOL_GPL(cpdma_ctlr_eoi);
 
+/**
+ * cpdma_chan_split_pool - Splits ctlr pool between all channels.
+ * Has to be called under ctlr lock
+ *
+ */
+static void cpdma_chan_split_pool(struct cpdma_ctlr *ctlr)
+{
+	struct cpdma_desc_pool *pool = ctlr->pool;
+	struct cpdma_chan *chan;
+	int ch_desc_num;
+	int i;
+
+	if (!ctlr->chan_num)
+		return;
+
+	/* calculate average size of pool slice */
+	ch_desc_num = pool->num_desc / ctlr->chan_num;
+
+	/* split ctlr pool */
+	for (i = 0; i < ARRAY_SIZE(ctlr->channels); i++) {
+		chan = ctlr->channels[i];
+		if (chan)
+			chan->desc_num = ch_desc_num;
+	}
+}
+
 struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr *ctlr, int chan_num,
 				     cpdma_handler_fn handler)
 {
@@ -447,14 +475,25 @@ struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr *ctlr, int chan_num,
 	spin_lock_init(&chan->lock);
 
 	ctlr->channels[chan_num] = chan;
+	ctlr->chan_num++;
+
+	cpdma_chan_split_pool(ctlr);
+
 	spin_unlock_irqrestore(&ctlr->lock, flags);
 	return chan;
 }
 EXPORT_SYMBOL_GPL(cpdma_chan_create);
 
-int cpdma_chan_get_rx_buf_num(struct cpdma_ctlr *ctlr)
+int cpdma_chan_get_rx_buf_num(struct cpdma_chan *chan)
 {
-	return ctlr->pool->num_desc / 2;
+	unsigned long flags;
+	int desc_num;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	desc_num = chan->desc_num;
+	spin_unlock_irqrestore(&chan->lock, flags);
+
+	return desc_num;
 }
 EXPORT_SYMBOL_GPL(cpdma_chan_get_rx_buf_num);
 
@@ -471,6 +510,10 @@ int cpdma_chan_destroy(struct cpdma_chan *chan)
 	if (chan->state != CPDMA_STATE_IDLE)
 		cpdma_chan_stop(chan);
 	ctlr->channels[chan->chan_num] = NULL;
+	ctlr->chan_num--;
+
+	cpdma_chan_split_pool(ctlr);
+
 	spin_unlock_irqrestore(&ctlr->lock, flags);
 	return 0;
 }
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index 4b46cd6..9119b43 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -80,7 +80,7 @@ int cpdma_ctlr_stop(struct cpdma_ctlr *ctlr);
 
 struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr *ctlr, int chan_num,
 				     cpdma_handler_fn handler);
-int cpdma_chan_get_rx_buf_num(struct cpdma_ctlr *ctlr);
+int cpdma_chan_get_rx_buf_num(struct cpdma_chan *chan);
 int cpdma_chan_destroy(struct cpdma_chan *chan);
 int cpdma_chan_start(struct cpdma_chan *chan);
 int cpdma_chan_stop(struct cpdma_chan *chan);
-- 
1.9.1
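
[Note: the following is not part of the patch.] The splitting policy above
is just the integer average of the pool over the active channels,
recomputed whenever a channel is created or destroyed. A minimal
standalone sketch that models cpdma_chan_split_pool() — the pool size,
channel count, and all names here are made up for illustration:

/* Standalone model of the per-channel pool split; sizes are hypothetical. */
#include <stdio.h>

#define MAX_CHANNELS	8	/* hypothetical channel slots */
#define POOL_NUM_DESC	256	/* hypothetical total descriptor pool */

struct chan {
	int in_use;
	int desc_num;		/* this channel's slice of the pool */
};

/* mirrors cpdma_chan_split_pool(): every active channel gets the
 * integer average of the pool, so the remainder stays unassigned
 */
static void split_pool(struct chan *channels, int chan_num)
{
	int ch_desc_num, i;

	if (!chan_num)
		return;

	ch_desc_num = POOL_NUM_DESC / chan_num;
	for (i = 0; i < MAX_CHANNELS; i++)
		if (channels[i].in_use)
			channels[i].desc_num = ch_desc_num;
}

int main(void)
{
	struct chan channels[MAX_CHANNELS] = { { 0, 0 } };
	int i, chan_num = 0;

	/* open three channels, re-splitting after each one, the way
	 * cpdma_chan_create() re-splits under the ctlr lock
	 */
	for (i = 0; i < 3; i++) {
		channels[i].in_use = 1;
		split_pool(channels, ++chan_num);
	}

	for (i = 0; i < MAX_CHANNELS; i++)
		if (channels[i].in_use)
			printf("chan %d: %d descs\n", i, channels[i].desc_num);

	/* prints 85 descs for each of the 3 channels;
	 * 256 % 3 = 1 descriptor is left unassigned
	 */
	return 0;
}

With 256 descriptors and 3 channels each channel is capped at 85
descriptors, which is the per-channel budget that the reworked
cpdma_chan_get_rx_buf_num() reports, instead of the old pool-wide
num_desc / 2.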