From patchwork Tue Nov 29 15:00:47 2016
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 84840
From: Ivan Khoronzhuk
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, mugunthanvnm@ti.com, grygorii.strashko@ti.com
Cc: linux-omap@vger.kernel.org, Ivan Khoronzhuk
Subject: [PATCH 1/5] net: ethernet: ti: davinci_cpdma: add weight function for channels
Date: Tue, 29 Nov 2016 17:00:47 +0200
Message-Id: <1480431651-6081-2-git-send-email-ivan.khoronzhuk@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1480431651-6081-1-git-send-email-ivan.khoronzhuk@linaro.org>
References: <1480431651-6081-1-git-send-email-ivan.khoronzhuk@linaro.org>
X-Mailing-List: netdev@vger.kernel.org

The weight of a channel is needed to split the descriptor pool between
channels. The weight can depend on the maximum rate of the channel, the
maximum rate of the interface, or other factors. The channel weight is
given in percent and is independent for rx and tx channels.

Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/davinci_cpdma.c | 124 +++++++++++++++++++++++++++++---
 drivers/net/ethernet/ti/davinci_cpdma.h |   1 +
 2 files changed, 115 insertions(+), 10 deletions(-)

-- 
2.7.4

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 56708a7..87456a9 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -122,6 +122,7 @@ struct cpdma_chan {
 	struct cpdma_chan_stats		stats;
 	/* offsets into dmaregs */
 	int	int_set, int_clear, td;
+	int				weight;
 };
 
 struct cpdma_control_info {
@@ -474,29 +475,131 @@ u32 cpdma_ctrl_txchs_state(struct cpdma_ctlr *ctlr)
 }
 EXPORT_SYMBOL_GPL(cpdma_ctrl_txchs_state);
 
+static void cpdma_chan_set_descs(struct cpdma_ctlr *ctlr,
+				 int rx, int desc_num,
+				 int per_ch_desc)
+{
+	struct cpdma_chan *chan, *most_chan = NULL;
+	int desc_cnt = desc_num;
+	int most_dnum = 0;
+	int min, max, i;
+
+	if (!desc_num)
+		return;
+
+	if (rx) {
+		min = rx_chan_num(0);
+		max = rx_chan_num(CPDMA_MAX_CHANNELS);
+	} else {
+		min = tx_chan_num(0);
+		max = tx_chan_num(CPDMA_MAX_CHANNELS);
+	}
+
+	for (i = min; i < max; i++) {
+		chan = ctlr->channels[i];
+		if (!chan)
+			continue;
+
+		if (chan->weight)
+			chan->desc_num = (chan->weight * desc_num) / 100;
+		else
+			chan->desc_num = per_ch_desc;
+
+		desc_cnt -= chan->desc_num;
+
+		if (most_dnum < chan->desc_num) {
+			most_dnum = chan->desc_num;
+			most_chan = chan;
+		}
+	}
+	/* give the leftover descriptors to the most loaded channel */
+	most_chan->desc_num += desc_cnt;
+}
+
 /**
  * cpdma_chan_split_pool - Splits ctrl pool between all channels.
  * Has to be called under ctlr lock
  */
-static void cpdma_chan_split_pool(struct cpdma_ctlr *ctlr)
+static int cpdma_chan_split_pool(struct cpdma_ctlr *ctlr)
 {
+	int tx_per_ch_desc = 0, rx_per_ch_desc = 0;
 	struct cpdma_desc_pool *pool = ctlr->pool;
+	int free_rx_num = 0, free_tx_num = 0;
+	int rx_weight = 0, tx_weight = 0;
+	int tx_desc_num, rx_desc_num;
 	struct cpdma_chan *chan;
-	int ch_desc_num;
-	int i;
+	int i, tx_num = 0;
 
 	if (!ctlr->chan_num)
-		return;
-
-	/* calculate average size of pool slice */
-	ch_desc_num = pool->num_desc / ctlr->chan_num;
+		return 0;
 
-	/* split ctlr pool */
 	for (i = 0; i < ARRAY_SIZE(ctlr->channels); i++) {
 		chan = ctlr->channels[i];
-		if (chan)
-			chan->desc_num = ch_desc_num;
+		if (!chan)
+			continue;
+
+		if (is_rx_chan(chan)) {
+			if (!chan->weight)
+				free_rx_num++;
+			rx_weight += chan->weight;
+		} else {
+			if (!chan->weight)
+				free_tx_num++;
+			tx_weight += chan->weight;
+			tx_num++;
+		}
+	}
+
+	if (rx_weight > 100 || tx_weight > 100)
+		return -EINVAL;
+
+	tx_desc_num = (tx_num * pool->num_desc) / ctlr->chan_num;
+	rx_desc_num = pool->num_desc - tx_desc_num;
+
+	if (free_tx_num) {
+		tx_per_ch_desc = tx_desc_num - (tx_weight * tx_desc_num) / 100;
+		tx_per_ch_desc /= free_tx_num;
+	}
+	if (free_rx_num) {
+		rx_per_ch_desc = rx_desc_num - (rx_weight * rx_desc_num) / 100;
+		rx_per_ch_desc /= free_rx_num;
 	}
+
+	cpdma_chan_set_descs(ctlr, 0, tx_desc_num, tx_per_ch_desc);
+	cpdma_chan_set_descs(ctlr, 1, rx_desc_num, rx_per_ch_desc);
+
+	return 0;
+}
+
+/* cpdma_chan_set_weight - set the weight of a channel in percent.
+ * Tx and Rx channels have separate weight pools: 100% for Rx and
+ * 100% for Tx. The weight is used to split cpdma resources, including
+ * the number of descriptors, between the channels in the required
+ * proportion. The channel rate alone is not enough to derive the
+ * weight of a channel; the maximum rate of the interface is also
+ * needed. If weight == 0, the channel uses the descriptors left over
+ * by the weighted channels.
+ */
+int cpdma_chan_set_weight(struct cpdma_chan *ch, int weight)
+{
+	struct cpdma_ctlr *ctlr = ch->ctlr;
+	unsigned long flags, ch_flags;
+	int ret;
+
+	spin_lock_irqsave(&ctlr->lock, flags);
+	spin_lock_irqsave(&ch->lock, ch_flags);
+	if (ch->weight == weight) {
+		spin_unlock_irqrestore(&ch->lock, ch_flags);
+		spin_unlock_irqrestore(&ctlr->lock, flags);
+		return 0;
+	}
+	ch->weight = weight;
+	spin_unlock_irqrestore(&ch->lock, ch_flags);
+
+	/* re-split the pool using the new channel weight */
+	ret = cpdma_chan_split_pool(ctlr);
+	spin_unlock_irqrestore(&ctlr->lock, flags);
+	return ret;
 }
 
 struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr *ctlr, int chan_num,
@@ -527,6 +630,7 @@ struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr *ctlr, int chan_num,
 	chan->chan_num = chan_num;
 	chan->handler = handler;
 	chan->desc_num = ctlr->pool->num_desc / 2;
+	chan->weight = 0;
 
 	if (is_rx_chan(chan)) {
 		chan->hdp = ctlr->params.rxhdp + offset;
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index a07b22b..629020c 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -90,6 +90,7 @@ int cpdma_chan_int_ctrl(struct cpdma_chan *chan, bool enable);
 u32 cpdma_ctrl_rxchs_state(struct cpdma_ctlr *ctlr);
 u32 cpdma_ctrl_txchs_state(struct cpdma_ctlr *ctlr);
 bool cpdma_check_free_tx_desc(struct cpdma_chan *chan);
+int cpdma_chan_set_weight(struct cpdma_chan *ch, int weight);
 
 enum cpdma_control {
 	CPDMA_CMD_IDLE,		/* write-only */
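
For illustration only, a minimal sketch (not part of this patch) of how a driver
sitting on top of the cpdma API might use the new call; the helper name, the
channel array and the per-channel rate parameters below are made up for the
example, only cpdma_chan_set_weight() comes from this series. As a worked
example of the split itself: with a 512-descriptor pool, 2 tx and 2 rx channels,
tx gets (2 * 512) / 4 = 256 descriptors; a tx channel weighted at 30% gets
(30 * 256) / 100 = 76 of them, and the remaining unweighted tx channel gets the
other 180.

#include "davinci_cpdma.h"

/* Example only: derive tx channel weights from each channel's share of
 * the interface rate and apply them. Channels with no configured rate
 * keep weight 0 and share whatever the weighted channels leave over.
 */
static int example_set_tx_weights(struct cpdma_chan **txch, int ch_num,
				  const int *ch_rate, int if_rate)
{
	int i, weight, ret;

	for (i = 0; i < ch_num; i++) {
		/* weight = channel's share of the interface rate, in % */
		weight = ch_rate[i] ? (ch_rate[i] * 100) / if_rate : 0;

		ret = cpdma_chan_set_weight(txch[i], weight);
		if (ret)
			return ret;
	}

	return 0;
}

Each call that actually changes a weight re-splits the whole descriptor pool
under the ctlr lock, so the rx and tx budgets stay consistent after every
update.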