From patchwork Tue Mar 18 06:47:13 2014
X-Patchwork-Submitter: Byungho An
X-Patchwork-Id: 26451
From: Byungho An
To: netdev@vger.kernel.org, linux-samsung-soc@vger.kernel.org
Cc: davem@davemloft.net, vipul.pandya@samsung.com, ilho215.lee@samsung.com
Subject: [PATCH V4 3/8] net: sxgbe: add TSO support for Samsung sxgbe
Date: Mon, 17 Mar 2014 23:47:13 -0700
Message-id: <00f601cf4275$e549a750$afdcf5f0$@samsung.com>
From: Vipul Pandya

Enable TSO during initialization for each DMA channel.

Signed-off-by: Vipul Pandya
Neatening-by: Joe Perches
Signed-off-by: Byungho An
---
 drivers/net/ethernet/samsung/sxgbe/sxgbe_desc.h | 17 +++--
 drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.c  | 10 +++
 drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.h  |  2 +
 drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c | 75 ++++++++++++++++++++---
 4 files changed, 91 insertions(+), 13 deletions(-)

@@ -1143,18 +1174,36 @@ static netdev_tx_t sxgbe_xmit(struct sk_buff *skb, struct net_device *dev)
 	tx_desc = tqueue->dma_tx + entry;
 	first_desc = tx_desc;
+	if (ctxt_desc_req)
+		ctxt_desc = (struct sxgbe_tx_ctxt_desc *)first_desc;
 
 	/* save the skb address */
 	tqueue->tx_skbuff[entry] = skb;
 
 	if (!is_jumbo) {
-		tx_desc->tdes01 = dma_map_single(priv->device, skb->data,
-						 no_pagedlen, DMA_TO_DEVICE);
-		if (dma_mapping_error(priv->device, tx_desc->tdes01))
-			pr_err("%s: TX dma mapping failed!!\n", __func__);
-
-		priv->hw->desc->prepare_tx_desc(tx_desc, 1, no_pagedlen,
-						no_pagedlen);
+		if (likely(skb_is_gso(skb))) {
+			/* TSO support */
+			mss = skb_shinfo(skb)->gso_size;
+			priv->hw->desc->tx_ctxt_desc_set_mss(ctxt_desc, mss);
+			priv->hw->desc->tx_ctxt_desc_set_tcmssv(ctxt_desc);
+			priv->hw->desc->tx_ctxt_desc_reset_ostc(ctxt_desc);
+			priv->hw->desc->tx_ctxt_desc_set_ctxt(ctxt_desc);
+			priv->hw->desc->tx_ctxt_desc_set_owner(ctxt_desc);
+
+			entry = (++tqueue->cur_tx) % tx_rsize;
+			first_desc = tqueue->dma_tx + entry;
+
+			sxgbe_tso_prepare(priv, first_desc, skb);
+		} else {
+			tx_desc->tdes01 = dma_map_single(priv->device,
+							 skb->data, no_pagedlen, DMA_TO_DEVICE);
+			if (dma_mapping_error(priv->device, tx_desc->tdes01))
+				netdev_err(dev, "%s: TX dma mapping failed!!\n",
+					   __func__);
+
+			priv->hw->desc->prepare_tx_desc(tx_desc, 1, no_pagedlen,
+							no_pagedlen);
+		}
 	}
 
 	for (frag_num = 0; frag_num < nr_frags; frag_num++) {
@@ -1861,6 +1910,7 @@ struct sxgbe_priv_data *sxgbe_dvr_probe(struct device *device,
 	int ret = 0;
 	struct net_device *ndev = NULL;
 	struct sxgbe_priv_data *priv;
+	u8 queue_num;
 
 	ndev = alloc_etherdev_mqs(sizeof(struct sxgbe_priv_data),
 				  SXGBE_TX_QUEUES, SXGBE_RX_QUEUES);
@@ -1895,7 +1945,9 @@ struct sxgbe_priv_data *sxgbe_dvr_probe(struct device *device,
 
 	ndev->netdev_ops = &sxgbe_netdev_ops;
 
-	ndev->hw_features = NETIF_F_SG | NETIF_F_RXCSUM;
+	ndev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+		NETIF_F_RXCSUM | NETIF_F_TSO | NETIF_F_TSO6 |
+		NETIF_F_GRO;
 	ndev->features |= ndev->hw_features | NETIF_F_HIGHDMA;
 	ndev->watchdog_timeo = msecs_to_jiffies(TX_TIMEO);
@@ -1907,6 +1959,13 @@ struct sxgbe_priv_data *sxgbe_dvr_probe(struct device *device,
 	if (flow_ctrl)
 		priv->flow_ctrl = SXGBE_FLOW_AUTO;	/* RX/TX pause on */
 
+	/* Enable TCP segmentation offload for all DMA channels */
+	if (priv->hw_cap.tcpseg_offload) {
+		SXGBE_FOR_EACH_QUEUE(SXGBE_TX_QUEUES, queue_num) {
+			priv->hw->dma->enable_tso(priv->ioaddr, queue_num);
+		}
+	}
+
 	/* Rx Watchdog is available, enable depend on platform data */
 	if (!priv->plat->riwt_off) {
 		priv->use_riwt = 1;
diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_desc.h b/drivers/net/ethernet/samsung/sxgbe/sxgbe_desc.h
index 41844d4..547edf3 100644
--- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_desc.h
+++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_desc.h
@@ -167,8 +167,9 @@ struct sxgbe_desc_ops {
 	void (*init_tx_desc)(struct sxgbe_tx_norm_desc *p);
 
 	/* Invoked by the xmit function to prepare the tx descriptor */
-	void (*tx_enable_tse)(struct sxgbe_tx_norm_desc *p, u8 is_tse,
-			      u32 hdr_len, u32 payload_len);
+	void (*tx_desc_enable_tse)(struct sxgbe_tx_norm_desc *p, u8 is_tse,
+				   u32 total_hdr_len, u32 tcp_hdr_len,
+				   u32 tcp_payload_len);
 
 	/* Assign buffer lengths for descriptor */
 	void (*prepare_tx_desc)(struct sxgbe_tx_norm_desc *p, u8 is_fd,
@@ -207,20 +208,26 @@ struct sxgbe_desc_ops {
 	int (*get_tx_timestamp_status)(struct sxgbe_tx_norm_desc *p);
 
 	/* TX Context Descripto Specific */
-	void (*init_tx_ctxt_desc)(struct sxgbe_tx_ctxt_desc *p);
+	void (*tx_ctxt_desc_set_ctxt)(struct sxgbe_tx_ctxt_desc *p);
 
 	/* Set the owner of the TX context descriptor */
-	void (*set_tx_ctxt_owner)(struct sxgbe_tx_ctxt_desc *p);
+	void (*tx_ctxt_desc_set_owner)(struct sxgbe_tx_ctxt_desc *p);
 
 	/* Get the owner of the TX context descriptor */
 	int (*get_tx_ctxt_owner)(struct sxgbe_tx_ctxt_desc *p);
 
 	/* Set TX mss */
-	void (*tx_ctxt_desc_setmss)(struct sxgbe_tx_ctxt_desc *p, int mss);
+	void (*tx_ctxt_desc_set_mss)(struct sxgbe_tx_ctxt_desc *p, u16 mss);
 
 	/* Set TX mss */
 	int (*tx_ctxt_desc_get_mss)(struct sxgbe_tx_ctxt_desc *p);
 
+	/* Set TX tcmssv */
+	void (*tx_ctxt_desc_set_tcmssv)(struct sxgbe_tx_ctxt_desc *p);
+
+	/* Reset TX ostc */
+	void (*tx_ctxt_desc_reset_ostc)(struct sxgbe_tx_ctxt_desc *p);
+
 	/* Set IVLAN information */
 	void (*tx_ctxt_desc_set_ivlantag)(struct sxgbe_tx_ctxt_desc *p,
 					  int is_ivlanvalid, int ivlan_tag,
diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.c b/drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.c
index 1e68ef3..1edc451 100644
--- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.c
+++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.c
@@ -354,6 +354,15 @@ static void sxgbe_dma_rx_watchdog(void __iomem *ioaddr, u32 riwt)
 	}
 }
 
+static void sxgbe_enable_tso(void __iomem *ioaddr, u8 chan_num)
+{
+	u32 ctrl;
+
+	ctrl = readl(ioaddr + SXGBE_DMA_CHA_TXCTL_REG(chan_num));
+	ctrl |= SXGBE_DMA_CHA_TXCTL_TSE_ENABLE;
+	writel(ctrl, ioaddr + SXGBE_DMA_CHA_TXCTL_REG(chan_num));
+}
+
 static const struct sxgbe_dma_ops sxgbe_dma_ops = {
 	.init			= sxgbe_dma_init,
 	.cha_init		= sxgbe_dma_channel_init,
@@ -369,6 +378,7 @@ static const struct sxgbe_dma_ops sxgbe_dma_ops = {
 	.tx_dma_int_status	= sxgbe_tx_dma_int_status,
 	.rx_dma_int_status	= sxgbe_rx_dma_int_status,
 	.rx_watchdog		= sxgbe_dma_rx_watchdog,
+	.enable_tso		= sxgbe_enable_tso,
 };
 
 const struct sxgbe_dma_ops *sxgbe_get_dma_ops(void)
diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.h b/drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.h
index 50c8054..6c070ac 100644
--- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.h
+++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.h
@@ -42,6 +42,8 @@ struct sxgbe_dma_ops {
 			   struct sxgbe_extra_stats *x);
 	/* Program the HW RX Watchdog */
 	void (*rx_watchdog)(void __iomem *ioaddr, u32 riwt);
+	/* Enable TSO for each DMA channel */
+	void (*enable_tso)(void __iomem *ioaddr, u8 chan_num);
 };
 
 const struct sxgbe_dma_ops *sxgbe_get_dma_ops(void);
diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c b/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
index 8e4e78f..a8ba1a5 100644
--- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
+++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
@@ -1101,6 +1101,28 @@ static int sxgbe_release(struct net_device *dev)
 	return 0;
 }
 
+/* Prepare first Tx descriptor for doing TSO operation */
+void sxgbe_tso_prepare(struct sxgbe_priv_data *priv,
+		       struct sxgbe_tx_norm_desc *first_desc,
+		       struct sk_buff *skb)
+{
+	unsigned int total_hdr_len, tcp_hdr_len;
+
+	/* Write first Tx descriptor with appropriate value */
+	tcp_hdr_len = tcp_hdrlen(skb);
+	total_hdr_len = skb_transport_offset(skb) + tcp_hdr_len;
+
+	first_desc->tdes01 = dma_map_single(priv->device, skb->data,
+					    total_hdr_len, DMA_TO_DEVICE);
+	if (dma_mapping_error(priv->device, first_desc->tdes01))
+		pr_err("%s: TX dma mapping failed!!\n", __func__);
+
+	first_desc->tdes23.tx_rd_des23.first_desc = 1;
+	priv->hw->desc->tx_desc_enable_tse(first_desc, 1, total_hdr_len,
+					   tcp_hdr_len,
+					   skb->len - total_hdr_len);
+}
+
 /**
  *  sxgbe_xmit: Tx entry point of the driver
  *  @skb : the socket buffer
@@ -1118,13 +1140,22 @@ static netdev_tx_t sxgbe_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int tx_rsize = priv->dma_tx_size;
 	struct sxgbe_tx_queue *tqueue = priv->txq[txq_index];
 	struct sxgbe_tx_norm_desc *tx_desc, *first_desc;
+	struct sxgbe_tx_ctxt_desc *ctxt_desc = NULL;
 	int nr_frags = skb_shinfo(skb)->nr_frags;
 	int no_pagedlen = skb_headlen(skb);
 	int is_jumbo = 0;
+	u16 mss;
+	u32 ctxt_desc_req = 0;
 
 	/* get the TX queue handle */
 	dev_txq = netdev_get_tx_queue(dev, txq_index);
 
+	if (likely(skb_is_gso(skb) ||
+		   vlan_tx_tag_present(skb) ||
+		   ((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+		    tqueue->hwts_tx_en)))
+		ctxt_desc_req = 1;
+
 	/* get the spinlock */
 	spin_lock(&tqueue->tx_lock);
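
For context on the probe-time change above: enabling TSO per DMA channel is a read-modify-write of that channel's TX control register, repeated for every TX queue through the new dma_ops->enable_tso hook when hw_cap.tcpseg_offload reports hardware support. The standalone sketch below models only that pattern so it can be compiled and run outside the kernel; it is not part of the patch, and the MOCK_* offsets, TSE bit position and channel count are invented placeholders, not the real sxgbe register map.

/*
 * Standalone sketch: mirrors the shape of sxgbe_enable_tso() and the
 * SXGBE_FOR_EACH_QUEUE() loop in sxgbe_dvr_probe().  All MOCK_* names,
 * offsets and the bit position are placeholders for illustration only.
 */
#include <stdint.h>
#include <stdio.h>

#define MOCK_NUM_TX_CHANNELS	8		/* placeholder channel count */
#define MOCK_CHA_TXCTL_REG(c)	((c) * 4)	/* placeholder per-channel offset */
#define MOCK_TXCTL_TSE_ENABLE	(1u << 12)	/* placeholder TSE bit */

static uint32_t mock_regs[MOCK_NUM_TX_CHANNELS];	/* stands in for MMIO space */

static uint32_t mock_readl(unsigned int off)
{
	return mock_regs[off / 4];
}

static void mock_writel(uint32_t val, unsigned int off)	/* writel(val, addr) order */
{
	mock_regs[off / 4] = val;
}

/* Read the channel's TX control register, set TSE, write it back. */
static void mock_enable_tso(uint8_t chan_num)
{
	uint32_t ctrl;

	ctrl = mock_readl(MOCK_CHA_TXCTL_REG(chan_num));
	ctrl |= MOCK_TXCTL_TSE_ENABLE;
	mock_writel(ctrl, MOCK_CHA_TXCTL_REG(chan_num));
}

int main(void)
{
	uint8_t q;

	/* Same idea as looping over SXGBE_TX_QUEUES at probe time. */
	for (q = 0; q < MOCK_NUM_TX_CHANNELS; q++)
		mock_enable_tso(q);

	printf("channel 0 TXCTL = 0x%08x\n", (unsigned int)mock_regs[0]);
	return 0;
}

Once the real hook has run, the TSO path in sxgbe_xmit() only has to fill a context descriptor with the MSS and hand the header/payload split to sxgbe_tso_prepare(); the hardware performs the segmentation.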