From patchwork Thu Jun 4 07:32:47 2020
X-Patchwork-Submitter: "Ooi, Joyce"
X-Patchwork-Id: 218001
From: "Ooi, Joyce"
To: Thor Thayer, "David S . Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Dalon Westergreen, Joyce Ooi, Tan Ley Foon, See Chin Liang,
    Dinh Nguyen, Dalon Westergreen
Subject: [PATCH v3 01/10] net: eth: altera: tse_start_xmit ignores tx_buffer call response
Date: Thu, 4 Jun 2020 15:32:47 +0800
Message-Id: <20200604073256.25702-2-joyce.ooi@intel.com>
In-Reply-To: <20200604073256.25702-1-joyce.ooi@intel.com>
References: <20200604073256.25702-1-joyce.ooi@intel.com>

From: Dalon Westergreen

The return from the tx_buffer call in tse_start_xmit is inappropriately
ignored. tse_buffer calls should return 0 for success or NETDEV_TX_BUSY.
tse_start_xmit should not report a successful transmit when the tse_buffer
call returns an error condition.

In addition to the above, the msgdma and sgdma implementations do not return
the same value on success or failure. sgdma_tx_buffer returned 0 on failure
and a positive number of transmitted packets on success; given that it only
ever sends one packet, this made no sense. The msgdma implementation,
msgdma_tx_buffer, returns 0 on success.

-> Don't ignore the return from tse_buffer calls.
-> Fix the sgdma tse_buffer call to return 0 on success and NETDEV_TX_BUSY
   on failure.
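To make the new calling convention concrete, here is a minimal sketch of the
transmit path after this change. It mirrors the driver's names
(priv->dmaops->tx_buffer, struct tse_buffer, tx_ring), but it is illustrative
only; the DMA mapping, locking, and queue-stop details are elided, and it is
not the applied diff shown below.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#include "altera_tse.h"

/* Sketch only: tx_buffer now returns 0 on success or NETDEV_TX_BUSY when
 * the DMA engine cannot take the frame (the op stops the queue itself),
 * and the caller propagates that instead of claiming the skb was sent.
 */
static int sketch_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct altera_tse_private *priv = netdev_priv(dev);
	unsigned int entry = priv->tx_prod % priv->tx_ring_size;
	struct tse_buffer *buffer = &priv->tx_ring[entry];
	int ret = NETDEV_TX_OK;

	/* ... dma_map_single() the skb into buffer->dma_addr/len here ... */

	ret = priv->dmaops->tx_buffer(priv, buffer);
	if (ret)
		goto out;		/* busy: do not report success */

	skb_tx_timestamp(skb);
	priv->tx_prod++;
	/* ... stop the queue here if the ring is now full ... */
out:
	return ret;
}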
Signed-off-by: Dalon Westergreen Signed-off-by: Joyce Ooi --- v2: no change v3: queue is stopped before returning NETDEV_TX_BUSY --- drivers/net/ethernet/altera/altera_sgdma.c | 19 ++++++++++++------- drivers/net/ethernet/altera/altera_tse_main.c | 4 +++- 2 files changed, 15 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/altera/altera_sgdma.c b/drivers/net/ethernet/altera/altera_sgdma.c index db97170da8c7..fe6276c7e4a3 100644 --- a/drivers/net/ethernet/altera/altera_sgdma.c +++ b/drivers/net/ethernet/altera/altera_sgdma.c @@ -4,6 +4,7 @@ */ #include +#include #include "altera_utils.h" #include "altera_tse.h" #include "altera_sgdmahw.h" @@ -159,10 +160,11 @@ void sgdma_clear_txirq(struct altera_tse_private *priv) SGDMA_CTRLREG_CLRINT); } -/* transmits buffer through SGDMA. Returns number of buffers - * transmitted, 0 if not possible. - * - * tx_lock is held by the caller +/* transmits buffer through SGDMA. + * original behavior returned the number of transmitted packets (always 1) & + * returned 0 on error. This differs from the msgdma. the calling function + * will now actually look at the code, so from now, 0 is good and return + * NETDEV_TX_BUSY when busy. */ int sgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *buffer) { @@ -173,8 +175,11 @@ int sgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *buffer) struct sgdma_descrip __iomem *ndesc = &descbase[1]; /* wait 'til the tx sgdma is ready for the next transmit request */ - if (sgdma_txbusy(priv)) - return 0; + if (sgdma_txbusy(priv)) { + if (!netif_queue_stopped(priv->dev)) + netif_stop_queue(priv->dev); + return NETDEV_TX_BUSY; + } sgdma_setup_descrip(cdesc, /* current descriptor */ ndesc, /* next descriptor */ @@ -191,7 +196,7 @@ int sgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *buffer) /* enqueue the request to the pending transmit queue */ queue_tx(priv, buffer); - return 1; + return 0; } diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c index 1671c1f36691..2a9e6157a8a1 100644 --- a/drivers/net/ethernet/altera/altera_tse_main.c +++ b/drivers/net/ethernet/altera/altera_tse_main.c @@ -595,7 +595,9 @@ static int tse_start_xmit(struct sk_buff *skb, struct net_device *dev) buffer->dma_addr = dma_addr; buffer->len = nopaged_len; - priv->dmaops->tx_buffer(priv, buffer); + ret = priv->dmaops->tx_buffer(priv, buffer); + if (ret) + goto out; skb_tx_timestamp(skb); From patchwork Thu Jun 4 07:32:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ooi, Joyce" X-Patchwork-Id: 218000 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E02B6C433E0 for ; Thu, 4 Jun 2020 07:34:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C41802074B for ; Thu, 4 Jun 2020 07:34:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728112AbgFDHek (ORCPT ); Thu, 4 Jun 2020 03:34:40 -0400 Received: from mga11.intel.com ([192.55.52.93]:54773 "EHLO 
mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726246AbgFDHek (ORCPT ); Thu, 4 Jun 2020 03:34:40 -0400 IronPort-SDR: yioJl2FqHoISPIU3vUn+K7IUay0VHQVY+qwFEPGl7pJr0UnXqqb6iEYKZ4GOzDoRdnma9eogdk mwTk5S2D36gA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2020 00:34:38 -0700 IronPort-SDR: RxY9F998L7LxdaV5tLYQ9VS8rwntKgLWJKUDMupTFuK7X3+7E1qvpjhrUR+DTpy9mfAEbpz3Cw GFerA06WeBfw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,471,1583222400"; d="scan'208";a="348021468" Received: from pg-nxl3.altera.com ([10.142.129.93]) by orsmga001.jf.intel.com with ESMTP; 04 Jun 2020 00:34:35 -0700 From: "Ooi, Joyce" To: Thor Thayer , "David S . Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Dalon Westergreen , Joyce Ooi , Tan Ley Foon , See Chin Liang , Dinh Nguyen , Dalon Westergreen Subject: [PATCH v3 03/10] net: eth: altera: fix altera_dmaops declaration Date: Thu, 4 Jun 2020 15:32:49 +0800 Message-Id: <20200604073256.25702-4-joyce.ooi@intel.com> X-Mailer: git-send-email 2.13.0 In-Reply-To: <20200604073256.25702-1-joyce.ooi@intel.com> References: <20200604073256.25702-1-joyce.ooi@intel.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Dalon Westergreen The declaration of struct altera_dmaops does not have identifier names. Add identifier names to confrom with required coding styles. Signed-off-by: Dalon Westergreen Signed-off-by: Joyce Ooi --- v2: no change v3: no change --- drivers/net/ethernet/altera/altera_tse.h | 30 ++++++++++++++++-------------- 1 file changed, 16 insertions(+), 14 deletions(-) diff --git a/drivers/net/ethernet/altera/altera_tse.h b/drivers/net/ethernet/altera/altera_tse.h index f17acfb579a0..7d0c98fc103e 100644 --- a/drivers/net/ethernet/altera/altera_tse.h +++ b/drivers/net/ethernet/altera/altera_tse.h @@ -385,20 +385,22 @@ struct altera_tse_private; struct altera_dmaops { int altera_dtype; int dmamask; - void (*reset_dma)(struct altera_tse_private *); - void (*enable_txirq)(struct altera_tse_private *); - void (*enable_rxirq)(struct altera_tse_private *); - void (*disable_txirq)(struct altera_tse_private *); - void (*disable_rxirq)(struct altera_tse_private *); - void (*clear_txirq)(struct altera_tse_private *); - void (*clear_rxirq)(struct altera_tse_private *); - int (*tx_buffer)(struct altera_tse_private *, struct tse_buffer *); - u32 (*tx_completions)(struct altera_tse_private *); - void (*add_rx_desc)(struct altera_tse_private *, struct tse_buffer *); - u32 (*get_rx_status)(struct altera_tse_private *); - int (*init_dma)(struct altera_tse_private *); - void (*uninit_dma)(struct altera_tse_private *); - void (*start_rxdma)(struct altera_tse_private *); + void (*reset_dma)(struct altera_tse_private *priv); + void (*enable_txirq)(struct altera_tse_private *priv); + void (*enable_rxirq)(struct altera_tse_private *priv); + void (*disable_txirq)(struct altera_tse_private *priv); + void (*disable_rxirq)(struct altera_tse_private *priv); + void (*clear_txirq)(struct altera_tse_private *priv); + void (*clear_rxirq)(struct altera_tse_private *priv); + int (*tx_buffer)(struct altera_tse_private *priv, + struct tse_buffer *buffer); + u32 (*tx_completions)(struct altera_tse_private *priv); + void (*add_rx_desc)(struct altera_tse_private *priv, + struct tse_buffer *buffer); + u32 (*get_rx_status)(struct 
altera_tse_private *priv); + int (*init_dma)(struct altera_tse_private *priv); + void (*uninit_dma)(struct altera_tse_private *priv); + void (*start_rxdma)(struct altera_tse_private *priv); }; /* This structure is private to each device.

From patchwork Thu Jun 4 07:32:51 2020
X-Patchwork-Submitter: "Ooi, Joyce"
X-Patchwork-Id: 217999
From: "Ooi, Joyce"
To: Thor Thayer, "David S . Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Dalon Westergreen, Joyce Ooi, Tan Ley Foon, See Chin Liang,
    Dinh Nguyen, Dalon Westergreen
Subject: [PATCH v3 05/10] net: eth: altera: Move common functions to altera_utils
Date: Thu, 4 Jun 2020 15:32:51 +0800
Message-Id: <20200604073256.25702-6-joyce.ooi@intel.com>
In-Reply-To: <20200604073256.25702-1-joyce.ooi@intel.com>
References: <20200604073256.25702-1-joyce.ooi@intel.com>

From: Dalon Westergreen

Move request_and_map and other shared functions to altera_utils. This is
the first step in moving common code out of TSE-specific code so that it
can be shared with future Altera ethernet IP.
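As a quick illustration of the now-shared helper, this is how a probe path
typically consumes request_and_map() once it lives in altera_utils. It is a
hedged sketch, not code from this series; the "control_port" resource name
is only an example.

#include <linux/io.h>
#include <linux/platform_device.h>

#include "altera_utils.h"

/* Sketch: look up a named MEM resource, reserve it, and ioremap it via the
 * shared helper.  The resource name here is illustrative.
 */
static int sketch_map_csr(struct platform_device *pdev, void __iomem **csr)
{
	struct resource *res;
	int ret;

	ret = request_and_map(pdev, "control_port", &res, csr);
	if (ret)
		return ret;	/* -ENODEV, -EBUSY or -ENOMEM from the helper */

	return 0;
}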
Signed-off-by: Dalon Westergreen Signed-off-by: Joyce Ooi --- v2: no change v3: no change --- drivers/net/ethernet/altera/altera_tse.h | 45 --------------------- drivers/net/ethernet/altera/altera_tse_ethtool.c | 1 + drivers/net/ethernet/altera/altera_tse_main.c | 32 +-------------- drivers/net/ethernet/altera/altera_utils.c | 29 ++++++++++++++ drivers/net/ethernet/altera/altera_utils.h | 51 ++++++++++++++++++++++++ 5 files changed, 82 insertions(+), 76 deletions(-) diff --git a/drivers/net/ethernet/altera/altera_tse.h b/drivers/net/ethernet/altera/altera_tse.h index 26c5541fda27..fa24ab3c7d6a 100644 --- a/drivers/net/ethernet/altera/altera_tse.h +++ b/drivers/net/ethernet/altera/altera_tse.h @@ -489,49 +489,4 @@ struct altera_tse_private { */ void altera_tse_set_ethtool_ops(struct net_device *); -static inline -u32 csrrd32(void __iomem *mac, size_t offs) -{ - void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); - return readl(paddr); -} - -static inline -u16 csrrd16(void __iomem *mac, size_t offs) -{ - void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); - return readw(paddr); -} - -static inline -u8 csrrd8(void __iomem *mac, size_t offs) -{ - void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); - return readb(paddr); -} - -static inline -void csrwr32(u32 val, void __iomem *mac, size_t offs) -{ - void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); - - writel(val, paddr); -} - -static inline -void csrwr16(u16 val, void __iomem *mac, size_t offs) -{ - void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); - - writew(val, paddr); -} - -static inline -void csrwr8(u8 val, void __iomem *mac, size_t offs) -{ - void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); - - writeb(val, paddr); -} - #endif /* __ALTERA_TSE_H__ */ diff --git a/drivers/net/ethernet/altera/altera_tse_ethtool.c b/drivers/net/ethernet/altera/altera_tse_ethtool.c index 4299f1301149..420d77f00eab 100644 --- a/drivers/net/ethernet/altera/altera_tse_ethtool.c +++ b/drivers/net/ethernet/altera/altera_tse_ethtool.c @@ -22,6 +22,7 @@ #include #include "altera_tse.h" +#include "altera_utils.h" #define TSE_STATS_LEN 31 #define TSE_NUM_REGS 128 diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c index 3c756afd0d39..87f789e42b6e 100644 --- a/drivers/net/ethernet/altera/altera_tse_main.c +++ b/drivers/net/ethernet/altera/altera_tse_main.c @@ -23,7 +23,6 @@ #include #include #include -#include #include #include #include @@ -33,7 +32,7 @@ #include #include #include -#include +#include #include #include @@ -1320,35 +1319,6 @@ static struct net_device_ops altera_tse_netdev_ops = { .ndo_validate_addr = eth_validate_addr, }; -static int request_and_map(struct platform_device *pdev, const char *name, - struct resource **res, void __iomem **ptr) -{ - struct resource *region; - struct device *device = &pdev->dev; - - *res = platform_get_resource_byname(pdev, IORESOURCE_MEM, name); - if (*res == NULL) { - dev_err(device, "resource %s not defined\n", name); - return -ENODEV; - } - - region = devm_request_mem_region(device, (*res)->start, - resource_size(*res), dev_name(device)); - if (region == NULL) { - dev_err(device, "unable to request %s\n", name); - return -EBUSY; - } - - *ptr = devm_ioremap(device, region->start, - resource_size(region)); - if (*ptr == NULL) { - dev_err(device, "ioremap of %s failed!", name); - return -ENOMEM; - } - - return 0; -} - /* Probe Altera TSE MAC device */ static int altera_tse_probe(struct 
platform_device *pdev) diff --git a/drivers/net/ethernet/altera/altera_utils.c b/drivers/net/ethernet/altera/altera_utils.c index e6a7fc9d8fb1..c9bc7d0ea02a 100644 --- a/drivers/net/ethernet/altera/altera_utils.c +++ b/drivers/net/ethernet/altera/altera_utils.c @@ -31,3 +31,32 @@ int tse_bit_is_clear(void __iomem *ioaddr, size_t offs, u32 bit_mask) u32 value = csrrd32(ioaddr, offs); return (value & bit_mask) ? 0 : 1; } + +int request_and_map(struct platform_device *pdev, const char *name, + struct resource **res, void __iomem **ptr) +{ + struct resource *region; + struct device *device = &pdev->dev; + + *res = platform_get_resource_byname(pdev, IORESOURCE_MEM, name); + if (!*res) { + dev_err(device, "resource %s not defined\n", name); + return -ENODEV; + } + + region = devm_request_mem_region(device, (*res)->start, + resource_size(*res), dev_name(device)); + if (!region) { + dev_err(device, "unable to request %s\n", name); + return -EBUSY; + } + + *ptr = devm_ioremap(device, region->start, + resource_size(region)); + if (!*ptr) { + dev_err(device, "ioremap of %s failed!", name); + return -ENOMEM; + } + + return 0; +} diff --git a/drivers/net/ethernet/altera/altera_utils.h b/drivers/net/ethernet/altera/altera_utils.h index b7d772f2dcbb..fbe985099a44 100644 --- a/drivers/net/ethernet/altera/altera_utils.h +++ b/drivers/net/ethernet/altera/altera_utils.h @@ -3,7 +3,9 @@ * Copyright (C) 2014 Altera Corporation. All rights reserved */ +#include #include +#include #ifndef __ALTERA_UTILS_H__ #define __ALTERA_UTILS_H__ @@ -12,5 +14,54 @@ void tse_set_bit(void __iomem *ioaddr, size_t offs, u32 bit_mask); void tse_clear_bit(void __iomem *ioaddr, size_t offs, u32 bit_mask); int tse_bit_is_set(void __iomem *ioaddr, size_t offs, u32 bit_mask); int tse_bit_is_clear(void __iomem *ioaddr, size_t offs, u32 bit_mask); +int request_and_map(struct platform_device *pdev, const char *name, + struct resource **res, void __iomem **ptr); +static inline +u32 csrrd32(void __iomem *mac, size_t offs) +{ + void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); + + return readl(paddr); +} + +static inline +u16 csrrd16(void __iomem *mac, size_t offs) +{ + void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); + + return readw(paddr); +} + +static inline +u8 csrrd8(void __iomem *mac, size_t offs) +{ + void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); + + return readb(paddr); +} + +static inline +void csrwr32(u32 val, void __iomem *mac, size_t offs) +{ + void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); + + writel(val, paddr); +} + +static inline +void csrwr16(u16 val, void __iomem *mac, size_t offs) +{ + void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); + + writew(val, paddr); +} + +static inline +void csrwr8(u8 val, void __iomem *mac, size_t offs) +{ + void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs); + + writeb(val, paddr); +} #endif /* __ALTERA_UTILS_H__*/ From patchwork Thu Jun 4 07:32:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ooi, Joyce" X-Patchwork-Id: 217998 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by 
smtp.lore.kernel.org (Postfix) with ESMTP id 2D4D0C433E0 for ; Thu, 4 Jun 2020 07:35:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F334E206C3 for ; Thu, 4 Jun 2020 07:35:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728270AbgFDHfK (ORCPT ); Thu, 4 Jun 2020 03:35:10 -0400 Received: from mga17.intel.com ([192.55.52.151]:32251 "EHLO mga17.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727944AbgFDHfH (ORCPT ); Thu, 4 Jun 2020 03:35:07 -0400 IronPort-SDR: vUq4SSJXPYtf/j85XCR1wccfiEOgm7F00BMvCuwfyZeOzpY7HIh3uPXNSJw6j3Sj/cDOGzy65L GGMOdrW40LYw== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2020 00:34:57 -0700 IronPort-SDR: esa9Ypk3UmCQcpph61KBDknHwtXEW3LOYZMSUOrO0HQaAq7JixmgHhhl9b+tMxNRKA28EEPHaP LluybsG8ezKg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,471,1583222400"; d="scan'208";a="348021574" Received: from pg-nxl3.altera.com ([10.142.129.93]) by orsmga001.jf.intel.com with ESMTP; 04 Jun 2020 00:34:53 -0700 From: "Ooi, Joyce" To: Thor Thayer , "David S . Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Dalon Westergreen , Joyce Ooi , Tan Ley Foon , See Chin Liang , Dinh Nguyen Subject: [PATCH v3 06/10] net: eth: altera: Add missing identifier names to function declarations Date: Thu, 4 Jun 2020 15:32:52 +0800 Message-Id: <20200604073256.25702-7-joyce.ooi@intel.com> X-Mailer: git-send-email 2.13.0 In-Reply-To: <20200604073256.25702-1-joyce.ooi@intel.com> References: <20200604073256.25702-1-joyce.ooi@intel.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Dalon Westergreen The sgdma and msgdma header files included function declarations without identifier names for pointers. Add appropriate identifier names. 
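To show the style being enforced, here is one representative prototype
before and after adding identifier names (checkpatch warns that such
arguments "should also have an identifier name"):

struct altera_tse_private;
struct tse_buffer;

/* Before: parameter types only; legal C, but the reader cannot tell what
 * each argument is without opening the definition.
 */
int msgdma_tx_buffer(struct altera_tse_private *, struct tse_buffer *);

/* After: identifier names document the arguments at a glance. */
int msgdma_tx_buffer(struct altera_tse_private *priv,
		     struct tse_buffer *buffer);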
Signed-off-by: Dalon Westergreen Signed-off-by: Joyce Ooi --- v2: this patch is added in patch version 2 v3: no change --- drivers/net/ethernet/altera/altera_msgdma.h | 30 ++++++++++++++------------- drivers/net/ethernet/altera/altera_sgdma.h | 32 +++++++++++++++-------------- 2 files changed, 33 insertions(+), 29 deletions(-) diff --git a/drivers/net/ethernet/altera/altera_msgdma.h b/drivers/net/ethernet/altera/altera_msgdma.h index 9813fbfff4d3..23f5b2a13898 100644 --- a/drivers/net/ethernet/altera/altera_msgdma.h +++ b/drivers/net/ethernet/altera/altera_msgdma.h @@ -6,19 +6,21 @@ #ifndef __ALTERA_MSGDMA_H__ #define __ALTERA_MSGDMA_H__ -void msgdma_reset(struct altera_tse_private *); -void msgdma_enable_txirq(struct altera_tse_private *); -void msgdma_enable_rxirq(struct altera_tse_private *); -void msgdma_disable_rxirq(struct altera_tse_private *); -void msgdma_disable_txirq(struct altera_tse_private *); -void msgdma_clear_rxirq(struct altera_tse_private *); -void msgdma_clear_txirq(struct altera_tse_private *); -u32 msgdma_tx_completions(struct altera_tse_private *); -void msgdma_add_rx_desc(struct altera_tse_private *, struct tse_buffer *); -int msgdma_tx_buffer(struct altera_tse_private *, struct tse_buffer *); -u32 msgdma_rx_status(struct altera_tse_private *); -int msgdma_initialize(struct altera_tse_private *); -void msgdma_uninitialize(struct altera_tse_private *); -void msgdma_start_rxdma(struct altera_tse_private *); +void msgdma_reset(struct altera_tse_private *priv); +void msgdma_enable_txirq(struct altera_tse_private *priv); +void msgdma_enable_rxirq(struct altera_tse_private *priv); +void msgdma_disable_rxirq(struct altera_tse_private *priv); +void msgdma_disable_txirq(struct altera_tse_private *priv); +void msgdma_clear_rxirq(struct altera_tse_private *priv); +void msgdma_clear_txirq(struct altera_tse_private *priv); +u32 msgdma_tx_completions(struct altera_tse_private *priv); +void msgdma_add_rx_desc(struct altera_tse_private *priv, + struct tse_buffer *buffer); +int msgdma_tx_buffer(struct altera_tse_private *priv, + struct tse_buffer *buffer); +u32 msgdma_rx_status(struct altera_tse_private *priv); +int msgdma_initialize(struct altera_tse_private *priv); +void msgdma_uninitialize(struct altera_tse_private *priv); +void msgdma_start_rxdma(struct altera_tse_private *priv); #endif /* __ALTERA_MSGDMA_H__ */ diff --git a/drivers/net/ethernet/altera/altera_sgdma.h b/drivers/net/ethernet/altera/altera_sgdma.h index 08afe1c9994f..3fb201417820 100644 --- a/drivers/net/ethernet/altera/altera_sgdma.h +++ b/drivers/net/ethernet/altera/altera_sgdma.h @@ -6,20 +6,22 @@ #ifndef __ALTERA_SGDMA_H__ #define __ALTERA_SGDMA_H__ -void sgdma_reset(struct altera_tse_private *); -void sgdma_enable_txirq(struct altera_tse_private *); -void sgdma_enable_rxirq(struct altera_tse_private *); -void sgdma_disable_rxirq(struct altera_tse_private *); -void sgdma_disable_txirq(struct altera_tse_private *); -void sgdma_clear_rxirq(struct altera_tse_private *); -void sgdma_clear_txirq(struct altera_tse_private *); -int sgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *); -u32 sgdma_tx_completions(struct altera_tse_private *); -void sgdma_add_rx_desc(struct altera_tse_private *priv, struct tse_buffer *); -void sgdma_status(struct altera_tse_private *); -u32 sgdma_rx_status(struct altera_tse_private *); -int sgdma_initialize(struct altera_tse_private *); -void sgdma_uninitialize(struct altera_tse_private *); -void sgdma_start_rxdma(struct altera_tse_private *); +void sgdma_reset(struct 
altera_tse_private *priv); +void sgdma_enable_txirq(struct altera_tse_private *priv); +void sgdma_enable_rxirq(struct altera_tse_private *priv); +void sgdma_disable_rxirq(struct altera_tse_private *priv); +void sgdma_disable_txirq(struct altera_tse_private *priv); +void sgdma_clear_rxirq(struct altera_tse_private *priv); +void sgdma_clear_txirq(struct altera_tse_private *priv); +int sgdma_tx_buffer(struct altera_tse_private *priv, + struct tse_buffer *buffer); +u32 sgdma_tx_completions(struct altera_tse_private *priv); +void sgdma_add_rx_desc(struct altera_tse_private *priv, + struct tse_buffer *buffer); +void sgdma_status(struct altera_tse_private *priv); +u32 sgdma_rx_status(struct altera_tse_private *priv); +int sgdma_initialize(struct altera_tse_private *priv); +void sgdma_uninitialize(struct altera_tse_private *priv); +void sgdma_start_rxdma(struct altera_tse_private *priv); #endif /* __ALTERA_SGDMA_H__ */

From patchwork Thu Jun 4 07:32:55 2020
X-Patchwork-Submitter: "Ooi, Joyce"
X-Patchwork-Id: 217997
From: "Ooi, Joyce"
To: Thor Thayer, "David S . Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Dalon Westergreen, Joyce Ooi, Tan Ley Foon, See Chin Liang,
    Dinh Nguyen, Dalon Westergreen
Subject: [PATCH v3 09/10] net: eth: altera: add msgdma prefetcher
Date: Thu, 4 Jun 2020 15:32:55 +0800
Message-Id: <20200604073256.25702-10-joyce.ooi@intel.com>
In-Reply-To: <20200604073256.25702-1-joyce.ooi@intel.com>
References: <20200604073256.25702-1-joyce.ooi@intel.com>

From: Dalon Westergreen

Add support for the mSGDMA prefetcher.
The prefetcher adds support for a linked list of descriptors in system memory. The prefetcher feeds these to the mSGDMA dispatcher. The prefetcher is configured to poll for the next descriptor in the list to be owned by hardware, then pass the descriptor to the dispatcher. It will then poll the next descriptor until it is owned by hardware. The dispatcher responses are written back to the appropriate descriptor, and the owned by hardware bit is cleared. The driver sets up a linked list twice the tx and rx ring sizes, with the last descriptor pointing back to the first. This ensures that the ring of descriptors will always have inactive descriptors preventing the prefetcher from looping over and reusing descriptors inappropriately. The prefetcher will continuously loop over these descriptors. The driver modifies descriptors as required to update the skb address and length as well as the owned by hardware bit. In addition to the above, the mSGDMA prefetcher can be used to handle rx and tx timestamps coming from the ethernet ip. These can be included in the prefetcher response in the descriptor. Signed-off-by: Dalon Westergreen Signed-off-by: Joyce Ooi --- v2: minor fixes and suggested edits v3: queue is stopped before returning NETDEV_TX_BUSY --- drivers/net/ethernet/altera/Makefile | 2 +- .../net/ethernet/altera/altera_msgdma_prefetcher.c | 431 +++++++++++++++++++++ .../net/ethernet/altera/altera_msgdma_prefetcher.h | 30 ++ .../ethernet/altera/altera_msgdmahw_prefetcher.h | 87 +++++ drivers/net/ethernet/altera/altera_tse.h | 14 + drivers/net/ethernet/altera/altera_tse_main.c | 51 +++ 6 files changed, 614 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/altera/altera_msgdma_prefetcher.c create mode 100644 drivers/net/ethernet/altera/altera_msgdma_prefetcher.h create mode 100644 drivers/net/ethernet/altera/altera_msgdmahw_prefetcher.h diff --git a/drivers/net/ethernet/altera/Makefile b/drivers/net/ethernet/altera/Makefile index fc2e460926b3..4834e972e906 100644 --- a/drivers/net/ethernet/altera/Makefile +++ b/drivers/net/ethernet/altera/Makefile @@ -6,4 +6,4 @@ obj-$(CONFIG_ALTERA_TSE) += altera_tse.o altera_tse-objs := altera_tse_main.o altera_tse_ethtool.o \ altera_msgdma.o altera_sgdma.o altera_utils.o \ - intel_fpga_tod.o + intel_fpga_tod.o altera_msgdma_prefetcher.o diff --git a/drivers/net/ethernet/altera/altera_msgdma_prefetcher.c b/drivers/net/ethernet/altera/altera_msgdma_prefetcher.c new file mode 100644 index 000000000000..e8cd4d04730d --- /dev/null +++ b/drivers/net/ethernet/altera/altera_msgdma_prefetcher.c @@ -0,0 +1,431 @@ +// SPDX-License-Identifier: GPL-2.0 +/* MSGDMA Prefetcher driver for Altera ethernet devices + * + * Copyright (C) 2020 Intel Corporation. All rights reserved. + * Author(s): + * Dalon Westergreen + */ + +#include +#include +#include +#include "altera_tse.h" +#include "altera_msgdma.h" +#include "altera_msgdmahw.h" +#include "altera_msgdma_prefetcher.h" +#include "altera_msgdmahw_prefetcher.h" +#include "altera_utils.h" + +int msgdma_pref_initialize(struct altera_tse_private *priv) +{ + int i; + struct msgdma_pref_extended_desc *rx_descs; + struct msgdma_pref_extended_desc *tx_descs; + dma_addr_t rx_descsphys; + dma_addr_t tx_descsphys; + + priv->pref_rxdescphys = (dma_addr_t)0; + priv->pref_txdescphys = (dma_addr_t)0; + + /* we need to allocate more pref descriptors than ringsize to + * prevent all of the descriptors being owned by hw. To do this + * we just allocate twice ring_size descriptors. 
+ * rx_ring_size = priv->rx_ring_size * 2 + * tx_ring_size = priv->tx_ring_size * 2 + */ + + /* The prefetcher requires the descriptors to be aligned to the + * descriptor read/write master's data width which worst case is + * 512 bits. Currently we DO NOT CHECK THIS and only support 32-bit + * prefetcher masters. + */ + + /* allocate memory for rx descriptors */ + priv->pref_rxdesc = + dma_alloc_coherent(priv->device, + sizeof(struct msgdma_pref_extended_desc) + * priv->rx_ring_size * 2, + &priv->pref_rxdescphys, GFP_KERNEL); + + if (!priv->pref_rxdesc) + goto err_rx; + + /* allocate memory for tx descriptors */ + priv->pref_txdesc = + dma_alloc_coherent(priv->device, + sizeof(struct msgdma_pref_extended_desc) + * priv->tx_ring_size * 2, + &priv->pref_txdescphys, GFP_KERNEL); + + if (!priv->pref_txdesc) + goto err_tx; + + /* setup base descriptor ring for tx & rx */ + rx_descs = (struct msgdma_pref_extended_desc *)priv->pref_rxdesc; + tx_descs = (struct msgdma_pref_extended_desc *)priv->pref_txdesc; + tx_descsphys = priv->pref_txdescphys; + rx_descsphys = priv->pref_rxdescphys; + + /* setup RX descriptors */ + priv->pref_rx_prod = 0; + for (i = 0; i < priv->rx_ring_size * 2; i++) { + rx_descsphys = priv->pref_rxdescphys + + (((i + 1) % (priv->rx_ring_size * 2)) * + sizeof(struct msgdma_pref_extended_desc)); + rx_descs[i].next_desc_lo = lower_32_bits(rx_descsphys); + rx_descs[i].next_desc_hi = upper_32_bits(rx_descsphys); + rx_descs[i].stride = MSGDMA_DESC_RX_STRIDE; + /* burst set to 0 so it defaults to max configured */ + /* set seq number to desc number */ + rx_descs[i].burst_seq_num = i; + } + + /* setup TX descriptors */ + for (i = 0; i < priv->tx_ring_size * 2; i++) { + tx_descsphys = priv->pref_txdescphys + + (((i + 1) % (priv->tx_ring_size * 2)) * + sizeof(struct msgdma_pref_extended_desc)); + tx_descs[i].next_desc_lo = lower_32_bits(tx_descsphys); + tx_descs[i].next_desc_hi = upper_32_bits(tx_descsphys); + tx_descs[i].stride = MSGDMA_DESC_TX_STRIDE; + /* burst set to 0 so it defaults to max configured */ + /* set seq number to desc number */ + tx_descs[i].burst_seq_num = i; + } + + if (netif_msg_ifup(priv)) + netdev_info(priv->dev, "%s: RX Desc mem at 0x%x\n", __func__, + priv->pref_rxdescphys); + + if (netif_msg_ifup(priv)) + netdev_info(priv->dev, "%s: TX Desc mem at 0x%x\n", __func__, + priv->pref_txdescphys); + + return 0; + +err_tx: + dma_free_coherent(priv->device, + sizeof(struct msgdma_pref_extended_desc) + * priv->rx_ring_size * 2, + priv->pref_rxdesc, priv->pref_rxdescphys); +err_rx: + return -ENOMEM; +} + +void msgdma_pref_uninitialize(struct altera_tse_private *priv) +{ + if (priv->pref_rxdesc) + dma_free_coherent(priv->device, + sizeof(struct msgdma_pref_extended_desc) + * priv->rx_ring_size * 2, + priv->pref_rxdesc, priv->pref_rxdescphys); + + if (priv->pref_txdesc) + dma_free_coherent(priv->device, + sizeof(struct msgdma_pref_extended_desc) + * priv->tx_ring_size * 2, + priv->pref_txdesc, priv->pref_txdescphys); +} + +void msgdma_pref_enable_txirq(struct altera_tse_private *priv) +{ + tse_set_bit(priv->tx_pref_csr, msgdma_pref_csroffs(control), + MSGDMA_PREF_CTL_GLOBAL_INTR); +} + +void msgdma_pref_disable_txirq(struct altera_tse_private *priv) +{ + tse_clear_bit(priv->tx_pref_csr, msgdma_pref_csroffs(control), + MSGDMA_PREF_CTL_GLOBAL_INTR); +} + +void msgdma_pref_clear_txirq(struct altera_tse_private *priv) +{ + csrwr32(MSGDMA_PREF_STAT_IRQ, priv->tx_pref_csr, + msgdma_pref_csroffs(status)); +} + +void msgdma_pref_enable_rxirq(struct altera_tse_private *priv) +{ + 
tse_set_bit(priv->rx_pref_csr, msgdma_pref_csroffs(control), + MSGDMA_PREF_CTL_GLOBAL_INTR); +} + +void msgdma_pref_disable_rxirq(struct altera_tse_private *priv) +{ + tse_clear_bit(priv->rx_pref_csr, msgdma_pref_csroffs(control), + MSGDMA_PREF_CTL_GLOBAL_INTR); +} + +void msgdma_pref_clear_rxirq(struct altera_tse_private *priv) +{ + csrwr32(MSGDMA_PREF_STAT_IRQ, priv->rx_pref_csr, + msgdma_pref_csroffs(status)); +} + +static u64 timestamp_to_ns(struct msgdma_pref_extended_desc *desc) +{ + u64 ns = 0; + u64 second; + u32 tmp; + + tmp = desc->timestamp_96b[0] >> 16; + tmp |= (desc->timestamp_96b[1] << 16); + + second = desc->timestamp_96b[2]; + second <<= 16; + second |= ((desc->timestamp_96b[1] & 0xffff0000) >> 16); + + ns = second * NSEC_PER_SEC + tmp; + + return ns; +} + +/* Setup TX descriptor + * -> this should never be called when a descriptor isn't available + */ + +netdev_tx_t msgdma_pref_tx_buffer(struct altera_tse_private *priv, + struct tse_buffer *buffer) +{ + u32 desc_entry = priv->tx_prod % (priv->tx_ring_size * 2); + struct msgdma_pref_extended_desc *tx_descs = priv->pref_txdesc; + + /* if for some reason the descriptor is still owned by hardware */ + if (unlikely(tx_descs[desc_entry].desc_control + & MSGDMA_PREF_DESC_CTL_OWNED_BY_HW)) { + if (!netif_queue_stopped(priv->dev)) + netif_stop_queue(priv->dev); + return NETDEV_TX_BUSY; + } + + /* write descriptor entries */ + tx_descs[desc_entry].len = buffer->len; + tx_descs[desc_entry].read_addr_lo = lower_32_bits(buffer->dma_addr); + tx_descs[desc_entry].read_addr_hi = upper_32_bits(buffer->dma_addr); + + /* set the control bits and set owned by hw */ + tx_descs[desc_entry].desc_control = (MSGDMA_DESC_CTL_TX_SINGLE + | MSGDMA_PREF_DESC_CTL_OWNED_BY_HW); + + if (netif_msg_tx_queued(priv)) + netdev_info(priv->dev, "%s: cons: %d prod: %d", + __func__, priv->tx_cons, priv->tx_prod); + + return NETDEV_TX_OK; +} + +u32 msgdma_pref_tx_completions(struct altera_tse_private *priv) +{ + u32 control; + u32 ready = 0; + u32 cons = priv->tx_cons; + u32 desc_ringsize = priv->tx_ring_size * 2; + u32 ringsize = priv->tx_ring_size; + u64 ns = 0; + struct msgdma_pref_extended_desc *cur; + struct tse_buffer *tx_buff; + struct skb_shared_hwtstamps shhwtstamp; + int i; + + if (netif_msg_tx_done(priv)) + for (i = 0; i < desc_ringsize; i++) + netdev_info(priv->dev, "%s: desc: %d control 0x%x\n", + __func__, i, + priv->pref_txdesc[i].desc_control); + + cur = &priv->pref_txdesc[cons % desc_ringsize]; + control = cur->desc_control; + tx_buff = &priv->tx_ring[cons % ringsize]; + + while (!(control & MSGDMA_PREF_DESC_CTL_OWNED_BY_HW) && + (priv->tx_prod != (cons + ready)) && control) { + if (skb_shinfo(tx_buff->skb)->tx_flags & SKBTX_IN_PROGRESS) { + /* Timestamping is enabled, pass timestamp back */ + ns = timestamp_to_ns(cur); + memset(&shhwtstamp, 0, + sizeof(struct skb_shared_hwtstamps)); + shhwtstamp.hwtstamp = ns_to_ktime(ns); + skb_tstamp_tx(tx_buff->skb, &shhwtstamp); + } + + if (netif_msg_tx_done(priv)) + netdev_info(priv->dev, "%s: cur: %d ts: %lld ns\n", + __func__, + ((cons + ready) % desc_ringsize), ns); + + /* clear data */ + cur->desc_control = 0; + cur->timestamp_96b[0] = 0; + cur->timestamp_96b[1] = 0; + cur->timestamp_96b[2] = 0; + + ready++; + cur = &priv->pref_txdesc[(cons + ready) % desc_ringsize]; + tx_buff = &priv->tx_ring[(cons + ready) % ringsize]; + control = cur->desc_control; + } + + return ready; +} + +void msgdma_pref_reset(struct altera_tse_private *priv) +{ + int counter; + + /* turn off polling */ + 
tse_clear_bit(priv->rx_pref_csr, msgdma_pref_csroffs(control), + MSGDMA_PREF_CTL_DESC_POLL_EN); + tse_clear_bit(priv->tx_pref_csr, msgdma_pref_csroffs(control), + MSGDMA_PREF_CTL_DESC_POLL_EN); + + /* Reset the RX Prefetcher */ + csrwr32(MSGDMA_PREF_STAT_IRQ, priv->rx_pref_csr, + msgdma_pref_csroffs(status)); + csrwr32(MSGDMA_PREF_CTL_RESET, priv->rx_pref_csr, + msgdma_pref_csroffs(control)); + + counter = 0; + while (counter++ < ALTERA_TSE_SW_RESET_WATCHDOG_CNTR) { + if (tse_bit_is_clear(priv->rx_pref_csr, + msgdma_pref_csroffs(control), + MSGDMA_PREF_CTL_RESET)) + break; + udelay(1); + } + + if (counter >= ALTERA_TSE_SW_RESET_WATCHDOG_CNTR) + netif_warn(priv, drv, priv->dev, + "TSE Rx Prefetcher reset bit never cleared!\n"); + + /* Reset the TX Prefetcher */ + csrwr32(MSGDMA_PREF_STAT_IRQ, priv->tx_pref_csr, + msgdma_pref_csroffs(status)); + csrwr32(MSGDMA_PREF_CTL_RESET, priv->tx_pref_csr, + msgdma_pref_csroffs(control)); + + counter = 0; + while (counter++ < ALTERA_TSE_SW_RESET_WATCHDOG_CNTR) { + if (tse_bit_is_clear(priv->tx_pref_csr, + msgdma_pref_csroffs(control), + MSGDMA_PREF_CTL_RESET)) + break; + udelay(1); + } + + if (counter >= ALTERA_TSE_SW_RESET_WATCHDOG_CNTR) + netif_warn(priv, drv, priv->dev, + "TSE Tx Prefetcher reset bit never cleared!\n"); + + /* clear all status bits */ + csrwr32(MSGDMA_PREF_STAT_IRQ, priv->tx_pref_csr, + msgdma_pref_csroffs(status)); + + /* Reset mSGDMA dispatchers*/ + msgdma_reset(priv); +} + +/* Setup the RX and TX prefetchers to poll the descriptor chain */ +void msgdma_pref_start_rxdma(struct altera_tse_private *priv) +{ + csrwr32(priv->rx_poll_freq, priv->rx_pref_csr, + msgdma_pref_csroffs(desc_poll_freq)); + csrwr32(lower_32_bits(priv->pref_rxdescphys), priv->rx_pref_csr, + msgdma_pref_csroffs(next_desc_lo)); + csrwr32(upper_32_bits(priv->pref_rxdescphys), priv->rx_pref_csr, + msgdma_pref_csroffs(next_desc_hi)); + tse_set_bit(priv->rx_pref_csr, msgdma_pref_csroffs(control), + MSGDMA_PREF_CTL_DESC_POLL_EN | MSGDMA_PREF_CTL_RUN); +} + +void msgdma_pref_start_txdma(struct altera_tse_private *priv) +{ + csrwr32(priv->tx_poll_freq, priv->tx_pref_csr, + msgdma_pref_csroffs(desc_poll_freq)); + csrwr32(lower_32_bits(priv->pref_txdescphys), priv->tx_pref_csr, + msgdma_pref_csroffs(next_desc_lo)); + csrwr32(upper_32_bits(priv->pref_txdescphys), priv->tx_pref_csr, + msgdma_pref_csroffs(next_desc_hi)); + tse_set_bit(priv->tx_pref_csr, msgdma_pref_csroffs(control), + MSGDMA_PREF_CTL_DESC_POLL_EN | MSGDMA_PREF_CTL_RUN); +} + +/* Add MSGDMA Prefetcher Descriptor to descriptor list + * -> This should never be called when a descriptor isn't available + */ +void msgdma_pref_add_rx_desc(struct altera_tse_private *priv, + struct tse_buffer *rxbuffer) +{ + struct msgdma_pref_extended_desc *rx_descs = priv->pref_rxdesc; + u32 desc_entry = priv->pref_rx_prod % (priv->rx_ring_size * 2); + + /* write descriptor entries */ + rx_descs[desc_entry].len = priv->rx_dma_buf_sz; + rx_descs[desc_entry].write_addr_lo = lower_32_bits(rxbuffer->dma_addr); + rx_descs[desc_entry].write_addr_hi = upper_32_bits(rxbuffer->dma_addr); + + /* set the control bits and set owned by hw */ + rx_descs[desc_entry].desc_control = (MSGDMA_DESC_CTL_END_ON_EOP + | MSGDMA_DESC_CTL_END_ON_LEN + | MSGDMA_DESC_CTL_TR_COMP_IRQ + | MSGDMA_DESC_CTL_EARLY_IRQ + | MSGDMA_DESC_CTL_TR_ERR_IRQ + | MSGDMA_DESC_CTL_GO + | MSGDMA_PREF_DESC_CTL_OWNED_BY_HW); + + /* we need to keep a separate one for rx as RX_DESCRIPTORS are + * pre-configured at startup + */ + priv->pref_rx_prod++; + + if 
(netif_msg_rx_status(priv)) { + netdev_info(priv->dev, "%s: desc: %d buf: %d control 0x%x\n", + __func__, desc_entry, + priv->rx_prod % priv->rx_ring_size, + priv->pref_rxdesc[desc_entry].desc_control); + } +} + +u32 msgdma_pref_rx_status(struct altera_tse_private *priv) +{ + u32 rxstatus = 0; + u32 pktlength; + u32 pktstatus; + u64 ns = 0; + u32 entry = priv->rx_cons % priv->rx_ring_size; + u32 desc_entry = priv->rx_prod % (priv->rx_ring_size * 2); + struct msgdma_pref_extended_desc *rx_descs = priv->pref_rxdesc; + struct skb_shared_hwtstamps *shhwtstamp = NULL; + struct tse_buffer *rx_buff = priv->rx_ring; + + /* if the current entry is not owned by hardware, process it */ + if (!(rx_descs[desc_entry].desc_control + & MSGDMA_PREF_DESC_CTL_OWNED_BY_HW) && + rx_descs[desc_entry].desc_control) { + pktlength = rx_descs[desc_entry].bytes_transferred; + pktstatus = rx_descs[desc_entry].desc_status; + rxstatus = pktstatus; + rxstatus = rxstatus << 16; + rxstatus |= (pktlength & 0xffff); + + /* get the timestamp */ + if (priv->hwts_rx_en) { + ns = timestamp_to_ns(&rx_descs[desc_entry]); + shhwtstamp = skb_hwtstamps(rx_buff[entry].skb); + memset(shhwtstamp, 0, + sizeof(struct skb_shared_hwtstamps)); + shhwtstamp->hwtstamp = ns_to_ktime(ns); + } + + /* clear data */ + rx_descs[desc_entry].desc_control = 0; + rx_descs[desc_entry].timestamp_96b[0] = 0; + rx_descs[desc_entry].timestamp_96b[1] = 0; + rx_descs[desc_entry].timestamp_96b[2] = 0; + + if (netif_msg_rx_status(priv)) + netdev_info(priv->dev, "%s: desc: %d buf: %d ts: %lld ns", + __func__, desc_entry, entry, ns); + } + return rxstatus; +} diff --git a/drivers/net/ethernet/altera/altera_msgdma_prefetcher.h b/drivers/net/ethernet/altera/altera_msgdma_prefetcher.h new file mode 100644 index 000000000000..6507c2805a05 --- /dev/null +++ b/drivers/net/ethernet/altera/altera_msgdma_prefetcher.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* MSGDMA Prefetcher driver for Altera ethernet devices + * + * Copyright (C) 2020 Intel Corporation. All rights reserved. 
+ * Author(s): + * Dalon Westergreen + */ + +#ifndef __ALTERA_PREF_MSGDMA_H__ +#define __ALTERA_PREF_MSGDMA_H__ + +void msgdma_pref_reset(struct altera_tse_private *priv); +void msgdma_pref_enable_txirq(struct altera_tse_private *priv); +void msgdma_pref_enable_rxirq(struct altera_tse_private *priv); +void msgdma_pref_disable_rxirq(struct altera_tse_private *priv); +void msgdma_pref_disable_txirq(struct altera_tse_private *priv); +void msgdma_pref_clear_rxirq(struct altera_tse_private *priv); +void msgdma_pref_clear_txirq(struct altera_tse_private *priv); +u32 msgdma_pref_tx_completions(struct altera_tse_private *priv); +void msgdma_pref_add_rx_desc(struct altera_tse_private *priv, + struct tse_buffer *buffer); +netdev_tx_t msgdma_pref_tx_buffer(struct altera_tse_private *priv, + struct tse_buffer *buffer); +u32 msgdma_pref_rx_status(struct altera_tse_private *priv); +int msgdma_pref_initialize(struct altera_tse_private *priv); +void msgdma_pref_uninitialize(struct altera_tse_private *priv); +void msgdma_pref_start_rxdma(struct altera_tse_private *priv); +void msgdma_pref_start_txdma(struct altera_tse_private *priv); + +#endif /* __ALTERA_PREF_MSGDMA_H__ */ diff --git a/drivers/net/ethernet/altera/altera_msgdmahw_prefetcher.h b/drivers/net/ethernet/altera/altera_msgdmahw_prefetcher.h new file mode 100644 index 000000000000..efda31e491ca --- /dev/null +++ b/drivers/net/ethernet/altera/altera_msgdmahw_prefetcher.h @@ -0,0 +1,87 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* MSGDMA Prefetcher driver for Altera ethernet devices + * + * Copyright (C) 2020 Intel Corporation. + * Contributors: + * Dalon Westergreen + * Thomas Chou + * Ian Abbott + * Yuriy Kozlov + * Tobias Klauser + * Andriy Smolskyy + * Roman Bulgakov + * Dmytro Mytarchuk + * Matthew Gerlach + */ + +#ifndef __ALTERA_MSGDMAHW_PREFETCHER_H__ +#define __ALTERA_MSGDMAHW_PREFETCHER_H__ + +/* mSGDMA prefetcher extended prefectcher descriptor format + */ +struct msgdma_pref_extended_desc { + /* data buffer source address low bits */ + u32 read_addr_lo; + /* data buffer destination address low bits */ + u32 write_addr_lo; + /* the number of bytes to transfer */ + u32 len; + /* next descriptor low address */ + u32 next_desc_lo; + /* number of bytes transferred */ + u32 bytes_transferred; + u32 desc_status; + u32 reserved_18; + /* bit 31:24 write burst */ + /* bit 23:16 read burst */ + /* bit 15:0 sequence number */ + u32 burst_seq_num; + /* bit 31:16 write stride */ + /* bit 15:0 read stride */ + u32 stride; + /* data buffer source address high bits */ + u32 read_addr_hi; + /* data buffer destination address high bits */ + u32 write_addr_hi; + /* next descriptor high address */ + u32 next_desc_hi; + /* prefetcher mod now writes these reserved bits*/ + /* Response bits [191:160] */ + u32 timestamp_96b[3]; + /* desc_control */ + u32 desc_control; +}; + +/* mSGDMA Prefetcher Descriptor Status bits */ +#define MSGDMA_PREF_DESC_STAT_STOPPED_ON_EARLY BIT(8) +#define MSGDMA_PREF_DESC_STAT_MASK 0xFF + +/* mSGDMA Prefetcher Descriptor Control bits */ +/* bit 31 and bits 29-0 are the same as the normal dispatcher ctl flags */ +#define MSGDMA_PREF_DESC_CTL_OWNED_BY_HW BIT(30) + +/* mSGDMA Prefetcher CSR */ +struct msgdma_prefetcher_csr { + u32 control; + u32 next_desc_lo; + u32 next_desc_hi; + u32 desc_poll_freq; + u32 status; +}; + +/* mSGDMA Prefetcher Control */ +#define MSGDMA_PREF_CTL_PARK BIT(4) +#define MSGDMA_PREF_CTL_GLOBAL_INTR BIT(3) +#define MSGDMA_PREF_CTL_RESET BIT(2) +#define MSGDMA_PREF_CTL_DESC_POLL_EN BIT(1) +#define 
MSGDMA_PREF_CTL_RUN BIT(0) + +#define MSGDMA_PREF_POLL_FREQ_MASK 0xFFFF + +/* mSGDMA Prefetcher Status */ +#define MSGDMA_PREF_STAT_IRQ BIT(0) + +#define msgdma_pref_csroffs(a) (offsetof(struct msgdma_prefetcher_csr, a)) +#define msgdma_pref_descroffs(a) (offsetof(struct msgdma_pref_extended_desc, a)) + +#endif /* __ALTERA_MSGDMAHW_PREFETCHER_H__*/ diff --git a/drivers/net/ethernet/altera/altera_tse.h b/drivers/net/ethernet/altera/altera_tse.h index b7c176a808ac..35f99bfce4ef 100644 --- a/drivers/net/ethernet/altera/altera_tse.h +++ b/drivers/net/ethernet/altera/altera_tse.h @@ -382,6 +382,7 @@ struct altera_tse_private; #define ALTERA_DTYPE_SGDMA 1 #define ALTERA_DTYPE_MSGDMA 2 +#define ALTERA_DTYPE_MSGDMA_PREF 3 /* standard DMA interface for SGDMA and MSGDMA */ struct altera_dmaops { @@ -434,6 +435,19 @@ struct altera_tse_private { void __iomem *tx_dma_csr; void __iomem *tx_dma_desc; + /* mSGDMA Rx Prefecher address space */ + void __iomem *rx_pref_csr; + struct msgdma_pref_extended_desc *pref_rxdesc; + dma_addr_t pref_rxdescphys; + u32 pref_rx_prod; + + /* mSGDMA Tx Prefecher address space */ + void __iomem *tx_pref_csr; + struct msgdma_pref_extended_desc *pref_txdesc; + dma_addr_t pref_txdescphys; + u32 rx_poll_freq; + u32 tx_poll_freq; + /* Rx buffers queue */ struct tse_buffer *rx_ring; u32 rx_cons; diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c index c874b8c1dd48..d061381bd545 100644 --- a/drivers/net/ethernet/altera/altera_tse_main.c +++ b/drivers/net/ethernet/altera/altera_tse_main.c @@ -44,6 +44,7 @@ #include "altera_sgdma.h" #include "altera_msgdma.h" #include "intel_fpga_tod.h" +#include "altera_msgdma_prefetcher.h" static atomic_t instance_count = ATOMIC_INIT(~0); /* Module parameters */ @@ -1500,6 +1501,34 @@ static int altera_tse_probe(struct platform_device *pdev) priv->rxdescmem = resource_size(dma_res); priv->rxdescmem_busaddr = dma_res->start; + } else if (priv->dmaops && + priv->dmaops->altera_dtype == + ALTERA_DTYPE_MSGDMA_PREF) { + /* mSGDMA Rx Prefetcher address space */ + ret = request_and_map(pdev, "rx_pref", &dma_res, + &priv->rx_pref_csr); + if (ret) + goto err_free_netdev; + + /* mSGDMA Tx Prefetcher address space */ + ret = request_and_map(pdev, "tx_pref", &dma_res, + &priv->tx_pref_csr); + if (ret) + goto err_free_netdev; + + /* get prefetcher rx poll frequency from device tree */ + if (of_property_read_u32(pdev->dev.of_node, "rx-poll-freq", + &priv->rx_poll_freq)) { + dev_info(&pdev->dev, "Defaulting RX Poll Frequency to 128\n"); + priv->rx_poll_freq = 128; + } + + /* get prefetcher rx poll frequency from device tree */ + if (of_property_read_u32(pdev->dev.of_node, "tx-poll-freq", + &priv->tx_poll_freq)) { + dev_info(&pdev->dev, "Defaulting TX Poll Frequency to 128\n"); + priv->tx_poll_freq = 128; + } } else { goto err_free_netdev; } @@ -1758,7 +1787,29 @@ static const struct altera_dmaops altera_dtype_msgdma = { .start_txdma = NULL, }; +static const struct altera_dmaops altera_dtype_prefetcher = { + .altera_dtype = ALTERA_DTYPE_MSGDMA_PREF, + .dmamask = 64, + .reset_dma = msgdma_pref_reset, + .enable_txirq = msgdma_pref_enable_txirq, + .enable_rxirq = msgdma_pref_enable_rxirq, + .disable_txirq = msgdma_pref_disable_txirq, + .disable_rxirq = msgdma_pref_disable_rxirq, + .clear_txirq = msgdma_pref_clear_txirq, + .clear_rxirq = msgdma_pref_clear_rxirq, + .tx_buffer = msgdma_pref_tx_buffer, + .tx_completions = msgdma_pref_tx_completions, + .add_rx_desc = msgdma_pref_add_rx_desc, + .get_rx_status = 
msgdma_pref_rx_status, + .init_dma = msgdma_pref_initialize, + .uninit_dma = msgdma_pref_uninitialize, + .start_rxdma = msgdma_pref_start_rxdma, + .start_txdma = msgdma_pref_start_txdma, +}; + static const struct of_device_id altera_tse_ids[] = { + { .compatible = "altr,tse-msgdma-2.0", + .data = &altera_dtype_prefetcher, }, { .compatible = "altr,tse-msgdma-1.0", .data = &altera_dtype_msgdma, }, { .compatible = "altr,tse-1.0", .data = &altera_dtype_sgdma, }, { .compatible = "ALTR,tse-1.0", .data = &altera_dtype_sgdma, },
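Closing the section, here is a compact sketch of the descriptor-ownership
handshake the prefetcher patch above is built around: software fills a
descriptor and sets the owned-by-hardware bit, the polling prefetcher
dispatches it and clears the bit when it writes the response back, and
completion handling only consumes descriptors whose bit is clear. Field and
bit names follow the patch (struct msgdma_pref_extended_desc,
MSGDMA_PREF_DESC_CTL_OWNED_BY_HW); the helper itself is hypothetical, a
simplified view of msgdma_pref_tx_buffer.

#include <linux/kernel.h>

#include "altera_tse.h"
#include "altera_msgdmahw.h"
#include "altera_msgdmahw_prefetcher.h"

/* Sketch: hand one filled TX descriptor to the hardware-owned ring.  The
 * prefetcher polls the ring, dispatches descriptors it owns, and clears
 * the ownership bit in its writeback.
 */
static void sketch_pref_post_tx(struct altera_tse_private *priv,
				struct tse_buffer *buffer)
{
	u32 entry = priv->tx_prod % (priv->tx_ring_size * 2);
	struct msgdma_pref_extended_desc *desc = &priv->pref_txdesc[entry];

	desc->len = buffer->len;
	desc->read_addr_lo = lower_32_bits(buffer->dma_addr);
	desc->read_addr_hi = upper_32_bits(buffer->dma_addr);

	/* setting this bit is the hand-off point; software must not touch
	 * the descriptor again until the prefetcher clears it
	 */
	desc->desc_control = MSGDMA_DESC_CTL_TX_SINGLE |
			     MSGDMA_PREF_DESC_CTL_OWNED_BY_HW;
}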