From patchwork Wed Sep 16 09:33:45 2020
X-Patchwork-Submitter: Huazhong Tan
X-Patchwork-Id: 260789
From: Huazhong Tan
Subject: [PATCH V2 net-next 1/6] net: hns3: batch the page reference count updates
Date: Wed, 16 Sep 2020 17:33:45 +0800
Message-ID: <1600248830-59477-2-git-send-email-tanhuazhong@huawei.com>
In-Reply-To: <1600248830-59477-1-git-send-email-tanhuazhong@huawei.com>
References: <1600248830-59477-1-git-send-email-tanhuazhong@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

From: Yunsheng Lin

Batch the page reference count updates instead of doing them one at a
time. By doing this we can improve the overall receive performance by
avoiding some atomic increment operations when the rx page is reused.
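The idea is easiest to see in isolation. The sketch below is illustrative
only and is not the driver code: the rx_buf structure and the rx_buf_*
helpers are made-up names, while page_ref_add(), page_count() and
__page_frag_cache_drain() are the existing kernel helpers the patch relies
on. In the real driver the bias is stored in struct hns3_desc_cb, as the
diff below shows.

#include <linux/kernel.h>
#include <linux/mm.h>

/* Hypothetical receive-buffer bookkeeping, for illustration only. */
struct rx_buf {
        struct page *page;
        u16 pagecnt_bias;       /* page references this buffer still owns */
};

/* Take a large batch of references once instead of one get_page() per reuse. */
static void rx_buf_init(struct rx_buf *buf, struct page *page)
{
        page_ref_add(page, USHRT_MAX - 1);      /* allocation already holds one */
        buf->page = page;
        buf->pagecnt_bias = USHRT_MAX;
}

/* Handing a fragment to the stack just consumes one pre-taken reference. */
static void rx_buf_give_ref(struct rx_buf *buf)
{
        buf->pagecnt_bias--;
}

/* Called after rx_buf_give_ref(): a result of 1 means the fragment we just
 * handed out is the only reference not covered by the bias, i.e. no earlier
 * fragments from this page are still in flight, so the page can be reused.
 */
static bool rx_buf_reusable(const struct rx_buf *buf)
{
        return page_count(buf->page) - buf->pagecnt_bias == 1;
}

/* On teardown, return every reference still owned in a single update. */
static void rx_buf_free(struct rx_buf *buf)
{
        if (buf->pagecnt_bias)
                __page_frag_cache_drain(buf->page, buf->pagecnt_bias);
}

With the bias in place, reusing a page only touches this driver-private
counter; the shared atomic refcount is written roughly once per USHRT_MAX
handed-out fragments (or on teardown) instead of once per received buffer.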
Signed-off-by: Yunsheng Lin
Signed-off-by: Huazhong Tan
---
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 32 ++++++++++++++++++-------
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.h |  1 +
 2 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 93825a4..3762142 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -2302,6 +2302,8 @@ static int hns3_alloc_buffer(struct hns3_enet_ring *ring,
 	cb->buf = page_address(p);
 	cb->length = hns3_page_size(ring);
 	cb->type = DESC_TYPE_PAGE;
+	page_ref_add(p, USHRT_MAX - 1);
+	cb->pagecnt_bias = USHRT_MAX;
 
 	return 0;
 }
@@ -2311,8 +2313,8 @@ static void hns3_free_buffer(struct hns3_enet_ring *ring,
 {
 	if (cb->type == DESC_TYPE_SKB)
 		dev_kfree_skb_any((struct sk_buff *)cb->priv);
-	else if (!HNAE3_IS_TX_RING(ring))
-		put_page((struct page *)cb->priv);
+	else if (!HNAE3_IS_TX_RING(ring) && cb->pagecnt_bias)
+		__page_frag_cache_drain(cb->priv, cb->pagecnt_bias);
 
 	memset(cb, 0, sizeof(*cb));
 }
@@ -2610,6 +2612,11 @@ static bool hns3_page_is_reusable(struct page *page)
 		!page_is_pfmemalloc(page);
 }
 
+static bool hns3_can_reuse_page(struct hns3_desc_cb *cb)
+{
+	return (page_count(cb->priv) - cb->pagecnt_bias) == 1;
+}
+
 static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
 				struct hns3_enet_ring *ring, int pull_len,
 				struct hns3_desc_cb *desc_cb)
@@ -2618,6 +2625,7 @@ static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
 	int size = le16_to_cpu(desc->rx.size);
 	u32 truesize = hns3_buf_size(ring);
 
+	desc_cb->pagecnt_bias--;
 	skb_add_rx_frag(skb, i, desc_cb->priv, desc_cb->page_offset + pull_len,
 			size - pull_len, truesize);
 
@@ -2625,20 +2633,27 @@ static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
 	 * when page_offset rollback to zero, flag default unreuse
 	 */
 	if (unlikely(!hns3_page_is_reusable(desc_cb->priv)) ||
-	    (!desc_cb->page_offset && page_count(desc_cb->priv) > 1))
+	    (!desc_cb->page_offset && !hns3_can_reuse_page(desc_cb))) {
+		__page_frag_cache_drain(desc_cb->priv, desc_cb->pagecnt_bias);
 		return;
+	}
 
 	/* Move offset up to the next cache line */
 	desc_cb->page_offset += truesize;
 
 	if (desc_cb->page_offset + truesize <= hns3_page_size(ring)) {
 		desc_cb->reuse_flag = 1;
-		/* Bump ref count on page before it is given */
-		get_page(desc_cb->priv);
-	} else if (page_count(desc_cb->priv) == 1) {
+	} else if (hns3_can_reuse_page(desc_cb)) {
 		desc_cb->reuse_flag = 1;
 		desc_cb->page_offset = 0;
-		get_page(desc_cb->priv);
+	} else if (desc_cb->pagecnt_bias) {
+		__page_frag_cache_drain(desc_cb->priv, desc_cb->pagecnt_bias);
+		return;
+	}
+
+	if (unlikely(!desc_cb->pagecnt_bias)) {
+		page_ref_add(desc_cb->priv, USHRT_MAX);
+		desc_cb->pagecnt_bias = USHRT_MAX;
 	}
 }
 
@@ -2846,7 +2861,8 @@ static int hns3_alloc_skb(struct hns3_enet_ring *ring, unsigned int length,
 		if (likely(hns3_page_is_reusable(desc_cb->priv)))
 			desc_cb->reuse_flag = 1;
 		else /* This page cannot be reused so discard it */
-			put_page(desc_cb->priv);
+			__page_frag_cache_drain(desc_cb->priv,
+						desc_cb->pagecnt_bias);
 
 		ring_ptr_move_fw(ring, next_to_clean);
 		return 0;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
index 98ca6ea..8f784094 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
@@ -287,6 +287,7 @@ struct hns3_desc_cb {
 	/* desc type, used by the ring user to mark the type of the priv data */
 	u16 type;
+	u16 pagecnt_bias;
 };
 
 enum hns3_pkt_l3type {

From patchwork Wed Sep 16 09:33:46 2020
X-Patchwork-Submitter: Huazhong Tan
X-Patchwork-Id: 260787
From: Huazhong Tan
Subject: [PATCH V2 net-next 2/6] net: hns3: batch tx doorbell operation
Date: Wed, 16 Sep 2020 17:33:46 +0800
Message-ID: <1600248830-59477-3-git-send-email-tanhuazhong@huawei.com>
In-Reply-To: <1600248830-59477-1-git-send-email-tanhuazhong@huawei.com>
References: <1600248830-59477-1-git-send-email-tanhuazhong@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

From: Yunsheng Lin

Use netdev_xmit_more() to defer the tx doorbell operation when skbs are
passed to the driver back to back. By doing this we can improve the
overall xmit performance by avoiding some doorbell operations.

Also, the tx_err_cnt stat is unused, so rename it to tx_more.
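The core-networking contract that the patch builds on can be sketched as
follows. Everything prefixed with my_ is a made-up placeholder, while
__netdev_tx_sent_queue(), netdev_xmit_more() and netdev_get_tx_queue() are
the real networking-core APIs; the driver's actual helper is
hns3_tx_doorbell() in the diff below.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Minimal ring state for the sketch. */
struct my_tx_ring {
        int pending_buf;        /* descriptors queued but not yet doorbelled */
};

/* Placeholder for the device-specific doorbell register write. */
static void my_ring_doorbell(struct my_tx_ring *ring, int num)
{
        /* e.g. writel(num, <doorbell register>); */
}

static void my_tx_kick(struct my_tx_ring *ring, int num, bool doorbell)
{
        ring->pending_buf += num;

        /* More skbs are on the way: defer the (expensive) MMIO write. */
        if (!doorbell || !ring->pending_buf)
                return;

        wmb();  /* descriptors must be visible before the doorbell */
        my_ring_doorbell(ring, ring->pending_buf);
        ring->pending_buf = 0;
}

/* Tail of an ndo_start_xmit handler, after bd_num descriptors were filled. */
static netdev_tx_t my_xmit_tail(struct my_tx_ring *ring, struct sk_buff *skb,
                                struct net_device *dev, int bd_num)
{
        struct netdev_queue *txq;

        txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

        /* __netdev_tx_sent_queue() folds BQL accounting and the xmit_more
         * hint into one decision: it returns true when the doorbell must be
         * rung now (no further skb pending, or the queue just filled up).
         */
        my_tx_kick(ring, bd_num,
                   __netdev_tx_sent_queue(txq, skb->len, netdev_xmit_more()));

        return NETDEV_TX_OK;
}

Note that the error paths still have to call the kick helper with an
explicit doorbell decision, otherwise descriptors queued by earlier
xmit_more skbs could be left sitting in the ring; that is why the diff
below also adds hns3_tx_doorbell() calls to the drop and busy paths.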
Signed-off-by: Yunsheng Lin
Signed-off-by: Huazhong Tan
---
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c    | 47 +++++++++++++++++-----
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.h    |  2 +-
 drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c |  2 +-
 3 files changed, 39 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 3762142..6a57c0d 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -1383,6 +1383,27 @@ static int hns3_fill_skb_to_desc(struct hns3_enet_ring *ring,
 	return bd_num;
 }
 
+static void hns3_tx_doorbell(struct hns3_enet_ring *ring, int num,
+			     bool doorbell)
+{
+	ring->pending_buf += num;
+
+	if (!doorbell) {
+		u64_stats_update_begin(&ring->syncp);
+		ring->stats.tx_more++;
+		u64_stats_update_end(&ring->syncp);
+		return;
+	}
+
+	if (!ring->pending_buf)
+		return;
+
+	wmb(); /* Commit all data before submit */
+
+	hnae3_queue_xmit(ring->tqp, ring->pending_buf);
+	ring->pending_buf = 0;
+}
+
 netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
 {
 	struct hns3_nic_priv *priv = netdev_priv(netdev);
@@ -1391,11 +1412,14 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
 	int pre_ntu, next_to_use_head;
 	struct sk_buff *frag_skb;
 	int bd_num = 0;
+	bool doorbell;
 	int ret;
 
 	/* Hardware can only handle short frames above 32 bytes */
-	if (skb_put_padto(skb, HNS3_MIN_TX_LEN))
+	if (skb_put_padto(skb, HNS3_MIN_TX_LEN)) {
+		hns3_tx_doorbell(ring, 0, !netdev_xmit_more());
 		return NETDEV_TX_OK;
+	}
 
 	/* Prefetch the data used later */
 	prefetch(skb->data);
@@ -1406,6 +1430,7 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
 			u64_stats_update_begin(&ring->syncp);
 			ring->stats.tx_busy++;
 			u64_stats_update_end(&ring->syncp);
+			hns3_tx_doorbell(ring, 0, true);
 			return NETDEV_TX_BUSY;
 		} else if (ret == -ENOMEM) {
 			u64_stats_update_begin(&ring->syncp);
@@ -1446,11 +1471,9 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
 	/* Complete translate all packets */
 	dev_queue = netdev_get_tx_queue(netdev, ring->queue_index);
-	netdev_tx_sent_queue(dev_queue, skb->len);
-
-	wmb(); /* Commit all data before submit */
-
-	hnae3_queue_xmit(ring->tqp, bd_num);
+	doorbell = __netdev_tx_sent_queue(dev_queue, skb->len,
+					  netdev_xmit_more());
+	hns3_tx_doorbell(ring, bd_num, doorbell);
 
 	return NETDEV_TX_OK;
 
@@ -1459,6 +1482,7 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
 
 out_err_tx_ok:
 	dev_kfree_skb_any(skb);
+	hns3_tx_doorbell(ring, 0, !netdev_xmit_more());
 	return NETDEV_TX_OK;
 }
 
@@ -1839,13 +1863,14 @@ static bool hns3_get_tx_timeo_queue_info(struct net_device *ndev)
 		    tx_ring->next_to_clean, napi->state);
 
 	netdev_info(ndev,
-		    "tx_pkts: %llu, tx_bytes: %llu, io_err_cnt: %llu, sw_err_cnt: %llu\n",
+		    "tx_pkts: %llu, tx_bytes: %llu, io_err_cnt: %llu, sw_err_cnt: %llu, tx_pending: %d\n",
 		    tx_ring->stats.tx_pkts, tx_ring->stats.tx_bytes,
-		    tx_ring->stats.io_err_cnt, tx_ring->stats.sw_err_cnt);
+		    tx_ring->stats.io_err_cnt, tx_ring->stats.sw_err_cnt,
+		    tx_ring->pending_buf);
 
 	netdev_info(ndev,
-		    "seg_pkt_cnt: %llu, tx_err_cnt: %llu, restart_queue: %llu, tx_busy: %llu\n",
-		    tx_ring->stats.seg_pkt_cnt, tx_ring->stats.tx_err_cnt,
+		    "seg_pkt_cnt: %llu, tx_more: %llu, restart_queue: %llu, tx_busy: %llu\n",
+		    tx_ring->stats.seg_pkt_cnt, tx_ring->stats.tx_more,
 		    tx_ring->stats.restart_queue, tx_ring->stats.tx_busy);
 
 	/* When mac received many pause frames continuous, it's unable to send
@@ -4181,6 +4206,8 @@ static void hns3_clear_tx_ring(struct hns3_enet_ring *ring)
 		hns3_free_buffer_detach(ring, ring->next_to_clean);
 		ring_ptr_move_fw(ring, next_to_clean);
 	}
+
+	ring->pending_buf = 0;
 }
 
 static int hns3_clear_rx_ring(struct hns3_enet_ring *ring)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
index 8f784094..f40738c 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
@@ -351,7 +351,7 @@ struct ring_stats {
 		struct {
 			u64 tx_pkts;
 			u64 tx_bytes;
-			u64 tx_err_cnt;
+			u64 tx_more;
 			u64 restart_queue;
 			u64 tx_busy;
 			u64 tx_copy;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
index 2622e04..97ad68b 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
@@ -32,7 +32,7 @@ static const struct hns3_stats hns3_txq_stats[] = {
 	HNS3_TQP_STAT("seg_pkt_cnt", seg_pkt_cnt),
 	HNS3_TQP_STAT("packets", tx_pkts),
 	HNS3_TQP_STAT("bytes", tx_bytes),
-	HNS3_TQP_STAT("errors", tx_err_cnt),
+	HNS3_TQP_STAT("more", tx_more),
 	HNS3_TQP_STAT("wake", restart_queue),
 	HNS3_TQP_STAT("busy", tx_busy),
 	HNS3_TQP_STAT("copy", tx_copy),

From patchwork Wed Sep 16 09:33:47 2020
X-Patchwork-Submitter: Huazhong Tan
X-Patchwork-Id: 260786
From: Huazhong Tan
Subject: [PATCH V2 net-next 3/6] net: hns3: optimize the tx clean process
Date: Wed, 16 Sep 2020 17:33:47 +0800
Message-ID: <1600248830-59477-4-git-send-email-tanhuazhong@huawei.com>
In-Reply-To: <1600248830-59477-1-git-send-email-tanhuazhong@huawei.com>
References: <1600248830-59477-1-git-send-email-tanhuazhong@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

From: Yunsheng Lin

Currently HNS3_RING_TX_RING_HEAD_REG
register is read to determine how many tx descriptors can be cleaned. To
avoid this register read in the critical data path, use the valid bit in
the tx descriptor instead to decide whether a specific tx descriptor can
be cleaned. The driver sets the valid bit in the tx descriptor before
ringing the doorbell, and the hardware only clears the valid bit after
the corresponding packet has been sent out to the wire.

Because next_to_use for the tx ring keeps changing while the driver is
filling tx descriptors, reuse the rx ring's pull_len field (as
last_to_use) to record the last tx descriptor that has been notified to
the hardware, so that hns3_nic_reclaim_desc() knows how many
descriptors' valid bits need to be checked when reclaiming.

The io_err_cnt stat is also removed since it is no longer used.

Signed-off-by: Yunsheng Lin
Signed-off-by: Huazhong Tan
---
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c    | 64 ++++++++++------------
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.h    | 12 ++--
 drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c |  2 -
 3 files changed, 33 insertions(+), 45 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 6a57c0d..2db6c03 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -1402,6 +1402,7 @@ static void hns3_tx_doorbell(struct hns3_enet_ring *ring, int num,
 
 	hnae3_queue_xmit(ring->tqp, ring->pending_buf);
 	ring->pending_buf = 0;
+	WRITE_ONCE(ring->last_to_use, ring->next_to_use);
 }
 
 netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
@@ -1863,10 +1864,9 @@ static bool hns3_get_tx_timeo_queue_info(struct net_device *ndev)
 		    tx_ring->next_to_clean, napi->state);
 
 	netdev_info(ndev,
-		    "tx_pkts: %llu, tx_bytes: %llu, io_err_cnt: %llu, sw_err_cnt: %llu, tx_pending: %d\n",
+		    "tx_pkts: %llu, tx_bytes: %llu, sw_err_cnt: %llu, tx_pending: %d\n",
 		    tx_ring->stats.tx_pkts, tx_ring->stats.tx_bytes,
-		    tx_ring->stats.io_err_cnt, tx_ring->stats.sw_err_cnt,
-		    tx_ring->pending_buf);
+		    tx_ring->stats.sw_err_cnt, tx_ring->pending_buf);
 
 	netdev_info(ndev,
 		    "seg_pkt_cnt: %llu, tx_more: %llu, restart_queue: %llu, tx_busy: %llu\n",
@@ -2491,13 +2491,26 @@ static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i)
 				       DMA_FROM_DEVICE);
 }
 
-static void hns3_nic_reclaim_desc(struct hns3_enet_ring *ring, int head,
+static bool hns3_nic_reclaim_desc(struct hns3_enet_ring *ring,
 				  int *bytes, int *pkts)
 {
+	/* pair with ring->last_to_use update in hns3_tx_doorbell(),
+	 * smp_store_release() is not used in hns3_tx_doorbell() because
+	 * the doorbell operation already have the needed barrier operation.
+	 */
+	int ltu = smp_load_acquire(&ring->last_to_use);
 	int ntc = ring->next_to_clean;
 	struct hns3_desc_cb *desc_cb;
+	bool reclaimed = false;
+	struct hns3_desc *desc;
+
+	while (ltu != ntc) {
+		desc = &ring->desc[ntc];
+
+		if (le16_to_cpu(desc->tx.bdtp_fe_sc_vld_ra_ri) &
+		    BIT(HNS3_TXD_VLD_B))
+			break;
 
-	while (head != ntc) {
 		desc_cb = &ring->desc_cb[ntc];
 		(*pkts) += (desc_cb->type == DESC_TYPE_SKB);
 		(*bytes) += desc_cb->length;
@@ -2509,23 +2522,17 @@ static void hns3_nic_reclaim_desc(struct hns3_enet_ring *ring, int head,
 
 		/* Issue prefetch for next Tx descriptor */
 		prefetch(&ring->desc_cb[ntc]);
+		reclaimed = true;
 	}
 
+	if (unlikely(!reclaimed))
+		return false;
+
 	/* This smp_store_release() pairs with smp_load_acquire() in
 	 * ring_space called by hns3_nic_net_xmit.
 	 */
 	smp_store_release(&ring->next_to_clean, ntc);
-}
-
-static int is_valid_clean_head(struct hns3_enet_ring *ring, int h)
-{
-	int u = ring->next_to_use;
-	int c = ring->next_to_clean;
-
-	if (unlikely(h > ring->desc_num))
-		return 0;
-
-	return u > c ? (h > c && h <= u) : (h > c || h <= u);
+	return true;
 }
 
 void hns3_clean_tx_ring(struct hns3_enet_ring *ring)
@@ -2534,28 +2541,12 @@ void hns3_clean_tx_ring(struct hns3_enet_ring *ring)
 	struct hns3_nic_priv *priv = netdev_priv(netdev);
 	struct netdev_queue *dev_queue;
 	int bytes, pkts;
-	int head;
-
-	head = readl_relaxed(ring->tqp->io_base + HNS3_RING_TX_RING_HEAD_REG);
-
-	if (is_ring_empty(ring) || head == ring->next_to_clean)
-		return; /* no data to poll */
-
-	rmb(); /* Make sure head is ready before touch any data */
-
-	if (unlikely(!is_valid_clean_head(ring, head))) {
-		hns3_rl_err(netdev, "wrong head (%d, %d-%d)\n", head,
-			    ring->next_to_use, ring->next_to_clean);
-
-		u64_stats_update_begin(&ring->syncp);
-		ring->stats.io_err_cnt++;
-		u64_stats_update_end(&ring->syncp);
-		return;
-	}
 
 	bytes = 0;
 	pkts = 0;
-	hns3_nic_reclaim_desc(ring, head, &bytes, &pkts);
+
+	if (unlikely(!hns3_nic_reclaim_desc(ring, &bytes, &pkts)))
+		return;
 
 	ring->tqp_vector->tx_group.total_bytes += bytes;
 	ring->tqp_vector->tx_group.total_packets += pkts;
@@ -3714,6 +3705,7 @@ static void hns3_ring_get_cfg(struct hnae3_queue *q, struct hns3_nic_priv *priv,
 	ring->desc_num = desc_num;
 	ring->next_to_use = 0;
 	ring->next_to_clean = 0;
+	ring->last_to_use = 0;
 }
 
 static void hns3_queue_to_ring(struct hnae3_queue *tqp,
@@ -3793,6 +3785,7 @@ void hns3_fini_ring(struct hns3_enet_ring *ring)
 	ring->desc_cb = NULL;
 	ring->next_to_clean = 0;
 	ring->next_to_use = 0;
+	ring->last_to_use = 0;
 	ring->pending_buf = 0;
 	if (ring->skb) {
 		dev_kfree_skb_any(ring->skb);
@@ -4310,6 +4303,7 @@ int hns3_nic_reset_all_ring(struct hnae3_handle *h)
 		hns3_clear_tx_ring(&priv->ring[i]);
 		priv->ring[i].next_to_clean = 0;
 		priv->ring[i].next_to_use = 0;
+		priv->ring[i].last_to_use = 0;
 
 		rx_ring = &priv->ring[i + h->kinfo.num_tqps];
 		hns3_init_ring_hw(rx_ring);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
index f40738c..876dc09 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
@@ -344,7 +344,6 @@ enum hns3_pkt_ol4type {
 };
 
 struct ring_stats {
-	u64 io_err_cnt;
 	u64 sw_err_cnt;
 	u64 seg_pkt_cnt;
 	union {
@@ -397,8 +396,10 @@ struct hns3_enet_ring {
 	 * next_to_use
 	 */
 	int next_to_clean;
-
-	u32 pull_len; /* head length for current packet */
+	union {
+		int last_to_use; /* last idx used by xmit */
+		u32 pull_len; /* memcpy len for current rx packet */
+	};
 	u32 frag_num;
 	void *va; /* first buffer address for current packet */
 
@@ -513,11 +514,6 @@ static inline int ring_space(struct hns3_enet_ring *ring)
 			(begin - end)) - 1;
 }
 
-static inline int is_ring_empty(struct hns3_enet_ring *ring)
-{
-	return ring->next_to_use == ring->next_to_clean;
-}
-
 static inline u32 hns3_read_reg(void __iomem *base, u32 reg)
 {
 	return readl(base + reg);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
index 97ad68b..f402c39 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
@@ -27,7 +26,6 @@ struct hns3_sfp_type {
 
 static const struct hns3_stats hns3_txq_stats[] = {
 	/* Tx per-queue statistics */
-	HNS3_TQP_STAT("io_err_cnt", io_err_cnt),
 	HNS3_TQP_STAT("dropped", sw_err_cnt),
HNS3_TQP_STAT("seg_pkt_cnt", seg_pkt_cnt), HNS3_TQP_STAT("packets", tx_pkts), @@ -46,7 +45,6 @@ static const struct hns3_stats hns3_txq_stats[] = { static const struct hns3_stats hns3_rxq_stats[] = { /* Rx per-queue statistics */ - HNS3_TQP_STAT("io_err_cnt", io_err_cnt), HNS3_TQP_STAT("dropped", sw_err_cnt), HNS3_TQP_STAT("seg_pkt_cnt", seg_pkt_cnt), HNS3_TQP_STAT("packets", rx_pkts), From patchwork Wed Sep 16 09:33:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 260788 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6443DC43461 for ; Wed, 16 Sep 2020 09:37:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 38DD42220F for ; Wed, 16 Sep 2020 09:37:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726769AbgIPJg5 (ORCPT ); Wed, 16 Sep 2020 05:36:57 -0400 Received: from szxga06-in.huawei.com ([45.249.212.32]:53900 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726438AbgIPJgv (ORCPT ); Wed, 16 Sep 2020 05:36:51 -0400 Received: from DGGEMS412-HUB.china.huawei.com (unknown [172.30.72.60]) by Forcepoint Email with ESMTP id 793A186F192B503E83E8; Wed, 16 Sep 2020 17:36:48 +0800 (CST) Received: from localhost.localdomain (10.69.192.56) by DGGEMS412-HUB.china.huawei.com (10.3.19.212) with Microsoft SMTP Server id 14.3.487.0; Wed, 16 Sep 2020 17:36:39 +0800 From: Huazhong Tan To: CC: , , , , , , , Yunsheng Lin , Huazhong Tan Subject: [PATCH V2 net-next 4/6] net: hns3: optimize the rx clean process Date: Wed, 16 Sep 2020 17:33:48 +0800 Message-ID: <1600248830-59477-5-git-send-email-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1600248830-59477-1-git-send-email-tanhuazhong@huawei.com> References: <1600248830-59477-1-git-send-email-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-CFilter-Loop: Reflected Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Yunsheng Lin Currently HNS3_RING_RX_RING_FBDNUM_REG register is read to determine how many rx desc can be cleaned. To avoid the register read operation in the critical data path, use the valid bit in the rx desc to determine if a specific rx desc can be cleaned. The hns3 driver clear valid bit in the rx desc before notifying the rx desc to the hw, and hw will only set the valid bit of the rx desc after corresponding buffer is filled with packet data and other field in the rx desc is set accordingly. Add hns3_rx_ring_move_fw() function to clear the valid bit in the rx desc before moving rx ring's next_to_clean forward to avoid double cleaning a rx desc, also add a dma_rmb() barrier in hns3_handle_rx_bd() to make sure valid bit is set before reading other field in the rx desc. 
Signed-off-by: Yunsheng Lin
Signed-off-by: Huazhong Tan
---
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 61 +++++++++++++------------
 1 file changed, 31 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 2db6c03..8490754 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -2845,6 +2845,16 @@ static bool hns3_parse_vlan_tag(struct hns3_enet_ring *ring,
 	}
 }
 
+static void hns3_rx_ring_move_fw(struct hns3_enet_ring *ring)
+{
+	ring->desc[ring->next_to_clean].rx.bd_base_info &=
+		cpu_to_le32(~BIT(HNS3_RXD_VLD_B));
+	ring->next_to_clean += 1;
+
+	if (unlikely(ring->next_to_clean == ring->desc_num))
+		ring->next_to_clean = 0;
+}
+
 static int hns3_alloc_skb(struct hns3_enet_ring *ring, unsigned int length,
 			  unsigned char *va)
 {
@@ -2880,7 +2890,7 @@ static int hns3_alloc_skb(struct hns3_enet_ring *ring, unsigned int length,
 			__page_frag_cache_drain(desc_cb->priv,
 						desc_cb->pagecnt_bias);
 
-		ring_ptr_move_fw(ring, next_to_clean);
+		hns3_rx_ring_move_fw(ring);
 		return 0;
 	}
 	u64_stats_update_begin(&ring->syncp);
@@ -2891,7 +2901,7 @@ static int hns3_alloc_skb(struct hns3_enet_ring *ring, unsigned int length,
 	__skb_put(skb, ring->pull_len);
 	hns3_nic_reuse_page(skb, ring->frag_num++, ring, ring->pull_len,
 			    desc_cb);
-	ring_ptr_move_fw(ring, next_to_clean);
+	hns3_rx_ring_move_fw(ring);
 
 	return 0;
 }
@@ -2946,7 +2956,7 @@ static int hns3_add_frag(struct hns3_enet_ring *ring)
 		hns3_nic_reuse_page(skb, ring->frag_num++, ring, 0, desc_cb);
 		trace_hns3_rx_desc(ring);
-		ring_ptr_move_fw(ring, next_to_clean);
+		hns3_rx_ring_move_fw(ring);
 		ring->pending_buf++;
 	} while (!(bd_base_info & BIT(HNS3_RXD_FE_B)));
 
@@ -3088,32 +3098,32 @@ static int hns3_handle_rx_bd(struct hns3_enet_ring *ring)
 
 	prefetch(desc);
 
-	length = le16_to_cpu(desc->rx.size);
-	bd_base_info = le32_to_cpu(desc->rx.bd_base_info);
+	if (!skb) {
+		bd_base_info = le32_to_cpu(desc->rx.bd_base_info);
+
+		/* Check valid BD */
+		if (unlikely(!(bd_base_info & BIT(HNS3_RXD_VLD_B))))
+			return -ENXIO;
 
-	/* Check valid BD */
-	if (unlikely(!(bd_base_info & BIT(HNS3_RXD_VLD_B))))
-		return -ENXIO;
+		dma_rmb();
+		length = le16_to_cpu(desc->rx.size);
 
-	if (!skb) {
 		ring->va = desc_cb->buf + desc_cb->page_offset;
 
 		dma_sync_single_for_cpu(ring_to_dev(ring),
 				desc_cb->dma + desc_cb->page_offset,
 				hns3_buf_size(ring),
 				DMA_FROM_DEVICE);
-	}
-	/* Prefetch first cache line of first page
 	 * Idea is to cache few bytes of the header of the packet. Our L1 Cache
 	 * line size is 64B so need to prefetch twice to make it 128B. But in
 	 * actual we can have greater size of caches with 128B Level 1 cache
 	 * lines. In such a case, single fetch would suffice to cache in the
 	 * relevant part of the header.
 	 */
-	net_prefetch(ring->va);
+	/* Prefetch first cache line of first page.
+	 * Idea is to cache few bytes of the header of the packet.
+	 * Our L1 Cache line size is 64B so need to prefetch twice to make
+	 * it 128B. But in actual we can have greater size of caches with
+	 * 128B Level 1 cache lines. In such a case, single fetch would
+	 * suffice to cache in the relevant part of the header.
+	 */
+	net_prefetch(ring->va);
 
-	if (!skb) {
 		ret = hns3_alloc_skb(ring, length, ring->va);
 		skb = ring->skb;
@@ -3153,19 +3163,11 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
 #define RCB_NOF_ALLOC_RX_BUFF_ONCE 16
 	int unused_count = hns3_desc_unused(ring);
 	int recv_pkts = 0;
-	int recv_bds = 0;
-	int err, num;
+	int err;
 
-	num = readl_relaxed(ring->tqp->io_base + HNS3_RING_RX_RING_FBDNUM_REG);
-	num -= unused_count;
 	unused_count -= ring->pending_buf;
 
-	if (num <= 0)
-		goto out;
-
-	rmb(); /* Make sure num taken effect before the other data is touched */
-
-	while (recv_pkts < budget && recv_bds < num) {
+	while (recv_pkts < budget) {
 		/* Reuse or realloc buffers */
 		if (unused_count >= RCB_NOF_ALLOC_RX_BUFF_ONCE) {
 			hns3_nic_alloc_rx_buffers(ring, unused_count);
@@ -3183,7 +3185,6 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
 		recv_pkts++;
 	}
 
-	recv_bds += ring->pending_buf;
 	unused_count += ring->pending_buf;
 	ring->skb = NULL;
 	ring->pending_buf = 0;