From patchwork Thu Dec 6 23:25:52 2018
X-Patchwork-Submitter: Jesper Dangaard Brouer
X-Patchwork-Id: 153071
Subject: [net-next PATCH RFC 5/8] net: mvneta: remove copybreak, prefetch and use build_skb
From: Jesper Dangaard Brouer
To: netdev@vger.kernel.org, "David S. Miller", Jesper Dangaard Brouer
Cc: Toke Høiland-Jørgensen, ard.biesheuvel@linaro.org, Jason Wang,
    ilias.apalodimas@linaro.org, Björn Töpel, w@1wt.eu, Saeed Mahameed,
    mykyta.iziumtsev@gmail.com, Daniel Borkmann, Alexei Starovoitov,
    Tariq Toukan
Date: Fri, 07 Dec 2018 00:25:52 +0100
Message-ID: <154413875238.21735.7746697931250893385.stgit@firesoul>
In-Reply-To: <154413868810.21735.572808840657728172.stgit@firesoul>
References: <154413868810.21735.572808840657728172.stgit@firesoul>
User-Agent: StGit/0.17.1-dirty
X-Mailing-List: netdev@vger.kernel.org

From: Ilias Apalodimas

The driver's memcpy for packets < 256 bytes and its recycle tricks are
no longer needed, as the previous patch introduced buffer recycling via
the page_pool API (although recycling is not fully activated by this
patch). With that in place, switch to build_skb().

This implicitly fixes a driver bug where, in the < 256 byte case, the
memory was copied before it had been DMA-synced for the CPU (that code
path is removed).

We also remove the data prefetch completely. The original driver had
the prefetch misplaced before any dma_sync operations took place, and
based on Jesper's analysis, even when the prefetch is placed after the
DMA sync ops it ends up hurting performance.
Signed-off-by: Ilias Apalodimas
Signed-off-by: Jesper Dangaard Brouer
---
 drivers/net/ethernet/marvell/mvneta.c |   81 +++++++++------------------------
 1 file changed, 22 insertions(+), 59 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 2354421fe96f..78f1fcdc1f00 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -643,7 +643,6 @@ static int txq_number = 8;
 static int rxq_def;
 
 static int rx_copybreak __read_mostly = 256;
-static int rx_header_size __read_mostly = 128;
 
 /* HW BM need that each port be identify by a unique ID */
 static int global_port_id;
@@ -1823,7 +1822,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
 	phys_addr = page_pool_get_dma_addr(page);
-	phys_addr += pp->rx_offset_correction;
+	phys_addr += pp->rx_offset_correction + NET_SKB_PAD;
 	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
 	return 0;
 }
@@ -1944,14 +1943,12 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		struct page *page;
 		dma_addr_t phys_addr;
 		u32 rx_status, index;
-		int rx_bytes, skb_size, copy_size;
-		int frag_num, frag_size, frag_offset;
+		int frag_num, frag_size;
+		int rx_bytes;
 
 		index = rx_desc - rxq->descs;
 		page = (struct page *)rxq->buf_virt_addr[index];
 		data = page_address(page);
-		/* Prefetch header */
-		prefetch(data);
 
 		phys_addr = rx_desc->buf_phys_addr;
 		rx_status = rx_desc->status;
@@ -1969,49 +1966,25 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			rx_bytes = rx_desc->data_size -
 				   (ETH_FCS_LEN + MVNETA_MH_SIZE);
-			/* Allocate small skb for each new packet */
-			skb_size = max(rx_copybreak, rx_header_size);
-			rxq->skb = netdev_alloc_skb_ip_align(dev, skb_size);
-			if (unlikely(!rxq->skb)) {
-				netdev_err(dev,
-					   "Can't allocate skb on queue %d\n",
-					   rxq->id);
-				dev->stats.rx_dropped++;
-				rxq->skb_alloc_err++;
-				continue;
-			}
-			copy_size = min(skb_size, rx_bytes);
-
-			/* Copy data from buffer to SKB, skip Marvell header */
-			memcpy(rxq->skb->data, data + MVNETA_MH_SIZE,
-			       copy_size);
-			skb_put(rxq->skb, copy_size);
-			rxq->left_size = rx_bytes - copy_size;
-			mvneta_rx_csum(pp, rx_status, rxq->skb);
-			if (rxq->left_size == 0) {
-				int size = copy_size + MVNETA_MH_SIZE;
-
-				dma_sync_single_range_for_cpu(dev->dev.parent,
-							      phys_addr, 0,
-							      size,
-							      DMA_FROM_DEVICE);
+			dma_sync_single_range_for_cpu(dev->dev.parent,
+						      phys_addr, 0,
+						      rx_bytes,
+						      DMA_FROM_DEVICE);
 
-				/* leave the descriptor and buffer untouched */
-			} else {
-				/* refill descriptor with new buffer later */
-				rx_desc->buf_phys_addr = 0;
+			rxq->skb = build_skb(data, PAGE_SIZE);
+			if (!rxq->skb)
+				break;
 
-				frag_num = 0;
-				frag_offset = copy_size + MVNETA_MH_SIZE;
-				frag_size = min(rxq->left_size,
-						(int)(PAGE_SIZE - frag_offset));
-				skb_add_rx_frag(rxq->skb, frag_num, page,
-						frag_offset, frag_size,
-						PAGE_SIZE);
-				page_pool_unmap_page(rxq->page_pool, page);
-				rxq->left_size -= frag_size;
-			}
+			rx_desc->buf_phys_addr = 0;
+			frag_num = 0;
+			skb_reserve(rxq->skb, MVNETA_MH_SIZE + NET_SKB_PAD);
+			skb_put(rxq->skb, rx_bytes < PAGE_SIZE ? rx_bytes :
+				PAGE_SIZE);
+			mvneta_rx_csum(pp, rx_status, rxq->skb);
+			page_pool_unmap_page(rxq->page_pool, page);
+			rxq->left_size = rx_bytes < PAGE_SIZE ? 0 : rx_bytes -
+					 PAGE_SIZE;
 		} else {
 			/* Middle or Last descriptor */
 			if (unlikely(!rxq->skb)) {
@@ -2019,24 +1992,14 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 					 rx_status);
 				continue;
 			}
-			if (!rxq->left_size) {
-				/* last descriptor has only FCS */
-				/* and can be discarded */
-				dma_sync_single_range_for_cpu(dev->dev.parent,
-							      phys_addr, 0,
-							      ETH_FCS_LEN,
-							      DMA_FROM_DEVICE);
-				/* leave the descriptor and buffer untouched */
-			} else {
+			if (rxq->left_size) {
 				/* refill descriptor with new buffer later */
 				rx_desc->buf_phys_addr = 0;
 
 				frag_num = skb_shinfo(rxq->skb)->nr_frags;
-				frag_offset = 0;
-				frag_size = min(rxq->left_size,
-						(int)(PAGE_SIZE - frag_offset));
+				frag_size = min(rxq->left_size, (int)PAGE_SIZE);
 				skb_add_rx_frag(rxq->skb, frag_num, page,
-						frag_offset, frag_size,
+						0, frag_size,
 						PAGE_SIZE);
 
 				page_pool_unmap_page(rxq->page_pool, page);