From patchwork Sat Jun 29 05:23:23 2019
X-Patchwork-Submitter: Ilias Apalodimas
X-Patchwork-Id: 168137
Delivered-To: patch@linaro.org
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: netdev@vger.kernel.org, jaswinder.singh@linaro.org
Cc: ard.biesheuvel@linaro.org, bjorn.topel@intel.com, magnus.karlsson@intel.com, brouer@redhat.com, daniel@iogearbox.net, ast@kernel.org, makita.toshiaki@lab.ntt.co.jp, jakub.kicinski@netronome.com, john.fastabend@gmail.com, davem@davemloft.net, maciejromanfijalkowski@gmail.com, Ilias Apalodimas
Subject: [net-next, PATCH 1/3, v2] net: netsec: Use page_pool API
Date: Sat, 29 Jun 2019 08:23:23 +0300
Message-Id: <1561785805-21647-2-git-send-email-ilias.apalodimas@linaro.org>
In-Reply-To: <1561785805-21647-1-git-send-email-ilias.apalodimas@linaro.org>
References: <1561785805-21647-1-git-send-email-ilias.apalodimas@linaro.org>
X-Mailing-List: netdev@vger.kernel.org

Use page_pool and its DMA mapping capabilities for Rx buffers instead of netdev/napi_alloc_frag().

Although this results in a slight performance penalty on small-sized packets (~10%), using the API will make it easy to add XDP support. The penalty is not visible in ordinary network testing (e.g. iperf/netperf); it only shows up during raw packet drops. Furthermore, we intend to add recycling capabilities to the API in the future.
Once recycling is added, the performance penalty will go away. The only 'real' penalty is slightly increased memory usage, since we now allocate a page per packet instead of just the bytes we need plus the skb metadata (the difference is roughly 2KB per packet). With a minimum of 4GB of RAM on the only SoC that has this NIC, the extra memory usage is negligible (a bit more with 64K pages).

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 drivers/net/ethernet/socionext/Kconfig  |   1 +
 drivers/net/ethernet/socionext/netsec.c | 126 +++++++++++++++---------
 2 files changed, 80 insertions(+), 47 deletions(-)

-- 
2.20.1

diff --git a/drivers/net/ethernet/socionext/Kconfig b/drivers/net/ethernet/socionext/Kconfig
index 25f18be27423..95e99baf3f45 100644
--- a/drivers/net/ethernet/socionext/Kconfig
+++ b/drivers/net/ethernet/socionext/Kconfig
@@ -26,6 +26,7 @@ config SNI_NETSEC
 	tristate "Socionext NETSEC ethernet support"
 	depends on (ARCH_SYNQUACER || COMPILE_TEST) && OF
 	select PHYLIB
+	select PAGE_POOL
 	select MII
 	---help---
 	  Enable to add support for the SocioNext NetSec Gigabit Ethernet
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 48fd7448b513..7791bff2f2af 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -11,6 +11,7 @@
 #include <...>
 #include <...>
+#include <net/page_pool.h>
 #include <...>
 
 #define NETSEC_REG_SOFT_RST 0x104
@@ -235,7 +236,8 @@
 #define DESC_NUM 256
 
 #define NETSEC_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#define NETSEC_RX_BUF_SZ 1536
+#define NETSEC_RX_BUF_NON_DATA (NETSEC_SKB_PAD + \
+				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
 
 #define DESC_SZ sizeof(struct netsec_de)
@@ -258,6 +260,8 @@ struct netsec_desc_ring {
 	struct netsec_desc *desc;
 	void *vaddr;
 	u16 head, tail;
+	struct page_pool *page_pool;
+	struct xdp_rxq_info xdp_rxq;
 };
 
 struct netsec_priv {
@@ -673,33 +677,27 @@ static void netsec_process_tx(struct netsec_priv *priv)
 }
 
 static void *netsec_alloc_rx_data(struct netsec_priv *priv,
-				  dma_addr_t *dma_handle, u16 *desc_len,
-				  bool napi)
+				  dma_addr_t *dma_handle, u16 *desc_len)
+
 {
-	size_t total_len = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-	size_t payload_len = NETSEC_RX_BUF_SZ;
-	dma_addr_t mapping;
-	void *buf;
-	total_len += SKB_DATA_ALIGN(payload_len + NETSEC_SKB_PAD);
+	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+	struct page *page;
 
-	buf = napi ? napi_alloc_frag(total_len) : netdev_alloc_frag(total_len);
-	if (!buf)
+	page = page_pool_dev_alloc_pages(dring->page_pool);
+	if (!page)
 		return NULL;
 
-	mapping = dma_map_single(priv->dev, buf + NETSEC_SKB_PAD, payload_len,
-				 DMA_FROM_DEVICE);
-	if (unlikely(dma_mapping_error(priv->dev, mapping)))
-		goto err_out;
-
-	*dma_handle = mapping;
-	*desc_len = payload_len;
-
-	return buf;
+	/* page_pool API will map the whole page, skip
+	 * NET_SKB_PAD + NET_IP_ALIGN for the payload
+	 */
+	*dma_handle = page_pool_get_dma_addr(page) + NETSEC_SKB_PAD;
+	/* make sure the incoming payload fits in the page with the needed
+	 * NET_SKB_PAD + NET_IP_ALIGN + skb_shared_info
+	 */
+	*desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;
 
-err_out:
-	skb_free_frag(buf);
-	return NULL;
+	return page_address(page);
 }
 
 static void netsec_rx_fill(struct netsec_priv *priv, u16 from, u16 num)
@@ -728,10 +726,10 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 		u16 idx = dring->tail;
 		struct netsec_de *de = dring->vaddr + (DESC_SZ * idx);
 		struct netsec_desc *desc = &dring->desc[idx];
+		struct page *page = virt_to_page(desc->addr);
 		u16 pkt_len, desc_len;
 		dma_addr_t dma_handle;
 		void *buf_addr;
-		u32 truesize;
 
 		if (de->attr & (1U << NETSEC_RX_PKT_OWN_FIELD)) {
 			/* reading the register clears the irq */
@@ -766,8 +764,8 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 		/* allocate a fresh buffer and map it to the hardware.
 		 * This will eventually replace the old buffer in the hardware
 		 */
-		buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len,
-						true);
+		buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len);
+
 		if (unlikely(!buf_addr))
 			break;
@@ -775,22 +773,19 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 					DMA_FROM_DEVICE);
 		prefetch(desc->addr);
 
-		truesize = SKB_DATA_ALIGN(desc->len + NETSEC_SKB_PAD) +
-			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-		skb = build_skb(desc->addr, truesize);
+		skb = build_skb(desc->addr, desc->len + NETSEC_RX_BUF_NON_DATA);
 		if (unlikely(!skb)) {
-			/* free the newly allocated buffer, we are not going to
-			 * use it
+			/* If skb fails recycle_direct will either unmap and
+			 * free the page or refill the cache depending on the
+			 * cache state. Since we paid the allocation cost if
+			 * building an skb fails try to put the page into cache
 			 */
-			dma_unmap_single(priv->dev, dma_handle, desc_len,
-					 DMA_FROM_DEVICE);
-			skb_free_frag(buf_addr);
+			page_pool_recycle_direct(dring->page_pool, page);
 			netif_err(priv, drv, priv->ndev,
 				  "rx failed to build skb\n");
 			break;
 		}
-		dma_unmap_single_attrs(priv->dev, desc->dma_addr, desc->len,
-				       DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
+		page_pool_release_page(dring->page_pool, page);
 
 		/* Update the descriptor with the new buffer we allocated */
 		desc->len = desc_len;
@@ -980,19 +975,31 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id)
 	if (!dring->vaddr || !dring->desc)
 		return;
-
 	for (idx = 0; idx < DESC_NUM; idx++) {
 		desc = &dring->desc[idx];
 		if (!desc->addr)
 			continue;
 
-		dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
-				 id == NETSEC_RING_RX ? DMA_FROM_DEVICE :
-							DMA_TO_DEVICE);
-		if (id == NETSEC_RING_RX)
-			skb_free_frag(desc->addr);
-		else if (id == NETSEC_RING_TX)
+		if (id == NETSEC_RING_RX) {
+			struct page *page = virt_to_page(desc->addr);
+
+			page_pool_put_page(dring->page_pool, page, false);
+		} else if (id == NETSEC_RING_TX) {
+			dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
+					 DMA_TO_DEVICE);
 			dev_kfree_skb(desc->skb);
+		}
+	}
+
+	/* Rx is currently using page_pool
+	 * since the pool is created during netsec_setup_rx_dring(), we need to
+	 * free the pool manually if the registration failed
+	 */
+	if (id == NETSEC_RING_RX) {
+		if (xdp_rxq_info_is_reg(&dring->xdp_rxq))
+			xdp_rxq_info_unreg(&dring->xdp_rxq);
+		else
+			page_pool_free(dring->page_pool);
 	}
 
 	memset(dring->desc, 0, sizeof(struct netsec_desc) * DESC_NUM);
@@ -1059,7 +1066,23 @@ static void netsec_setup_tx_dring(struct netsec_priv *priv)
 static int netsec_setup_rx_dring(struct netsec_priv *priv)
 {
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
-	int i;
+	struct page_pool_params pp_params = { 0 };
+	int i, err;
+
+	pp_params.order = 0;
+	/* internal DMA mapping in page_pool */
+	pp_params.flags = PP_FLAG_DMA_MAP;
+	pp_params.pool_size = DESC_NUM;
+	pp_params.nid = cpu_to_node(0);
+	pp_params.dev = priv->dev;
+	pp_params.dma_dir = DMA_FROM_DEVICE;
+
+	dring->page_pool = page_pool_create(&pp_params);
+	if (IS_ERR(dring->page_pool)) {
+		err = PTR_ERR(dring->page_pool);
+		dring->page_pool = NULL;
+		goto err_out;
+	}
 
 	for (i = 0; i < DESC_NUM; i++) {
 		struct netsec_desc *desc = &dring->desc[i];
@@ -1067,10 +1090,10 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
 		void *buf;
 		u16 len;
 
-		buf = netsec_alloc_rx_data(priv, &dma_handle, &len,
-					   false);
+		buf = netsec_alloc_rx_data(priv, &dma_handle, &len);
+
 		if (!buf) {
-			netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+			err = -ENOMEM;
 			goto err_out;
 		}
 		desc->dma_addr = dma_handle;
@@ -1079,11 +1102,20 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
 	}
 
 	netsec_rx_fill(priv, 0, DESC_NUM);
+	err = xdp_rxq_info_reg(&dring->xdp_rxq, priv->ndev, 0);
+	if (err)
+		goto err_out;
+
+	err = xdp_rxq_info_reg_mem_model(&dring->xdp_rxq, MEM_TYPE_PAGE_POOL,
+					 dring->page_pool);
+	if (err)
+		goto err_out;
 
 	return 0;
 
 err_out:
-	return -ENOMEM;
+	netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+	return err;
 }
 
 static int netsec_netdev_load_ucode_region(struct netsec_priv *priv, u32 reg,