From patchwork Fri Jun 28 10:39:13 2019
X-Patchwork-Submitter: Ilias Apalodimas
X-Patchwork-Id: 168058
Received: from apalos.lan (athedsl-4461147.home.otenet.gr.
[94.71.2.75]) by smtp.gmail.com with ESMTPSA id r5sm3397742wrg.10.2019.06.28.03.39.37 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Fri, 28 Jun 2019 03:39:39 -0700 (PDT) From: Ilias Apalodimas To: netdev@vger.kernel.org, jaswinder.singh@linaro.org Cc: ard.biesheuvel@linaro.org, bjorn.topel@intel.com, magnus.karlsson@intel.com, brouer@redhat.com, daniel@iogearbox.net, ast@kernel.org, makita.toshiaki@lab.ntt.co.jp, jakub.kicinski@netronome.com, john.fastabend@gmail.com, davem@davemloft.net, maciejromanfijalkowski@gmail.com, Ilias Apalodimas Subject: [PATCH 1/3, net-next] net: netsec: Use page_pool API Date: Fri, 28 Jun 2019 13:39:13 +0300 Message-Id: <1561718355-13919-2-git-send-email-ilias.apalodimas@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1561718355-13919-1-git-send-email-ilias.apalodimas@linaro.org> References: <1561718355-13919-1-git-send-email-ilias.apalodimas@linaro.org> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Use page_pool and it's DMA mapping capabilities for Rx buffers instead of netdev/napi_alloc_frag() Although this will result in a slight performance penalty on small sized packets (~10%) the use of the API will allow to easily add XDP support. The penalty won't be visible in network testing i.e ipef/netperf etc, it only happens during raw packet drops. Furthermore we intend to add recycling capabilities on the API in the future. Once the recycling is added the performance penalty will go away. The only 'real' penalty is the slightly increased memory usage, since we now allocate a page per packet instead of the amount of bytes we need + skb metadata (difference is roughly 2kb per packet). With a minimum of 4BG of RAM on the only SoC that has this NIC the extra memory usage is negligible (a bit more on 64K pages) Signed-off-by: Ilias Apalodimas --- drivers/net/ethernet/socionext/Kconfig | 1 + drivers/net/ethernet/socionext/netsec.c | 121 +++++++++++++++--------- 2 files changed, 75 insertions(+), 47 deletions(-) -- 2.20.1 Acked-by: Jesper Dangaard Brouer diff --git a/drivers/net/ethernet/socionext/Kconfig b/drivers/net/ethernet/socionext/Kconfig index 25f18be27423..95e99baf3f45 100644 --- a/drivers/net/ethernet/socionext/Kconfig +++ b/drivers/net/ethernet/socionext/Kconfig @@ -26,6 +26,7 @@ config SNI_NETSEC tristate "Socionext NETSEC ethernet support" depends on (ARCH_SYNQUACER || COMPILE_TEST) && OF select PHYLIB + select PAGE_POOL select MII ---help--- Enable to add support for the SocioNext NetSec Gigabit Ethernet diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c index 48fd7448b513..e653b24d0534 100644 --- a/drivers/net/ethernet/socionext/netsec.c +++ b/drivers/net/ethernet/socionext/netsec.c @@ -11,6 +11,7 @@ #include #include +#include #include #define NETSEC_REG_SOFT_RST 0x104 @@ -235,7 +236,8 @@ #define DESC_NUM 256 #define NETSEC_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN) -#define NETSEC_RX_BUF_SZ 1536 +#define NETSEC_RX_BUF_NON_DATA (NETSEC_SKB_PAD + \ + SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) #define DESC_SZ sizeof(struct netsec_de) @@ -258,6 +260,8 @@ struct netsec_desc_ring { struct netsec_desc *desc; void *vaddr; u16 head, tail; + struct page_pool *page_pool; + struct xdp_rxq_info xdp_rxq; }; struct netsec_priv { @@ -673,33 +677,27 @@ static void netsec_process_tx(struct netsec_priv *priv) } static void *netsec_alloc_rx_data(struct netsec_priv *priv, - dma_addr_t *dma_handle, u16 *desc_len, - bool napi) + dma_addr_t *dma_handle, 
u16 *desc_len) + { - size_t total_len = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); - size_t payload_len = NETSEC_RX_BUF_SZ; - dma_addr_t mapping; - void *buf; - total_len += SKB_DATA_ALIGN(payload_len + NETSEC_SKB_PAD); + struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX]; + struct page *page; - buf = napi ? napi_alloc_frag(total_len) : netdev_alloc_frag(total_len); - if (!buf) + page = page_pool_dev_alloc_pages(dring->page_pool); + if (!page) return NULL; - mapping = dma_map_single(priv->dev, buf + NETSEC_SKB_PAD, payload_len, - DMA_FROM_DEVICE); - if (unlikely(dma_mapping_error(priv->dev, mapping))) - goto err_out; - - *dma_handle = mapping; - *desc_len = payload_len; - - return buf; + /* page_pool API will map the whole page, skip + * NET_SKB_PAD + NET_IP_ALIGN for the payload + */ + *dma_handle = page_pool_get_dma_addr(page) + NETSEC_SKB_PAD; + /* make sure the incoming payload fits in the page with the needed + * NET_SKB_PAD + NET_IP_ALIGN + skb_shared_info + */ + *desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA; -err_out: - skb_free_frag(buf); - return NULL; + return page_address(page); } static void netsec_rx_fill(struct netsec_priv *priv, u16 from, u16 num) @@ -728,10 +726,10 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget) u16 idx = dring->tail; struct netsec_de *de = dring->vaddr + (DESC_SZ * idx); struct netsec_desc *desc = &dring->desc[idx]; + struct page *page = virt_to_page(desc->addr); u16 pkt_len, desc_len; dma_addr_t dma_handle; void *buf_addr; - u32 truesize; if (de->attr & (1U << NETSEC_RX_PKT_OWN_FIELD)) { /* reading the register clears the irq */ @@ -766,8 +764,8 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget) /* allocate a fresh buffer and map it to the hardware. * This will eventually replace the old buffer in the hardware */ - buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len, - true); + buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len); + if (unlikely(!buf_addr)) break; @@ -775,22 +773,19 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget) DMA_FROM_DEVICE); prefetch(desc->addr); - truesize = SKB_DATA_ALIGN(desc->len + NETSEC_SKB_PAD) + - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); - skb = build_skb(desc->addr, truesize); + skb = build_skb(desc->addr, desc->len + NETSEC_RX_BUF_NON_DATA); if (unlikely(!skb)) { - /* free the newly allocated buffer, we are not going to - * use it + /* If skb fails recycle_direct will either unmap and + * free the page or refill the cache depending on the + * cache state. Since we paid the allocation cost if + * building an skb fails try to put the page into cache */ - dma_unmap_single(priv->dev, dma_handle, desc_len, - DMA_FROM_DEVICE); - skb_free_frag(buf_addr); + page_pool_recycle_direct(dring->page_pool, page); netif_err(priv, drv, priv->ndev, "rx failed to build skb\n"); break; } - dma_unmap_single_attrs(priv->dev, desc->dma_addr, desc->len, - DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); + page_pool_release_page(dring->page_pool, page); /* Update the descriptor with the new buffer we allocated */ desc->len = desc_len; @@ -980,21 +975,26 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id) if (!dring->vaddr || !dring->desc) return; - for (idx = 0; idx < DESC_NUM; idx++) { desc = &dring->desc[idx]; if (!desc->addr) continue; - dma_unmap_single(priv->dev, desc->dma_addr, desc->len, - id == NETSEC_RING_RX ? 
DMA_FROM_DEVICE : - DMA_TO_DEVICE); - if (id == NETSEC_RING_RX) - skb_free_frag(desc->addr); - else if (id == NETSEC_RING_TX) + if (id == NETSEC_RING_RX) { + struct page *page = virt_to_page(desc->addr); + + page_pool_put_page(dring->page_pool, page, false); + } else if (id == NETSEC_RING_TX) { + dma_unmap_single(priv->dev, desc->dma_addr, desc->len, + DMA_TO_DEVICE); dev_kfree_skb(desc->skb); + } } + /* Rx is currently using page_pool */ + if (xdp_rxq_info_is_reg(&dring->xdp_rxq)) + xdp_rxq_info_unreg(&dring->xdp_rxq); + memset(dring->desc, 0, sizeof(struct netsec_desc) * DESC_NUM); memset(dring->vaddr, 0, DESC_SZ * DESC_NUM); @@ -1059,7 +1059,23 @@ static void netsec_setup_tx_dring(struct netsec_priv *priv) static int netsec_setup_rx_dring(struct netsec_priv *priv) { struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX]; - int i; + struct page_pool_params pp_params = { 0 }; + int i, err; + + pp_params.order = 0; + /* internal DMA mapping in page_pool */ + pp_params.flags = PP_FLAG_DMA_MAP; + pp_params.pool_size = DESC_NUM; + pp_params.nid = cpu_to_node(0); + pp_params.dev = priv->dev; + pp_params.dma_dir = DMA_FROM_DEVICE; + + dring->page_pool = page_pool_create(&pp_params); + if (IS_ERR(dring->page_pool)) { + err = PTR_ERR(dring->page_pool); + dring->page_pool = NULL; + goto err_out; + } for (i = 0; i < DESC_NUM; i++) { struct netsec_desc *desc = &dring->desc[i]; @@ -1067,10 +1083,10 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv) void *buf; u16 len; - buf = netsec_alloc_rx_data(priv, &dma_handle, &len, - false); + buf = netsec_alloc_rx_data(priv, &dma_handle, &len); + if (!buf) { - netsec_uninit_pkt_dring(priv, NETSEC_RING_RX); + err = -ENOMEM; goto err_out; } desc->dma_addr = dma_handle; @@ -1079,11 +1095,22 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv) } netsec_rx_fill(priv, 0, DESC_NUM); + err = xdp_rxq_info_reg(&dring->xdp_rxq, priv->ndev, 0); + if (err) + goto err_out; + + err = xdp_rxq_info_reg_mem_model(&dring->xdp_rxq, MEM_TYPE_PAGE_POOL, + dring->page_pool); + if (err) { + page_pool_free(dring->page_pool); + goto err_out; + } return 0; err_out: - return -ENOMEM; + netsec_uninit_pkt_dring(priv, NETSEC_RING_RX); + return err; } static int netsec_netdev_load_ucode_region(struct netsec_priv *priv, u32 reg, From patchwork Fri Jun 28 10:39:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ilias Apalodimas X-Patchwork-Id: 168059 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3521683ilk; Fri, 28 Jun 2019 03:39:46 -0700 (PDT) X-Google-Smtp-Source: APXvYqzHLahPpFhPwrdg/2FUL5XsW3mbqaRsiUfDBocURW10DKSXcPA4a/11SAb+ve9ivZt/qYim X-Received: by 2002:a17:902:6848:: with SMTP id f8mr10718557pln.102.1561718386599; Fri, 28 Jun 2019 03:39:46 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1561718386; cv=none; d=google.com; s=arc-20160816; b=yrPaG0JTqHJt0grDXsVv8a5Gngcnpzhsw7BfCMew5sCS3hhDwoNiLVLuCMLh2BBP3y Z1js6OpWiYNMqyOSG2p5in/LN6lBZXQxJCZze5GYPXfd6pEdWp5FF5tGyBEG3h3etEcq b/73QZDBMKN11sm7Uo/wFkwzllZVvwAHorFcV0ZpzfVfbwt+91S7aqlbjHfhpGKxjPbE 49H37VEFshS9izpcKuUFB6XjhIgcup1i9e58yBXccLkOt73AYbFL1YjzBq703RpAHus5 unBedHZTsiU0W5bn6JxOgzoqwIQoYEn5kGurWDKetPR7PaVdAQgXBz0OxIZzMrb/eI8C LJ6A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=Z2LIGKls7zdQKVPI0zcAfuwyrWIatXu4izZ3CIRS768=; 
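
For reference, the Rx-buffer pattern patch 1/3 switches to looks roughly like the sketch below. This is not part of the patch; the my_* names are hypothetical, and the page_pool calls are the ones the patch itself uses. The pool is created once per Rx ring with PP_FLAG_DMA_MAP, so every page comes back already DMA-mapped, and the per-packet refill only reads back the stored DMA address and skips the driver headroom.

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>
#include <net/page_pool.h>

/* Hypothetical driver state, for illustration only */
struct my_rx_ring {
	struct page_pool *page_pool;
};

/* Create the pool once per Rx ring; PP_FLAG_DMA_MAP makes the pool map
 * every page it hands out, so the driver never calls dma_map_single()
 * for Rx buffers.
 */
static int my_rx_ring_init(struct my_rx_ring *ring, struct device *dev,
			   unsigned int pool_size)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP,
		.pool_size	= pool_size,
		.nid		= cpu_to_node(0),
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	ring->page_pool = page_pool_create(&pp_params);
	return IS_ERR(ring->page_pool) ? PTR_ERR(ring->page_pool) : 0;
}

/* Per-packet refill: no dma_map_single()/dma_mapping_error() any more,
 * just read back the DMA address the pool stored and reserve headroom
 * plus room for skb_shared_info at the end of the page.
 */
static void *my_rx_alloc(struct my_rx_ring *ring, dma_addr_t *dma_handle,
			 u16 *desc_len)
{
	struct page *page = page_pool_dev_alloc_pages(ring->page_pool);

	if (!page)
		return NULL;

	*dma_handle = page_pool_get_dma_addr(page) + NET_SKB_PAD + NET_IP_ALIGN;
	*desc_len = PAGE_SIZE - NET_SKB_PAD - NET_IP_ALIGN -
		    SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	return page_address(page);
}

When build_skb() fails, the page goes back via page_pool_recycle_direct() instead of being unmapped and freed, which is what lets the pool refill its cache as the patch's error path describes.
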
Received: from apalos.lan (athedsl-4461147.home.otenet.gr.
[94.71.2.75]) by smtp.gmail.com with ESMTPSA id r5sm3397742wrg.10.2019.06.28.03.39.39 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Fri, 28 Jun 2019 03:39:40 -0700 (PDT) From: Ilias Apalodimas To: netdev@vger.kernel.org, jaswinder.singh@linaro.org Cc: ard.biesheuvel@linaro.org, bjorn.topel@intel.com, magnus.karlsson@intel.com, brouer@redhat.com, daniel@iogearbox.net, ast@kernel.org, makita.toshiaki@lab.ntt.co.jp, jakub.kicinski@netronome.com, john.fastabend@gmail.com, davem@davemloft.net, maciejromanfijalkowski@gmail.com, Ilias Apalodimas Subject: [PATCH 2/3, net-next] net: page_pool: add helper function for retrieving dma direction Date: Fri, 28 Jun 2019 13:39:14 +0300 Message-Id: <1561718355-13919-3-git-send-email-ilias.apalodimas@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1561718355-13919-1-git-send-email-ilias.apalodimas@linaro.org> References: <1561718355-13919-1-git-send-email-ilias.apalodimas@linaro.org> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Since the dma direction is stored in page pool params, offer an API helper for driver that choose not to keep track of it locally Signed-off-by: Ilias Apalodimas --- include/net/page_pool.h | 9 +++++++++ 1 file changed, 9 insertions(+) -- 2.20.1 Acked-by: Jesper Dangaard Brouer diff --git a/include/net/page_pool.h b/include/net/page_pool.h index f07c518ef8a5..ee9c871d2043 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -112,6 +112,15 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool) return page_pool_alloc_pages(pool, gfp); } +/* get the stored dma direction. A driver might decide to treat this locally and + * avoid the extra cache line from page_pool to determine the direction + */ +static +inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool) +{ + return pool->p.dma_dir; +} + struct page_pool *page_pool_create(const struct page_pool_params *params); void __page_pool_free(struct page_pool *pool); From patchwork Fri Jun 28 10:39:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ilias Apalodimas X-Patchwork-Id: 168060 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp3521738ilk; Fri, 28 Jun 2019 03:39:49 -0700 (PDT) X-Google-Smtp-Source: APXvYqy6De4TVWaeew74n1z1D6qjsNfkiJkitvzcJRlxK0m8HYOibOoNvTbruxUAe7lMIuafUCuq X-Received: by 2002:a17:902:f216:: with SMTP id gn22mr10603278plb.118.1561718389853; Fri, 28 Jun 2019 03:39:49 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1561718389; cv=none; d=google.com; s=arc-20160816; b=sKloMAxkBWACU+rVHRSQZI0YfA0d8cHixuA5FCyDms3MNthmFohQgVOM1cDAcvA7wG 752Rn8KKgovVjcWGnrqqm8yFFUgCyTxV1tr41eG1DN0ajdS8QWztFzW/Ch4IrXJViVwT vdlyVPyc/VIv59X/n84JVoYo3O/gJGs1pnofvfpzH4rCCZ6WNQzS9vRUjhG4zeu0RQxd nWLzh7mss2c+xHFsoUCRLlkS3/BWWG+OSAndGLGXM9TOVwU58ev+0eyhBovA92izhp7J tB0AcOXnfRngTYS+IJfPTpE3WMJsr3KRb8FSXsPIsGCHebATz9BS83HXSnslmCqD/dYD 9FnQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=Ti5t6+B7qATr/1F7BjRU8Ci0tFq+LYPMBgPHvP5lBAM=; b=usNKbfcNhAybStFEDIyS+c+6orXrN1OIJCKLdRF35XwH55bsS4OsbroBKrzQt5nsiE KKImnBpkwu2OW1bLVokCJN1vAOvOYDocD/I7CcLIRL5ngdr7AhMXp5EUeycl2Ji/ksZC 739MAzf4yOSGO7qFejTjSBns10ZO+YJFXaJyJtXvwA8mw3pM3DYzhregCwkdCTvz8dan mGPEjItsp8LswL1JOdA96ikNu+D6z/mnXVwhTGUnjjBV0bs4/FbJOTh1yzF8MQOS+nJj 
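
A short usage sketch for the new helper follows (hypothetical my_* name; the page_pool and DMA calls are the ones visible in patches 1/3 and 3/3). A driver that does not cache the DMA direction locally can fetch it from the pool right before syncing a buffer, which is what the netsec XDP_TX path in patch 3/3 does.

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* Sync a page_pool-backed Rx buffer for device access without the
 * driver keeping its own copy of the DMA direction.
 */
static void my_sync_buf_for_device(struct device *dev, struct page_pool *pool,
				   struct page *page, unsigned int headroom,
				   unsigned int len)
{
	enum dma_data_direction dir = page_pool_get_dma_dir(pool);
	dma_addr_t dma = page_pool_get_dma_addr(page) + headroom;

	dma_sync_single_for_device(dev, dma, len, dir);
}
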
Received: from apalos.lan (athedsl-4461147.home.otenet.gr.
[94.71.2.75]) by smtp.gmail.com with ESMTPSA id r5sm3397742wrg.10.2019.06.28.03.39.41 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Fri, 28 Jun 2019 03:39:42 -0700 (PDT) From: Ilias Apalodimas To: netdev@vger.kernel.org, jaswinder.singh@linaro.org Cc: ard.biesheuvel@linaro.org, bjorn.topel@intel.com, magnus.karlsson@intel.com, brouer@redhat.com, daniel@iogearbox.net, ast@kernel.org, makita.toshiaki@lab.ntt.co.jp, jakub.kicinski@netronome.com, john.fastabend@gmail.com, davem@davemloft.net, maciejromanfijalkowski@gmail.com, Ilias Apalodimas Subject: [PATCH 3/3, net-next] net: netsec: add XDP support Date: Fri, 28 Jun 2019 13:39:15 +0300 Message-Id: <1561718355-13919-4-git-send-email-ilias.apalodimas@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1561718355-13919-1-git-send-email-ilias.apalodimas@linaro.org> References: <1561718355-13919-1-git-send-email-ilias.apalodimas@linaro.org> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org The interface only supports 1 Tx queue so locking is introduced on the Tx queue if XDP is enabled to make sure .ndo_start_xmit and .ndo_xdp_xmit won't corrupt Tx ring - Performance (SMMU off) Benchmark XDP_SKB XDP_DRV xdp1 291kpps 344kpps rxdrop 282kpps 342kpps - Performance (SMMU on) Benchmark XDP_SKB XDP_DRV xdp1 167kpps 324kpps rxdrop 164kpps 323kpps Signed-off-by: Ilias Apalodimas --- drivers/net/ethernet/socionext/netsec.c | 361 ++++++++++++++++++++++-- 1 file changed, 334 insertions(+), 27 deletions(-) -- 2.20.1 diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c index e653b24d0534..d200d47df1b4 100644 --- a/drivers/net/ethernet/socionext/netsec.c +++ b/drivers/net/ethernet/socionext/netsec.c @@ -9,6 +9,9 @@ #include #include #include +#include +#include +#include #include #include @@ -236,23 +239,41 @@ #define DESC_NUM 256 #define NETSEC_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN) -#define NETSEC_RX_BUF_NON_DATA (NETSEC_SKB_PAD + \ +#define NETSEC_RXBUF_HEADROOM (max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + \ + NET_IP_ALIGN) +#define NETSEC_RX_BUF_NON_DATA (NETSEC_RXBUF_HEADROOM + \ SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) #define DESC_SZ sizeof(struct netsec_de) #define NETSEC_F_NETSEC_VER_MAJOR_NUM(x) ((x) & 0xffff0000) +#define NETSEC_XDP_PASS 0 +#define NETSEC_XDP_CONSUMED BIT(0) +#define NETSEC_XDP_TX BIT(1) +#define NETSEC_XDP_REDIR BIT(2) +#define NETSEC_XDP_RX_OK (NETSEC_XDP_PASS | NETSEC_XDP_TX | NETSEC_XDP_REDIR) + enum ring_id { NETSEC_RING_TX = 0, NETSEC_RING_RX }; +enum buf_type { + TYPE_NETSEC_SKB = 0, + TYPE_NETSEC_XDP_TX, + TYPE_NETSEC_XDP_NDO, +}; + struct netsec_desc { - struct sk_buff *skb; + union { + struct sk_buff *skb; + struct xdp_frame *xdpf; + }; dma_addr_t dma_addr; void *addr; u16 len; + u8 buf_type; }; struct netsec_desc_ring { @@ -260,13 +281,17 @@ struct netsec_desc_ring { struct netsec_desc *desc; void *vaddr; u16 head, tail; + u16 xdp_xmit; /* netsec_xdp_xmit packets */ + bool is_xdp; struct page_pool *page_pool; struct xdp_rxq_info xdp_rxq; + spinlock_t lock; /* XDP tx queue locking */ }; struct netsec_priv { struct netsec_desc_ring desc_ring[NETSEC_RING_MAX]; struct ethtool_coalesce et_coalesce; + struct bpf_prog *xdp_prog; spinlock_t reglock; /* protect reg access */ struct napi_struct napi; phy_interface_t phy_interface; @@ -303,6 +328,11 @@ struct netsec_rx_pkt_info { bool err_flag; }; +static void netsec_set_tx_de(struct netsec_priv *priv, + struct netsec_desc_ring *dring, + const struct netsec_tx_pkt_ctrl 
*tx_ctrl, + const struct netsec_desc *desc, void *buf); + static void netsec_write(struct netsec_priv *priv, u32 reg_addr, u32 val) { writel(val, priv->ioaddr + reg_addr); @@ -609,6 +639,9 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv) int tail = dring->tail; int cnt = 0; + if (dring->is_xdp) + spin_lock(&dring->lock); + pkts = 0; bytes = 0; entry = dring->vaddr + DESC_SZ * tail; @@ -622,13 +655,23 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv) eop = (entry->attr >> NETSEC_TX_LAST) & 1; dma_rmb(); - dma_unmap_single(priv->dev, desc->dma_addr, desc->len, - DMA_TO_DEVICE); - if (eop) { - pkts++; + if (desc->buf_type == TYPE_NETSEC_SKB) + dma_unmap_single(priv->dev, desc->dma_addr, desc->len, + DMA_TO_DEVICE); + else if (desc->buf_type == TYPE_NETSEC_XDP_NDO) + dma_unmap_single(priv->dev, desc->dma_addr, + desc->len, DMA_TO_DEVICE); + + if (!eop) + goto next; + + if (desc->buf_type == TYPE_NETSEC_SKB) { bytes += desc->skb->len; dev_kfree_skb(desc->skb); + } else { + xdp_return_frame(desc->xdpf); } +next: /* clean up so netsec_uninit_pkt_dring() won't free the skb * again */ @@ -645,6 +688,8 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv) entry = dring->vaddr + DESC_SZ * tail; cnt++; } + if (dring->is_xdp) + spin_unlock(&dring->lock); if (!cnt) return false; @@ -688,12 +733,13 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv, if (!page) return NULL; - /* page_pool API will map the whole page, skip - * NET_SKB_PAD + NET_IP_ALIGN for the payload + /* We allocate the same buffer length for XDP and non-XDP cases. + * page_pool API will map the whole page, skip what's needed for + * network payloads and/or XDP */ - *dma_handle = page_pool_get_dma_addr(page) + NETSEC_SKB_PAD; - /* make sure the incoming payload fits in the page with the needed - * NET_SKB_PAD + NET_IP_ALIGN + skb_shared_info + *dma_handle = page_pool_get_dma_addr(page) + NETSEC_RXBUF_HEADROOM; + /* Make sure the incoming payload fits in the page for XDP and non-XDP + * cases and reserve enough space for headroom + skb_shared_info */ *desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA; @@ -714,21 +760,159 @@ static void netsec_rx_fill(struct netsec_priv *priv, u16 from, u16 num) } } +static void netsec_xdp_ring_tx_db(struct netsec_priv *priv, u16 pkts) +{ + if (likely(pkts)) + netsec_write(priv, NETSEC_REG_NRM_TX_PKTCNT, pkts); +} + +static void netsec_finalize_xdp_rx(struct netsec_priv *priv, u32 xdp_res, + u16 pkts) +{ + if (xdp_res & NETSEC_XDP_REDIR) + xdp_do_flush_map(); + + if (xdp_res & NETSEC_XDP_TX) + netsec_xdp_ring_tx_db(priv, pkts); +} + +/* The current driver only supports 1 Txq, this should run under spin_lock() */ +static u32 netsec_xdp_queue_one(struct netsec_priv *priv, + struct xdp_frame *xdpf, bool is_ndo) + +{ + struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX]; + struct page *page = virt_to_page(xdpf->data); + struct netsec_tx_pkt_ctrl tx_ctrl = {}; + struct netsec_desc tx_desc; + dma_addr_t dma_handle; + u16 filled; + + if (tx_ring->head >= tx_ring->tail) + filled = tx_ring->head - tx_ring->tail; + else + filled = tx_ring->head + DESC_NUM - tx_ring->tail; + + if (DESC_NUM - filled <= 1) + return NETSEC_XDP_CONSUMED; + + if (is_ndo) { + /* this is for ndo_xdp_xmit, the buffer needs mapping before + * sending + */ + dma_handle = dma_map_single(priv->dev, xdpf->data, xdpf->len, + DMA_TO_DEVICE); + if (dma_mapping_error(priv->dev, dma_handle)) + return NETSEC_XDP_CONSUMED; + tx_desc.buf_type = TYPE_NETSEC_XDP_NDO; + } else { + /* This is the 
device Rx buffer from page_pool. No need to remap + * just sync and send it + */ + struct netsec_desc_ring *rx_ring = + &priv->desc_ring[NETSEC_RING_RX]; + enum dma_data_direction dma_dir = + page_pool_get_dma_dir(rx_ring->page_pool); + + dma_handle = page_pool_get_dma_addr(page) + + NETSEC_RXBUF_HEADROOM; + dma_sync_single_for_device(priv->dev, dma_handle, xdpf->len, + dma_dir); + tx_desc.buf_type = TYPE_NETSEC_XDP_TX; + } + + tx_desc.dma_addr = dma_handle; + tx_desc.addr = xdpf->data; + tx_desc.len = xdpf->len; + + netsec_set_tx_de(priv, tx_ring, &tx_ctrl, &tx_desc, xdpf); + + return NETSEC_XDP_TX; +} + +static u32 netsec_xdp_xmit_back(struct netsec_priv *priv, struct xdp_buff *xdp) +{ + struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX]; + struct xdp_frame *xdpf = convert_to_xdp_frame(xdp); + u32 ret; + + if (unlikely(!xdpf)) + return NETSEC_XDP_CONSUMED; + + spin_lock(&tx_ring->lock); + ret = netsec_xdp_queue_one(priv, xdpf, false); + spin_unlock(&tx_ring->lock); + + return ret; +} + +static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog, + struct xdp_buff *xdp) +{ + u32 ret = NETSEC_XDP_PASS; + int err; + u32 act; + + act = bpf_prog_run_xdp(prog, xdp); + + switch (act) { + case XDP_PASS: + ret = NETSEC_XDP_PASS; + break; + case XDP_TX: + ret = netsec_xdp_xmit_back(priv, xdp); + if (ret != NETSEC_XDP_TX) + xdp_return_buff(xdp); + break; + case XDP_REDIRECT: + err = xdp_do_redirect(priv->ndev, xdp, prog); + if (!err) { + ret = NETSEC_XDP_REDIR; + } else { + ret = NETSEC_XDP_CONSUMED; + xdp_return_buff(xdp); + } + break; + default: + bpf_warn_invalid_xdp_action(act); + /* fall through */ + case XDP_ABORTED: + trace_xdp_exception(priv->ndev, prog, act); + /* fall through -- handle aborts by dropping packet */ + case XDP_DROP: + ret = NETSEC_XDP_CONSUMED; + xdp_return_buff(xdp); + break; + } + + return ret; +} + static int netsec_process_rx(struct netsec_priv *priv, int budget) { struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX]; struct net_device *ndev = priv->ndev; struct netsec_rx_pkt_info rx_info; - struct sk_buff *skb; + enum dma_data_direction dma_dir; + struct bpf_prog *xdp_prog; + struct sk_buff *skb = NULL; + u16 xdp_xmit = 0; + u32 xdp_act = 0; int done = 0; + rcu_read_lock(); + xdp_prog = READ_ONCE(priv->xdp_prog); + dma_dir = page_pool_get_dma_dir(dring->page_pool); + while (done < budget) { u16 idx = dring->tail; struct netsec_de *de = dring->vaddr + (DESC_SZ * idx); struct netsec_desc *desc = &dring->desc[idx]; struct page *page = virt_to_page(desc->addr); + u32 xdp_result = XDP_PASS; u16 pkt_len, desc_len; dma_addr_t dma_handle; + struct xdp_buff xdp; void *buf_addr; if (de->attr & (1U << NETSEC_RX_PKT_OWN_FIELD)) { @@ -770,10 +954,26 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget) break; dma_sync_single_for_cpu(priv->dev, desc->dma_addr, pkt_len, - DMA_FROM_DEVICE); + dma_dir); prefetch(desc->addr); + xdp.data_hard_start = desc->addr; + xdp.data = desc->addr + NETSEC_RXBUF_HEADROOM; + xdp_set_data_meta_invalid(&xdp); + xdp.data_end = xdp.data + pkt_len; + xdp.rxq = &dring->xdp_rxq; + + if (xdp_prog) { + xdp_result = netsec_run_xdp(priv, xdp_prog, &xdp); + if (xdp_result != NETSEC_XDP_PASS) { + xdp_act |= xdp_result; + if (xdp_result == NETSEC_XDP_TX) + xdp_xmit++; + goto next; + } + } skb = build_skb(desc->addr, desc->len + NETSEC_RX_BUF_NON_DATA); + if (unlikely(!skb)) { /* If skb fails recycle_direct will either unmap and * free the page or refill the cache depending on the @@ -787,27 +987,32 @@ 
static int netsec_process_rx(struct netsec_priv *priv, int budget) } page_pool_release_page(dring->page_pool, page); - /* Update the descriptor with the new buffer we allocated */ - desc->len = desc_len; - desc->dma_addr = dma_handle; - desc->addr = buf_addr; - - skb_reserve(skb, NETSEC_SKB_PAD); - skb_put(skb, pkt_len); + skb_reserve(skb, xdp.data - xdp.data_hard_start); + skb_put(skb, xdp.data_end - xdp.data); skb->protocol = eth_type_trans(skb, priv->ndev); if (priv->rx_cksum_offload_flag && rx_info.rx_cksum_result == NETSEC_RX_CKSUM_OK) skb->ip_summed = CHECKSUM_UNNECESSARY; - if (napi_gro_receive(&priv->napi, skb) != GRO_DROP) { +next: + if ((skb && napi_gro_receive(&priv->napi, skb) != GRO_DROP) || + xdp_result & NETSEC_XDP_RX_OK) { ndev->stats.rx_packets++; - ndev->stats.rx_bytes += pkt_len; + ndev->stats.rx_bytes += xdp.data_end - xdp.data; } + /* Update the descriptor with fresh buffers */ + desc->len = desc_len; + desc->dma_addr = dma_handle; + desc->addr = buf_addr; + netsec_rx_fill(priv, idx, 1); dring->tail = (dring->tail + 1) % DESC_NUM; } + netsec_finalize_xdp_rx(priv, xdp_act, xdp_xmit); + + rcu_read_unlock(); return done; } @@ -837,8 +1042,7 @@ static int netsec_napi_poll(struct napi_struct *napi, int budget) static void netsec_set_tx_de(struct netsec_priv *priv, struct netsec_desc_ring *dring, const struct netsec_tx_pkt_ctrl *tx_ctrl, - const struct netsec_desc *desc, - struct sk_buff *skb) + const struct netsec_desc *desc, void *buf) { int idx = dring->head; struct netsec_de *de; @@ -861,10 +1065,16 @@ static void netsec_set_tx_de(struct netsec_priv *priv, de->data_buf_addr_lw = lower_32_bits(desc->dma_addr); de->buf_len_info = (tx_ctrl->tcp_seg_len << 16) | desc->len; de->attr = attr; - dma_wmb(); + /* under spin_lock if using XDP */ + if (!dring->is_xdp) + dma_wmb(); dring->desc[idx] = *desc; - dring->desc[idx].skb = skb; + if (desc->buf_type == TYPE_NETSEC_SKB) + dring->desc[idx].skb = buf; + else if (desc->buf_type == TYPE_NETSEC_XDP_TX || + desc->buf_type == TYPE_NETSEC_XDP_NDO) + dring->desc[idx].xdpf = buf; /* move head ahead */ dring->head = (dring->head + 1) % DESC_NUM; @@ -915,8 +1125,12 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb, u16 tso_seg_len = 0; int filled; + if (dring->is_xdp) + spin_lock_bh(&dring->lock); filled = netsec_desc_used(dring); if (netsec_check_stop_tx(priv, filled)) { + if (dring->is_xdp) + spin_unlock_bh(&dring->lock); net_warn_ratelimited("%s %s Tx queue full\n", dev_name(priv->dev), ndev->name); return NETDEV_TX_BUSY; @@ -949,6 +1163,8 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb, tx_desc.dma_addr = dma_map_single(priv->dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE); if (dma_mapping_error(priv->dev, tx_desc.dma_addr)) { + if (dring->is_xdp) + spin_unlock_bh(&dring->lock); netif_err(priv, drv, priv->ndev, "%s: DMA mapping failed\n", __func__); ndev->stats.tx_dropped++; @@ -957,11 +1173,14 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb, } tx_desc.addr = skb->data; tx_desc.len = skb_headlen(skb); + tx_desc.buf_type = TYPE_NETSEC_SKB; skb_tx_timestamp(skb); netdev_sent_queue(priv->ndev, skb->len); netsec_set_tx_de(priv, dring, &tx_ctrl, &tx_desc, skb); + if (dring->is_xdp) + spin_unlock_bh(&dring->lock); netsec_write(priv, NETSEC_REG_NRM_TX_PKTCNT, 1); /* submit another tx */ return NETDEV_TX_OK; @@ -1042,6 +1261,7 @@ static int netsec_alloc_dring(struct netsec_priv *priv, enum ring_id id) static void netsec_setup_tx_dring(struct netsec_priv *priv) { struct netsec_desc_ring 
*dring = &priv->desc_ring[NETSEC_RING_TX]; + struct bpf_prog *xdp_prog = READ_ONCE(priv->xdp_prog); int i; for (i = 0; i < DESC_NUM; i++) { @@ -1054,11 +1274,18 @@ static void netsec_setup_tx_dring(struct netsec_priv *priv) */ de->attr = 1U << NETSEC_TX_SHIFT_OWN_FIELD; } + + if (xdp_prog) + dring->is_xdp = true; + else + dring->is_xdp = false; + } static int netsec_setup_rx_dring(struct netsec_priv *priv) { struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX]; + struct bpf_prog *xdp_prog = READ_ONCE(priv->xdp_prog); struct page_pool_params pp_params = { 0 }; int i, err; @@ -1068,7 +1295,7 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv) pp_params.pool_size = DESC_NUM; pp_params.nid = cpu_to_node(0); pp_params.dev = priv->dev; - pp_params.dma_dir = DMA_FROM_DEVICE; + pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE; dring->page_pool = page_pool_create(&pp_params); if (IS_ERR(dring->page_pool)) { @@ -1490,6 +1717,9 @@ static int netsec_netdev_init(struct net_device *ndev) if (ret) goto err2; + spin_lock_init(&priv->desc_ring[NETSEC_RING_TX].lock); + spin_lock_init(&priv->desc_ring[NETSEC_RING_RX].lock); + return 0; err2: netsec_free_dring(priv, NETSEC_RING_RX); @@ -1522,6 +1752,81 @@ static int netsec_netdev_ioctl(struct net_device *ndev, struct ifreq *ifr, return phy_mii_ioctl(ndev->phydev, ifr, cmd); } +static int netsec_xdp_xmit(struct net_device *ndev, int n, + struct xdp_frame **frames, u32 flags) +{ + struct netsec_priv *priv = netdev_priv(ndev); + struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX]; + int drops = 0; + int i; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) + return -EINVAL; + + spin_lock(&tx_ring->lock); + for (i = 0; i < n; i++) { + struct xdp_frame *xdpf = frames[i]; + int err; + + err = netsec_xdp_queue_one(priv, xdpf, true); + if (err != NETSEC_XDP_TX) { + xdp_return_frame_rx_napi(xdpf); + drops++; + } else { + tx_ring->xdp_xmit++; + } + } + spin_unlock(&tx_ring->lock); + + if (unlikely(flags & XDP_XMIT_FLUSH)) { + netsec_xdp_ring_tx_db(priv, tx_ring->xdp_xmit); + tx_ring->xdp_xmit = 0; + } + + return n - drops; +} + +static int netsec_xdp_setup(struct netsec_priv *priv, struct bpf_prog *prog, + struct netlink_ext_ack *extack) +{ + struct net_device *dev = priv->ndev; + struct bpf_prog *old_prog; + + /* For now just support only the usual MTU sized frames */ + if (prog && dev->mtu > 1500) { + NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP"); + return -EOPNOTSUPP; + } + + if (netif_running(dev)) + netsec_netdev_stop(dev); + + /* Detach old prog, if any */ + old_prog = xchg(&priv->xdp_prog, prog); + if (old_prog) + bpf_prog_put(old_prog); + + if (netif_running(dev)) + netsec_netdev_open(dev); + + return 0; +} + +static int netsec_xdp(struct net_device *ndev, struct netdev_bpf *xdp) +{ + struct netsec_priv *priv = netdev_priv(ndev); + + switch (xdp->command) { + case XDP_SETUP_PROG: + return netsec_xdp_setup(priv, xdp->prog, xdp->extack); + case XDP_QUERY_PROG: + xdp->prog_id = priv->xdp_prog ? 
priv->xdp_prog->aux->id : 0; + return 0; + default: + return -EINVAL; + } +} + static const struct net_device_ops netsec_netdev_ops = { .ndo_init = netsec_netdev_init, .ndo_uninit = netsec_netdev_uninit, @@ -1532,6 +1837,8 @@ static const struct net_device_ops netsec_netdev_ops = { .ndo_set_mac_address = eth_mac_addr, .ndo_validate_addr = eth_validate_addr, .ndo_do_ioctl = netsec_netdev_ioctl, + .ndo_xdp_xmit = netsec_xdp_xmit, + .ndo_bpf = netsec_xdp, }; static int netsec_of_probe(struct platform_device *pdev,
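
To make the Rx hook in patch 3/3 easier to follow, here is a reduced sketch of how each received page_pool buffer is handed to the BPF program before any skb is built. The my_* name is hypothetical; the xdp_buff fields and helpers are the ones the patch itself uses, and headroom corresponds to NETSEC_RXBUF_HEADROOM, i.e. max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + NET_IP_ALIGN.

#include <linux/filter.h>
#include <net/xdp.h>

/* Build an xdp_buff around the raw Rx buffer and run the attached program. */
static u32 my_run_xdp_on_buf(struct bpf_prog *prog, struct xdp_rxq_info *rxq,
			     void *buf, unsigned int headroom,
			     unsigned int pkt_len)
{
	struct xdp_buff xdp;

	xdp.data_hard_start = buf;
	xdp.data = buf + headroom;
	xdp_set_data_meta_invalid(&xdp);
	xdp.data_end = xdp.data + pkt_len;
	xdp.rxq = rxq;

	/* returns XDP_PASS / XDP_TX / XDP_REDIRECT / XDP_DROP / XDP_ABORTED */
	return bpf_prog_run_xdp(prog, &xdp);
}

On XDP_PASS the driver reuses xdp.data and xdp.data_end for skb_reserve()/skb_put(), so headroom adjustments made by the program are preserved; XDP_TX frames are queued under the Tx-ring spinlock and XDP_REDIRECT frames are handed to xdp_do_redirect(), with the Tx doorbell write and xdp_do_flush_map() deferred to netsec_finalize_xdp_rx() once per NAPI poll.
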