From patchwork Tue Jun 25 15:06:18 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ilias Apalodimas
X-Patchwork-Id: 167731
From: Ilias Apalodimas
To: netdev@vger.kernel.org, jaswinder.singh@linaro.org
Cc: ard.biesheuvel@linaro.org, bjorn.topel@intel.com,
    magnus.karlsson@intel.com, brouer@redhat.com, daniel@iogearbox.net,
    ast@kernel.org, makita.toshiaki@lab.ntt.co.jp,
    jakub.kicinski@netronome.com, john.fastabend@gmail.com,
    davem@davemloft.net, Ilias Apalodimas
Subject: [RFC, PATCH 1/2, net-next] net: netsec: Use page_pool API
Date: Tue, 25 Jun 2019 18:06:18 +0300
Message-Id: <1561475179-7686-2-git-send-email-ilias.apalodimas@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1561475179-7686-1-git-send-email-ilias.apalodimas@linaro.org>
References: <1561475179-7686-1-git-send-email-ilias.apalodimas@linaro.org>

Use page_pool and its DMA mapping capabilities for Rx buffers instead
of netdev/napi_alloc_frag().

Although this will result in a slight performance penalty on small-sized
packets (~10%), using the API will allow us to easily add XDP support.
The penalty won't be visible in network testing (i.e. iperf/netperf
etc.); it only shows up during raw packet drops. Furthermore, we intend
to add recycling capabilities to the API in the future. Once recycling
is added, the performance penalty will go away.

The only 'real' penalty is the slightly increased memory usage, since we
now allocate a page per packet instead of just the bytes we need + skb
metadata (the difference is roughly 2KB per packet). With a minimum of
4GB of RAM on the only SoC that has this NIC, the extra memory usage is
negligible (a bit more on 64K pages).

Signed-off-by: Ilias Apalodimas
---
 drivers/net/ethernet/socionext/Kconfig  |   1 +
 drivers/net/ethernet/socionext/netsec.c | 121 +++++++++++++++---------
 2 files changed, 75 insertions(+), 47 deletions(-)

-- 
2.20.1

diff --git a/drivers/net/ethernet/socionext/Kconfig b/drivers/net/ethernet/socionext/Kconfig
index 25f18be27423..95e99baf3f45 100644
--- a/drivers/net/ethernet/socionext/Kconfig
+++ b/drivers/net/ethernet/socionext/Kconfig
@@ -26,6 +26,7 @@ config SNI_NETSEC
 	tristate "Socionext NETSEC ethernet support"
 	depends on (ARCH_SYNQUACER || COMPILE_TEST) && OF
 	select PHYLIB
+	select PAGE_POOL
 	select MII
 	---help---
 	  Enable to add support for the SocioNext NetSec Gigabit Ethernet

diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 48fd7448b513..e653b24d0534 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -11,6 +11,7 @@
 #include
 #include
+#include <net/page_pool.h>
 #include

 #define NETSEC_REG_SOFT_RST		0x104

@@ -235,7 +236,8 @@
 #define DESC_NUM	256

 #define NETSEC_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#define NETSEC_RX_BUF_SZ 1536
+#define NETSEC_RX_BUF_NON_DATA (NETSEC_SKB_PAD + \
+				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))

 #define DESC_SZ	sizeof(struct netsec_de)

@@ -258,6 +260,8 @@ struct netsec_desc_ring {
 	struct netsec_desc *desc;
 	void *vaddr;
 	u16 head, tail;
+	struct page_pool *page_pool;
+	struct xdp_rxq_info xdp_rxq;
 };

 struct netsec_priv {

@@ -673,33 +677,27 @@ static void netsec_process_tx(struct netsec_priv *priv)
 }

 static void *netsec_alloc_rx_data(struct netsec_priv *priv,
-				  dma_addr_t *dma_handle, u16 *desc_len,
-				  bool napi)
+				  dma_addr_t *dma_handle, u16 *desc_len)
+
 {
-	size_t total_len = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-	size_t payload_len = NETSEC_RX_BUF_SZ;
-	dma_addr_t mapping;
-	void *buf;
-
-	total_len += SKB_DATA_ALIGN(payload_len + NETSEC_SKB_PAD);
+	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+	struct page *page;

-	buf = napi ? napi_alloc_frag(total_len) : netdev_alloc_frag(total_len);
-	if (!buf)
+	page = page_pool_dev_alloc_pages(dring->page_pool);
+	if (!page)
 		return NULL;

-	mapping = dma_map_single(priv->dev, buf + NETSEC_SKB_PAD, payload_len,
-				 DMA_FROM_DEVICE);
-	if (unlikely(dma_mapping_error(priv->dev, mapping)))
-		goto err_out;
-
-	*dma_handle = mapping;
-	*desc_len = payload_len;
-
-	return buf;
+	/* page_pool API will map the whole page, skip
+	 * NET_SKB_PAD + NET_IP_ALIGN for the payload
+	 */
+	*dma_handle = page_pool_get_dma_addr(page) + NETSEC_SKB_PAD;
+	/* make sure the incoming payload fits in the page with the needed
+	 * NET_SKB_PAD + NET_IP_ALIGN + skb_shared_info
+	 */
+	*desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;

-err_out:
-	skb_free_frag(buf);
-	return NULL;
+	return page_address(page);
 }

 static void netsec_rx_fill(struct netsec_priv *priv, u16 from, u16 num)

@@ -728,10 +726,10 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 		u16 idx = dring->tail;
 		struct netsec_de *de = dring->vaddr + (DESC_SZ * idx);
 		struct netsec_desc *desc = &dring->desc[idx];
+		struct page *page = virt_to_page(desc->addr);
 		u16 pkt_len, desc_len;
 		dma_addr_t dma_handle;
 		void *buf_addr;
-		u32 truesize;

 		if (de->attr & (1U << NETSEC_RX_PKT_OWN_FIELD)) {
 			/* reading the register clears the irq */

@@ -766,8 +764,8 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 		/* allocate a fresh buffer and map it to the hardware.
 		 * This will eventually replace the old buffer in the hardware
 		 */
-		buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len,
-						true);
+		buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len);
+
 		if (unlikely(!buf_addr))
 			break;

@@ -775,22 +773,19 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 					DMA_FROM_DEVICE);
 		prefetch(desc->addr);

-		truesize = SKB_DATA_ALIGN(desc->len + NETSEC_SKB_PAD) +
-			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-		skb = build_skb(desc->addr, truesize);
+		skb = build_skb(desc->addr, desc->len + NETSEC_RX_BUF_NON_DATA);
 		if (unlikely(!skb)) {
-			/* free the newly allocated buffer, we are not going to
-			 * use it
+			/* If skb fails recycle_direct will either unmap and
+			 * free the page or refill the cache depending on the
+			 * cache state. Since we paid the allocation cost if
+			 * building an skb fails try to put the page into cache
 			 */
-			dma_unmap_single(priv->dev, dma_handle, desc_len,
-					 DMA_FROM_DEVICE);
-			skb_free_frag(buf_addr);
+			page_pool_recycle_direct(dring->page_pool, page);
 			netif_err(priv, drv, priv->ndev,
 				  "rx failed to build skb\n");
 			break;
 		}
-		dma_unmap_single_attrs(priv->dev, desc->dma_addr, desc->len,
-				       DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
+		page_pool_release_page(dring->page_pool, page);

 		/* Update the descriptor with the new buffer we allocated */
 		desc->len = desc_len;

@@ -980,21 +975,26 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id)
 	if (!dring->vaddr || !dring->desc)
 		return;
-
 	for (idx = 0; idx < DESC_NUM; idx++) {
 		desc = &dring->desc[idx];
 		if (!desc->addr)
 			continue;

-		dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
-				 id == NETSEC_RING_RX ? DMA_FROM_DEVICE :
-						       DMA_TO_DEVICE);
-		if (id == NETSEC_RING_RX)
-			skb_free_frag(desc->addr);
-		else if (id == NETSEC_RING_TX)
+		if (id == NETSEC_RING_RX) {
+			struct page *page = virt_to_page(desc->addr);
+
+			page_pool_put_page(dring->page_pool, page, false);
+		} else if (id == NETSEC_RING_TX) {
+			dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
+					 DMA_TO_DEVICE);
 			dev_kfree_skb(desc->skb);
+		}
 	}

+	/* Rx is currently using page_pool */
+	if (xdp_rxq_info_is_reg(&dring->xdp_rxq))
+		xdp_rxq_info_unreg(&dring->xdp_rxq);
+
 	memset(dring->desc, 0, sizeof(struct netsec_desc) * DESC_NUM);
 	memset(dring->vaddr, 0, DESC_SZ * DESC_NUM);

@@ -1059,7 +1059,23 @@ static void netsec_setup_tx_dring(struct netsec_priv *priv)
 static int netsec_setup_rx_dring(struct netsec_priv *priv)
 {
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
-	int i;
+	struct page_pool_params pp_params = { 0 };
+	int i, err;
+
+	pp_params.order = 0;
+	/* internal DMA mapping in page_pool */
+	pp_params.flags = PP_FLAG_DMA_MAP;
+	pp_params.pool_size = DESC_NUM;
+	pp_params.nid = cpu_to_node(0);
+	pp_params.dev = priv->dev;
+	pp_params.dma_dir = DMA_FROM_DEVICE;
+
+	dring->page_pool = page_pool_create(&pp_params);
+	if (IS_ERR(dring->page_pool)) {
+		err = PTR_ERR(dring->page_pool);
+		dring->page_pool = NULL;
+		goto err_out;
+	}

 	for (i = 0; i < DESC_NUM; i++) {
 		struct netsec_desc *desc = &dring->desc[i];

@@ -1067,10 +1083,10 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
 		void *buf;
 		u16 len;

-		buf = netsec_alloc_rx_data(priv, &dma_handle, &len,
-					   false);
+		buf = netsec_alloc_rx_data(priv, &dma_handle, &len);
+
 		if (!buf) {
-			netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+			err = -ENOMEM;
 			goto err_out;
 		}
 		desc->dma_addr = dma_handle;

@@ -1079,11 +1095,22 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
 	}

 	netsec_rx_fill(priv, 0, DESC_NUM);
+	err = xdp_rxq_info_reg(&dring->xdp_rxq, priv->ndev, 0);
+	if (err)
+		goto err_out;
+
+	err = xdp_rxq_info_reg_mem_model(&dring->xdp_rxq, MEM_TYPE_PAGE_POOL,
+					 dring->page_pool);
+	if (err) {
+		page_pool_free(dring->page_pool);
+		goto err_out;
+	}

 	return 0;

 err_out:
-	return -ENOMEM;
+	netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+	return err;
 }

 static int netsec_netdev_load_ucode_region(struct netsec_priv *priv, u32 reg,
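
Taken together, the Rx changes above reduce to a small page_pool recipe:
create one pool per Rx ring with PP_FLAG_DMA_MAP so the pool DMA-maps
pages internally, then pull pre-mapped pages instead of calling
dma_map_single() per buffer. The sketch below condenses that pattern
from the diff; it is illustrative only (generic names, simplified error
handling), not the exact driver code:

```c
/* Illustrative sketch of the page_pool pattern used above -- condensed
 * from the patch, with generic names; not the exact driver code.
 */
#include <linux/dma-direction.h>
#include <net/page_pool.h>

static struct page_pool *rx_pool_create(struct device *dev)
{
	struct page_pool_params pp_params = {
		.order		= 0,			/* one page per Rx buffer */
		.flags		= PP_FLAG_DMA_MAP,	/* pool maps pages itself */
		.pool_size	= 256,			/* i.e. DESC_NUM, one per descriptor */
		.nid		= cpu_to_node(0),
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	/* returns ERR_PTR() on failure, so callers check with IS_ERR() */
	return page_pool_create(&pp_params);
}

static void *rx_buf_alloc(struct page_pool *pool, dma_addr_t *dma_handle)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return NULL;

	/* whole page is already DMA-mapped; just skip the headroom */
	*dma_handle = page_pool_get_dma_addr(page) + NET_SKB_PAD + NET_IP_ALIGN;
	return page_address(page);
}
```

On the error paths, pages go back through page_pool_recycle_direct() or
page_pool_put_page() rather than being unmapped and freed, which is what
leaves room for the recycling scheme the commit message says will remove
the ~10% small-packet penalty.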
From patchwork Tue Jun 25 15:06:19 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ilias Apalodimas
X-Patchwork-Id: 167732
From: Ilias Apalodimas
To: netdev@vger.kernel.org, jaswinder.singh@linaro.org
Cc: ard.biesheuvel@linaro.org, bjorn.topel@intel.com,
    magnus.karlsson@intel.com, brouer@redhat.com, daniel@iogearbox.net,
    ast@kernel.org, makita.toshiaki@lab.ntt.co.jp,
    jakub.kicinski@netronome.com, john.fastabend@gmail.com,
    davem@davemloft.net, Ilias Apalodimas
Subject: [RFC, PATCH 2/2, net-next] net: netsec: add XDP support
Date: Tue, 25 Jun 2019 18:06:19 +0300
Message-Id: <1561475179-7686-3-git-send-email-ilias.apalodimas@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1561475179-7686-1-git-send-email-ilias.apalodimas@linaro.org>
References: <1561475179-7686-1-git-send-email-ilias.apalodimas@linaro.org>

The interface only supports 1 Tx queue, so locking is introduced on the
Tx queue if XDP is enabled to make sure .ndo_start_xmit and
.ndo_xdp_xmit won't corrupt the Tx ring.

- Performance (SMMU off)

  Benchmark   XDP_SKB   XDP_DRV
  xdp1        291kpps   344kpps
  rxdrop      282kpps   342kpps

- Performance (SMMU on)

  Benchmark   XDP_SKB   XDP_DRV
  xdp1        167kpps   324kpps
  rxdrop      164kpps   323kpps

Signed-off-by: Ilias Apalodimas
---
 drivers/net/ethernet/socionext/netsec.c | 351 ++++++++++++++++++++++--
 1 file changed, 325 insertions(+), 26 deletions(-)

-- 
2.20.1

diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index e653b24d0534..c7c7e5119b46 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -9,6 +9,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 #include
 #include

@@ -236,23 +239,41 @@
 #define DESC_NUM	256

 #define NETSEC_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#define NETSEC_RX_BUF_NON_DATA (NETSEC_SKB_PAD + \
+#define NETSEC_RXBUF_HEADROOM (max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + \
+			       NET_IP_ALIGN)
+#define NETSEC_RX_BUF_NON_DATA (NETSEC_RXBUF_HEADROOM + \
 				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))

 #define DESC_SZ	sizeof(struct netsec_de)

 #define NETSEC_F_NETSEC_VER_MAJOR_NUM(x)	((x) & 0xffff0000)

+#define NETSEC_XDP_PASS		0
+#define NETSEC_XDP_CONSUMED	BIT(0)
+#define NETSEC_XDP_TX		BIT(1)
+#define NETSEC_XDP_REDIR	BIT(2)
+#define NETSEC_XDP_RX_OK (NETSEC_XDP_PASS | NETSEC_XDP_TX | NETSEC_XDP_REDIR)
+
 enum ring_id {
 	NETSEC_RING_TX = 0,
 	NETSEC_RING_RX
 };

+enum buf_type {
+	TYPE_NETSEC_SKB = 0,
+	TYPE_NETSEC_XDP_TX,
+	TYPE_NETSEC_XDP_NDO,
+};
+
 struct netsec_desc {
-	struct sk_buff *skb;
+	union {
+		struct sk_buff *skb;
+		struct xdp_frame *xdpf;
+	};
 	dma_addr_t dma_addr;
 	void *addr;
 	u16 len;
+	u8 buf_type;
 };

@@ -260,13 +281,17 @@ struct netsec_desc_ring {
 	struct netsec_desc *desc;
 	void *vaddr;
 	u16 head, tail;
+	u16 xdp_xmit; /* netsec_xdp_xmit packets */
+	bool is_xdp;
 	struct page_pool *page_pool;
 	struct xdp_rxq_info xdp_rxq;
+	spinlock_t lock; /* XDP tx queue locking */
 };

 struct netsec_priv {
 	struct netsec_desc_ring desc_ring[NETSEC_RING_MAX];
 	struct ethtool_coalesce et_coalesce;
+	struct bpf_prog *xdp_prog;
 	spinlock_t reglock; /* protect reg access */
 	struct napi_struct napi;
 	phy_interface_t phy_interface;

@@ -303,6 +328,11 @@ struct netsec_rx_pkt_info {
 	bool err_flag;
 };

+static void netsec_set_tx_de(struct netsec_priv *priv,
+			     struct netsec_desc_ring *dring,
+			     const struct netsec_tx_pkt_ctrl *tx_ctrl,
+			     const struct netsec_desc *desc, void *buf);
+
 static void netsec_write(struct netsec_priv *priv, u32 reg_addr, u32 val)
 {
 	writel(val, priv->ioaddr + reg_addr);

@@ -609,6 +639,9 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv)
 	int tail = dring->tail;
 	int cnt = 0;

+	if (dring->is_xdp)
+		spin_lock(&dring->lock);
+
 	pkts = 0;
 	bytes = 0;
 	entry = dring->vaddr + DESC_SZ * tail;

@@ -622,16 +655,24 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv)
 		eop = (entry->attr >> NETSEC_TX_LAST) & 1;
 		dma_rmb();

-		dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
-				 DMA_TO_DEVICE);
-		if (eop) {
-			pkts++;
+		if (!eop)
+			goto next;
+
+		if (desc->buf_type == TYPE_NETSEC_SKB) {
+			dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
+					 DMA_TO_DEVICE);
 			bytes += desc->skb->len;
 			dev_kfree_skb(desc->skb);
+		} else {
+			if (desc->buf_type == TYPE_NETSEC_XDP_NDO)
+				dma_unmap_single(priv->dev, desc->dma_addr,
+						 desc->len, DMA_TO_DEVICE);
+			xdp_return_frame(desc->xdpf);
 		}

 		/* clean up so netsec_uninit_pkt_dring() won't free the skb
 		 * again
 		 */
+next:
 		*desc = (struct netsec_desc){};

 		/* entry->attr is not going to be accessed by the NIC until

@@ -645,6 +686,8 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv)
 		entry = dring->vaddr + DESC_SZ * tail;
 		cnt++;
 	}
+	if (dring->is_xdp)
+		spin_unlock(&dring->lock);

 	if (!cnt)
 		return false;

@@ -688,12 +731,13 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv,
 	if (!page)
 		return NULL;

-	/* page_pool API will map the whole page, skip
-	 * NET_SKB_PAD + NET_IP_ALIGN for the payload
+	/* We allocate the same buffer length for XDP and non-XDP cases.
+	 * page_pool API will map the whole page, skip what's needed for
+	 * network payloads and/or XDP
 	 */
-	*dma_handle = page_pool_get_dma_addr(page) + NETSEC_SKB_PAD;
-	/* make sure the incoming payload fits in the page with the needed
-	 * NET_SKB_PAD + NET_IP_ALIGN + skb_shared_info
+	*dma_handle = page_pool_get_dma_addr(page) + NETSEC_RXBUF_HEADROOM;
+	/* Make sure the incoming payload fits in the page for XDP and non-XDP
+	 * cases and reserve enough space for headroom + skb_shared_info
 	 */
 	*desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;

@@ -714,12 +758,144 @@ static void netsec_rx_fill(struct netsec_priv *priv, u16 from, u16 num)
 	}
 }

+static void netsec_xdp_ring_tx_db(struct netsec_priv *priv, u16 pkts)
+{
+	if (likely(pkts))
+		netsec_write(priv, NETSEC_REG_NRM_TX_PKTCNT, pkts);
+}
+
+static void netsec_finalize_xdp_rx(struct netsec_priv *priv, u32 xdp_res,
+				   u16 pkts)
+{
+	if (xdp_res & NETSEC_XDP_REDIR)
+		xdp_do_flush_map();
+
+	if (xdp_res & NETSEC_XDP_TX)
+		netsec_xdp_ring_tx_db(priv, pkts);
+}
+
+/* The current driver only supports 1 Txq, this should run under spin_lock() */
+static u32 netsec_xdp_queue_one(struct netsec_priv *priv,
+				struct xdp_frame *xdpf, bool is_ndo)
+
+{
+	struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX];
+	struct page *page = virt_to_page(xdpf->data);
+	struct netsec_tx_pkt_ctrl tx_ctrl = {};
+	struct netsec_desc tx_desc;
+	dma_addr_t dma_handle;
+	u16 filled;
+
+	if (tx_ring->head >= tx_ring->tail)
+		filled = tx_ring->head - tx_ring->tail;
+	else
+		filled = tx_ring->head + DESC_NUM - tx_ring->tail;
+
+	if (DESC_NUM - filled <= 1)
+		return NETSEC_XDP_CONSUMED;
+
+	if (is_ndo) {
+		/* this is for ndo_xdp_xmit, the buffer needs mapping before
+		 * sending
+		 */
+		dma_handle = dma_map_single(priv->dev, xdpf->data, xdpf->len,
+					    DMA_TO_DEVICE);
+		if (dma_mapping_error(priv->dev, dma_handle))
+			return NETSEC_XDP_CONSUMED;
+		tx_desc.buf_type = TYPE_NETSEC_XDP_NDO;
+	} else {
+		/* This is the device Rx buffer from page_pool. No need to remap
+		 * just sync and send it
+		 */
+		dma_handle = page_pool_get_dma_addr(page) +
+			NETSEC_RXBUF_HEADROOM;
+		dma_sync_single_for_device(priv->dev, dma_handle, xdpf->len,
+					   DMA_BIDIRECTIONAL);
+		tx_desc.buf_type = TYPE_NETSEC_XDP_TX;
+	}
+	tx_ctrl.cksum_offload_flag = false;
+	tx_ctrl.tcp_seg_offload_flag = false;
+	tx_ctrl.tcp_seg_len = 0;
+
+	tx_desc.dma_addr = dma_handle;
+	tx_desc.addr = xdpf->data;
+	tx_desc.len = xdpf->len;
+
+	netsec_set_tx_de(priv, tx_ring, &tx_ctrl, &tx_desc, xdpf);
+
+	return NETSEC_XDP_TX;
+}
+
+static u32 netsec_xdp_xmit_back(struct netsec_priv *priv, struct xdp_buff *xdp)
+{
+	struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX];
+	struct xdp_frame *xdpf = convert_to_xdp_frame(xdp);
+	u32 ret;
+
+	if (unlikely(!xdpf))
+		return NETSEC_XDP_CONSUMED;
+
+	spin_lock(&tx_ring->lock);
+	ret = netsec_xdp_queue_one(priv, xdpf, false);
+	spin_unlock(&tx_ring->lock);
+
+	return ret;
+}
+
+static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog,
+			  struct xdp_buff *xdp)
+{
+	u32 ret = NETSEC_XDP_PASS;
+	int err;
+	u32 act;
+
+	rcu_read_lock();
+	act = bpf_prog_run_xdp(prog, xdp);
+
+	switch (act) {
+	case XDP_PASS:
+		ret = NETSEC_XDP_PASS;
+		break;
+	case XDP_TX:
+		ret = netsec_xdp_xmit_back(priv, xdp);
+		if (ret != NETSEC_XDP_TX)
+			xdp_return_buff(xdp);
+		break;
+	case XDP_REDIRECT:
+		err = xdp_do_redirect(priv->ndev, xdp, prog);
+		if (!err) {
+			ret = NETSEC_XDP_REDIR;
+		} else {
+			ret = NETSEC_XDP_CONSUMED;
+			xdp_return_buff(xdp);
+		}
+		break;
+	default:
+		bpf_warn_invalid_xdp_action(act);
+		/* fall through */
+	case XDP_ABORTED:
+		trace_xdp_exception(priv->ndev, prog, act);
+		/* fall through -- handle aborts by dropping packet */
+	case XDP_DROP:
+		ret = NETSEC_XDP_CONSUMED;
+		xdp_return_buff(xdp);
+		break;
+	}
+
+	rcu_read_unlock();
+
+	return ret;
+}
+
 static int netsec_process_rx(struct netsec_priv *priv, int budget)
 {
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+	struct bpf_prog *xdp_prog = READ_ONCE(priv->xdp_prog);
 	struct net_device *ndev = priv->ndev;
 	struct netsec_rx_pkt_info rx_info;
-	struct sk_buff *skb;
+	struct sk_buff *skb = NULL;
+	u16 xdp_xmit = 0;
+	u32 xdp_act = 0;
 	int done = 0;

 	while (done < budget) {

@@ -727,8 +903,10 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 		struct netsec_de *de = dring->vaddr + (DESC_SZ * idx);
 		struct netsec_desc *desc = &dring->desc[idx];
 		struct page *page = virt_to_page(desc->addr);
+		u32 xdp_result = XDP_PASS;
 		u16 pkt_len, desc_len;
 		dma_addr_t dma_handle;
+		struct xdp_buff xdp;
 		void *buf_addr;

 		if (de->attr & (1U << NETSEC_RX_PKT_OWN_FIELD)) {

@@ -773,7 +951,23 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 					DMA_FROM_DEVICE);
 		prefetch(desc->addr);

+		xdp.data_hard_start = desc->addr;
+		xdp.data = desc->addr + NETSEC_RXBUF_HEADROOM;
+		xdp_set_data_meta_invalid(&xdp);
+		xdp.data_end = xdp.data + pkt_len;
+		xdp.rxq = &dring->xdp_rxq;
+
+		if (xdp_prog) {
+			xdp_result = netsec_run_xdp(priv, xdp_prog, &xdp);
+			if (xdp_result != NETSEC_XDP_PASS) {
+				xdp_act |= xdp_result;
+				if (xdp_result == NETSEC_XDP_TX)
+					xdp_xmit++;
+				goto next;
+			}
+		}
 		skb = build_skb(desc->addr, desc->len + NETSEC_RX_BUF_NON_DATA);
+
 		if (unlikely(!skb)) {
 			/* If skb fails recycle_direct will either unmap and
 			 * free the page or refill the cache depending on the

@@ -787,27 +981,30 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
 		}
 		page_pool_release_page(dring->page_pool, page);

-		/* Update the descriptor with the new buffer we allocated */
-		desc->len = desc_len;
-		desc->dma_addr = dma_handle;
-		desc->addr = buf_addr;
-
-		skb_reserve(skb, NETSEC_SKB_PAD);
-		skb_put(skb, pkt_len);
+		skb_reserve(skb, xdp.data - xdp.data_hard_start);
+		skb_put(skb, xdp.data_end - xdp.data);
 		skb->protocol = eth_type_trans(skb, priv->ndev);

 		if (priv->rx_cksum_offload_flag &&
 		    rx_info.rx_cksum_result == NETSEC_RX_CKSUM_OK)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;

-		if (napi_gro_receive(&priv->napi, skb) != GRO_DROP) {
+next:
+		if ((skb && napi_gro_receive(&priv->napi, skb) != GRO_DROP) ||
+		    xdp_result & NETSEC_XDP_RX_OK) {
 			ndev->stats.rx_packets++;
-			ndev->stats.rx_bytes += pkt_len;
+			ndev->stats.rx_bytes += xdp.data_end - xdp.data;
 		}

+		/* Update the descriptor with fresh buffers */
+		desc->len = desc_len;
+		desc->dma_addr = dma_handle;
+		desc->addr = buf_addr;
+
 		netsec_rx_fill(priv, idx, 1);
 		dring->tail = (dring->tail + 1) % DESC_NUM;
 	}
+	netsec_finalize_xdp_rx(priv, xdp_act, xdp_xmit);

 	return done;
 }

@@ -837,8 +1034,7 @@ static int netsec_napi_poll(struct napi_struct *napi, int budget)
 static void netsec_set_tx_de(struct netsec_priv *priv,
 			     struct netsec_desc_ring *dring,
 			     const struct netsec_tx_pkt_ctrl *tx_ctrl,
-			     const struct netsec_desc *desc,
-			     struct sk_buff *skb)
+			     const struct netsec_desc *desc, void *buf)
 {
 	int idx = dring->head;
 	struct netsec_de *de;

@@ -861,10 +1057,16 @@ static void netsec_set_tx_de(struct netsec_priv *priv,
 	de->data_buf_addr_lw = lower_32_bits(desc->dma_addr);
 	de->buf_len_info = (tx_ctrl->tcp_seg_len << 16) | desc->len;
 	de->attr = attr;
-	dma_wmb();
+	/* under spin_lock if using XDP */
+	if (!dring->is_xdp)
+		dma_wmb();

 	dring->desc[idx] = *desc;
-	dring->desc[idx].skb = skb;
+	if (desc->buf_type == TYPE_NETSEC_SKB)
+		dring->desc[idx].skb = buf;
+	else if (desc->buf_type == TYPE_NETSEC_XDP_TX ||
+		 desc->buf_type == TYPE_NETSEC_XDP_NDO)
+		dring->desc[idx].xdpf = buf;

 	/* move head ahead */
 	dring->head = (dring->head + 1) % DESC_NUM;

@@ -915,8 +1117,12 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
 	u16 tso_seg_len = 0;
 	int filled;

+	if (dring->is_xdp)
+		spin_lock_bh(&dring->lock);
 	filled = netsec_desc_used(dring);
 	if (netsec_check_stop_tx(priv, filled)) {
+		if (dring->is_xdp)
+			spin_unlock_bh(&dring->lock);
 		net_warn_ratelimited("%s %s Tx queue full\n",
 				     dev_name(priv->dev), ndev->name);
 		return NETDEV_TX_BUSY;

@@ -949,6 +1155,8 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
 	tx_desc.dma_addr = dma_map_single(priv->dev, skb->data,
 					  skb_headlen(skb), DMA_TO_DEVICE);
 	if (dma_mapping_error(priv->dev, tx_desc.dma_addr)) {
+		if (dring->is_xdp)
+			spin_unlock_bh(&dring->lock);
 		netif_err(priv, drv, priv->ndev,
 			  "%s: DMA mapping failed\n", __func__);
 		ndev->stats.tx_dropped++;

@@ -957,11 +1165,14 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
 	}
 	tx_desc.addr = skb->data;
 	tx_desc.len = skb_headlen(skb);
+	tx_desc.buf_type = TYPE_NETSEC_SKB;

 	skb_tx_timestamp(skb);
 	netdev_sent_queue(priv->ndev, skb->len);

 	netsec_set_tx_de(priv, dring, &tx_ctrl, &tx_desc, skb);
+	if (dring->is_xdp)
+		spin_unlock_bh(&dring->lock);
 	netsec_write(priv, NETSEC_REG_NRM_TX_PKTCNT, 1); /* submit another tx */

 	return NETDEV_TX_OK;

@@ -1042,6 +1253,7 @@ static int netsec_alloc_dring(struct netsec_priv *priv, enum ring_id id)
 static void netsec_setup_tx_dring(struct netsec_priv *priv)
 {
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_TX];
+	struct bpf_prog *xdp_prog = READ_ONCE(priv->xdp_prog);
 	int i;

 	for (i = 0; i < DESC_NUM; i++) {
@@ -1054,11 +1266,18 @@ static void netsec_setup_tx_dring(struct netsec_priv *priv)
 		 */
 		de->attr = 1U << NETSEC_TX_SHIFT_OWN_FIELD;
 	}
+
+	if (xdp_prog)
+		dring->is_xdp = true;
+	else
+		dring->is_xdp = false;
+
 }

 static int netsec_setup_rx_dring(struct netsec_priv *priv)
 {
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+	struct bpf_prog *xdp_prog = READ_ONCE(priv->xdp_prog);
 	struct page_pool_params pp_params = { 0 };
 	int i, err;

@@ -1068,7 +1287,7 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
 	pp_params.pool_size = DESC_NUM;
 	pp_params.nid = cpu_to_node(0);
 	pp_params.dev = priv->dev;
-	pp_params.dma_dir = DMA_FROM_DEVICE;
+	pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;

 	dring->page_pool = page_pool_create(&pp_params);
 	if (IS_ERR(dring->page_pool)) {

@@ -1490,6 +1709,9 @@ static int netsec_netdev_init(struct net_device *ndev)
 	if (ret)
 		goto err2;

+	spin_lock_init(&priv->desc_ring[NETSEC_RING_TX].lock);
+	spin_lock_init(&priv->desc_ring[NETSEC_RING_RX].lock);
+
 	return 0;
 err2:
 	netsec_free_dring(priv, NETSEC_RING_RX);

@@ -1522,6 +1744,81 @@ static int netsec_netdev_ioctl(struct net_device *ndev, struct ifreq *ifr,
 	return phy_mii_ioctl(ndev->phydev, ifr, cmd);
 }

+static int netsec_xdp_xmit(struct net_device *ndev, int n,
+			   struct xdp_frame **frames, u32 flags)
+{
+	struct netsec_priv *priv = netdev_priv(ndev);
+	struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX];
+	int drops = 0;
+	int i;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	spin_lock(&tx_ring->lock);
+	for (i = 0; i < n; i++) {
+		struct xdp_frame *xdpf = frames[i];
+		int err;
+
+		err = netsec_xdp_queue_one(priv, xdpf, true);
+		if (err != NETSEC_XDP_TX) {
+			xdp_return_frame_rx_napi(xdpf);
+			drops++;
+		} else {
+			tx_ring->xdp_xmit++;
+		}
+	}
+	spin_unlock(&tx_ring->lock);
+
+	if (unlikely(flags & XDP_XMIT_FLUSH)) {
+		netsec_xdp_ring_tx_db(priv, tx_ring->xdp_xmit);
+		tx_ring->xdp_xmit = 0;
+	}
+
+	return n - drops;
+}
+
+static int netsec_xdp_setup(struct netsec_priv *priv, struct bpf_prog *prog,
+			    struct netlink_ext_ack *extack)
+{
+	struct net_device *dev = priv->ndev;
+	struct bpf_prog *old_prog;
+
+	/* For now just support only the usual MTU sized frames */
+	if (prog && dev->mtu > 1500) {
+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
+		return -EOPNOTSUPP;
+	}
+
+	if (netif_running(dev))
+		netsec_netdev_stop(dev);
+
+	/* Detach old prog, if any */
+	old_prog = xchg(&priv->xdp_prog, prog);
+	if (old_prog)
+		bpf_prog_put(old_prog);
+
+	if (netif_running(dev))
+		netsec_netdev_open(dev);
+
+	return 0;
+}
+
+static int netsec_xdp(struct net_device *ndev, struct netdev_bpf *xdp)
+{
+	struct netsec_priv *priv = netdev_priv(ndev);
+
+	switch (xdp->command) {
+	case XDP_SETUP_PROG:
+		return netsec_xdp_setup(priv, xdp->prog, xdp->extack);
+	case XDP_QUERY_PROG:
+		xdp->prog_id = priv->xdp_prog ? priv->xdp_prog->aux->id : 0;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
 static const struct net_device_ops netsec_netdev_ops = {
 	.ndo_init		= netsec_netdev_init,
 	.ndo_uninit		= netsec_netdev_uninit,
@@ -1532,6 +1829,8 @@ static const struct net_device_ops netsec_netdev_ops = {
 	.ndo_set_mac_address	= eth_mac_addr,
 	.ndo_validate_addr	= eth_validate_addr,
 	.ndo_do_ioctl		= netsec_netdev_ioctl,
+	.ndo_xdp_xmit		= netsec_xdp_xmit,
+	.ndo_bpf		= netsec_xdp,
 };

 static int netsec_of_probe(struct platform_device *pdev,
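
To put the benchmark table above in context: the xdp1/rxdrop style tests
measure how fast the driver can drop packets from the XDP hook, which
exercises exactly the path this patch adds (on XDP_DROP the Rx page goes
straight back to the page_pool without ever building an skb). A minimal
drop program of that kind looks like the sketch below; this is an
illustrative example, not the exact tool used for the quoted numbers:

```c
/* Minimal XDP drop program -- an illustrative sketch of what
 * xdp1/rxdrop style benchmarks exercise; not the exact tool used
 * for the numbers above. Builds with clang -target bpf.
 */
#include <linux/bpf.h>

#ifndef SEC
#define SEC(name) __attribute__((section(name), used))
#endif

SEC("xdp")
int xdp_drop_prog(struct xdp_md *ctx)
{
	/* XDP_DROP hands the Rx page straight back to the driver's
	 * page_pool instead of going through the skb path.
	 */
	return XDP_DROP;
}

char _license[] SEC("license") = "GPL";
```

The XDP_SKB column runs such a program in generic mode, after an skb has
already been built; XDP_DRV uses the native hook added by this patch,
which is where the gap between the two columns comes from.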