From patchwork Thu Feb 11 18:54:57 2021
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 382310
Date: Thu, 11 Feb 2021 18:54:57 +0000
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Jonathan Lemon, Eric Dumazet, Dmitry Vyukov, Willem de Bruijn,
    Alexander Lobakin, Randy Dunlap, Kevin Hao, Pablo Neira Ayuso,
    Jakub Sitnicki, Marco Elver, Dexuan Cui, Paolo Abeni,
    Jesper Dangaard Brouer, Alexander Duyck, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Taehee Yoo, Cong Wang,
    Björn Töpel, Miaohe Lin, Guillaume Nault, Yonghong Song, zhudi,
    Michal Kubecek, Marcelo Ricardo Leitner,
    Dmitry Safonov <0x7f454c46@gmail.com>, Yang Yingliang,
    Florian Westphal, Edward Cree,
    linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v5 net-next 09/11] skbuff: allow to optionally use NAPI cache
 from __alloc_skb()
Message-ID: <20210211185220.9753-10-alobakin@pm.me>
In-Reply-To: <20210211185220.9753-1-alobakin@pm.me>
References: <20210211185220.9753-1-alobakin@pm.me>
X-Mailing-List: netdev@vger.kernel.org

Reuse the old and forgotten SKB_ALLOC_NAPI flag to add an option to
get an skbuff_head from the NAPI cache instead of an in-place slab
allocation inside __alloc_skb(). This implies that the function is
called from softirq or BH-disabled context, is not allocating a
clone, and is not allocating from a distant NUMA node.
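For context, a hypothetical caller sketch (not part of this patch): from
a NAPI poll routine, i.e. softirq context with BHs disabled, a driver
could now ask __alloc_skb() for a cached head. The function name
my_rx_alloc_skb() is made up for illustration; the flags and node value
mirror the conditions checked in the hunk below.

	/* Hypothetical caller sketch: called from a NAPI poll routine
	 * (softirq, BHs disabled), so the per-CPU NAPI cache is safe
	 * to use. The cache is bypassed for fclones and for
	 * remote-node allocations.
	 */
	static struct sk_buff *my_rx_alloc_skb(unsigned int len)
	{
		return __alloc_skb(len, GFP_ATOMIC,
				   SKB_ALLOC_RX | SKB_ALLOC_NAPI,
				   NUMA_NO_NODE);
	}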
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 net/core/skbuff.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 9e1a8ded4acc..a0b457ae87c2 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -397,15 +397,20 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	struct sk_buff *skb;
 	u8 *data;
 	bool pfmemalloc;
+	bool clone;
 
-	cache = (flags & SKB_ALLOC_FCLONE)
-		? skbuff_fclone_cache : skbuff_head_cache;
+	clone = !!(flags & SKB_ALLOC_FCLONE);
+	cache = clone ? skbuff_fclone_cache : skbuff_head_cache;
 
 	if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
 		gfp_mask |= __GFP_MEMALLOC;
 
 	/* Get the HEAD */
-	skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
+	if ((flags & SKB_ALLOC_NAPI) && !clone &&
+	    likely(node == NUMA_NO_NODE || node == numa_mem_id()))
+		skb = napi_skb_cache_get();
+	else
+		skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);
 	if (unlikely(!skb))
 		return NULL;
 	prefetchw(skb);
@@ -436,7 +441,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	__build_skb_around(skb, data, 0);
 	skb->pfmemalloc = pfmemalloc;
 
-	if (flags & SKB_ALLOC_FCLONE) {
+	if (clone) {
 		struct sk_buff_fclones *fclones;
 
 		fclones = container_of(skb, struct sk_buff_fclones, skb1);
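For readers looking at this hunk in isolation: napi_skb_cache_get() is
added earlier in this series. A simplified sketch of the idea, assuming
the per-CPU napi_alloc_cache and a NAPI_SKB_CACHE_BULK refill batch from
the same series (details may differ from the actual helper; KASAN hooks
omitted):

	/* Simplified sketch, not the exact helper from this series:
	 * pop an skbuff_head from the per-CPU NAPI cache, bulk-refilling
	 * it from skbuff_head_cache when it runs empty.
	 */
	static struct sk_buff *napi_skb_cache_get(void)
	{
		struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

		/* Refill in bulk from the slab when the cache is empty */
		if (unlikely(!nc->skb_count))
			nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
							      GFP_ATOMIC,
							      NAPI_SKB_CACHE_BULK,
							      (void **)nc->skb_cache);
		if (unlikely(!nc->skb_count))
			return NULL;

		/* Pop the most recently stashed, i.e. cache-hot, head */
		return nc->skb_cache[--nc->skb_count];
	}

The !clone and node checks in the hunk above keep fclone and remote-node
allocations on the plain kmem_cache path, since the NAPI cache holds only
skbuff_head_cache objects allocated on the local node.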