From patchwork Mon Jan 11 18:28:21 2021
Date: Mon, 11 Jan 2021 18:28:21 +0000
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Eric Dumazet, Edward Cree, Jonathan Lemon, Willem de Bruijn,
 Miaohe Lin, Alexander Lobakin, Steffen Klassert, Guillaume Nault,
 Yadu Kishore, Al Viro, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 1/5] skbuff: rename fields of struct napi_alloc_cache
 to be more intuitive
Message-ID: <20210111182801.12609-1-alobakin@pm.me>
In-Reply-To: <20210111182655.12159-1-alobakin@pm.me>
References: <20210111182655.12159-1-alobakin@pm.me>

The skb_cache and skb_count fields store skbuff_heads queued for freeing
so they can be flushed in bulk; they aren't related to the allocation
path. Give them more obvious names to make the code easier to follow and
to allow expanding this struct with more allocation-related elements.

Misc: indent the struct napi_alloc_cache declaration for better
readability.

Signed-off-by: Alexander Lobakin
---
 net/core/skbuff.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 7626a33cce59..17ae5e90f103 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -366,9 +366,9 @@ EXPORT_SYMBOL(build_skb_around);
 #define NAPI_SKB_CACHE_SIZE	64
 
 struct napi_alloc_cache {
-	struct page_frag_cache page;
-	unsigned int skb_count;
-	void *skb_cache[NAPI_SKB_CACHE_SIZE];
+	struct page_frag_cache	page;
+	u32			flush_skb_count;
+	void			*flush_skb_cache[NAPI_SKB_CACHE_SIZE];
 };
 
 static DEFINE_PER_CPU(struct page_frag_cache, netdev_alloc_cache);
@@ -860,11 +860,11 @@ void __kfree_skb_flush(void)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 
-	/* flush skb_cache if containing objects */
-	if (nc->skb_count) {
-		kmem_cache_free_bulk(skbuff_head_cache, nc->skb_count,
-				     nc->skb_cache);
-		nc->skb_count = 0;
+	/* flush flush_skb_cache if containing objects */
+	if (nc->flush_skb_count) {
+		kmem_cache_free_bulk(skbuff_head_cache, nc->flush_skb_count,
+				     nc->flush_skb_cache);
+		nc->flush_skb_count = 0;
 	}
 }
 
@@ -876,18 +876,18 @@ static inline void _kfree_skb_defer(struct sk_buff *skb)
 	skb_release_all(skb);
 
 	/* record skb to CPU local list */
-	nc->skb_cache[nc->skb_count++] = skb;
+	nc->flush_skb_cache[nc->flush_skb_count++] = skb;
 
 #ifdef CONFIG_SLUB
 	/* SLUB writes into objects when freeing */
 	prefetchw(skb);
 #endif
 
-	/* flush skb_cache if it is filled */
-	if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
+	/* flush flush_skb_cache if it is filled */
+	if (unlikely(nc->flush_skb_count == NAPI_SKB_CACHE_SIZE)) {
 		kmem_cache_free_bulk(skbuff_head_cache, NAPI_SKB_CACHE_SIZE,
-				     nc->skb_cache);
-		nc->skb_count = 0;
+				     nc->flush_skb_cache);
+		nc->flush_skb_count = 0;
 	}
 }
 
 void __kfree_skb_defer(struct sk_buff *skb)
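For readers outside the kernel tree, the deferred-free batching these renamed fields implement can be modelled in userspace. This is a hedged sketch, not kernel code: sk_buff is a stub, fake_free_bulk() stands in for kmem_cache_free_bulk(), and defer_free() is a simplified model of _kfree_skb_defer(); only the field names, NAPI_SKB_CACHE_SIZE and the flush-when-full logic follow the patch.

```c
#include <assert.h>
#include <stdint.h>

#define NAPI_SKB_CACHE_SIZE 64

typedef uint32_t u32;

struct sk_buff { int id; };	/* stub for the real sk_buff */

/* the cache after this patch: renamed flush_* fields only */
struct napi_alloc_cache {
	u32	flush_skb_count;
	void	*flush_skb_cache[NAPI_SKB_CACHE_SIZE];
};

static int bulk_free_calls;	/* how many times the "slab" bulk free ran */

/* stand-in for kmem_cache_free_bulk(): just count the batched calls */
static void fake_free_bulk(u32 count, void **objs)
{
	(void)count;
	(void)objs;
	bulk_free_calls++;
}

/* simplified model of _kfree_skb_defer(): queue, flush once full */
static void defer_free(struct napi_alloc_cache *nc, struct sk_buff *skb)
{
	nc->flush_skb_cache[nc->flush_skb_count++] = skb;

	if (nc->flush_skb_count == NAPI_SKB_CACHE_SIZE) {
		fake_free_bulk(NAPI_SKB_CACHE_SIZE, nc->flush_skb_cache);
		nc->flush_skb_count = 0;
	}
}
```

Queueing 64 heads triggers exactly one bulk free; that batching is why the cache exists, and why "flush_skb_cache" describes the field better than "skb_cache" did.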
From patchwork Mon Jan 11 18:29:00 2021
Date: Mon, 11 Jan 2021 18:29:00 +0000
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Eric Dumazet, Edward Cree, Jonathan Lemon, Willem de Bruijn,
 Miaohe Lin, Alexander Lobakin, Steffen Klassert, Guillaume Nault,
 Yadu Kishore, Al Viro, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 3/5] skbuff: reuse skbuff_heads from flush_skb_cache
 if available
Message-ID: <20210111182801.12609-3-alobakin@pm.me>
In-Reply-To: <20210111182801.12609-1-alobakin@pm.me>
References: <20210111182655.12159-1-alobakin@pm.me>
 <20210111182801.12609-1-alobakin@pm.me>

Instead of unconditionally allocating a new skbuff_head and
unconditionally flushing flush_skb_cache, reuse the heads queued up for
flushing if there are any. skbuff_heads stored in flush_skb_cache are
already unreferenced from any pages or extensions and almost ready for
use. We perform zeroing in __napi_alloc_skb() anyway, regardless of
where our skbuff_head came from.
Signed-off-by: Alexander Lobakin
---
 net/core/skbuff.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3c904c29efbb..0e8c597ff6ce 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -487,6 +487,9 @@ EXPORT_SYMBOL(__netdev_alloc_skb);
 
 static struct sk_buff *__napi_decache_skb(struct napi_alloc_cache *nc)
 {
+	if (nc->flush_skb_count)
+		return nc->flush_skb_cache[--nc->flush_skb_count];
+
 	return kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC);
 }

From patchwork Mon Jan 11 18:29:44 2021
Date: Mon, 11 Jan 2021 18:29:44 +0000
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Eric Dumazet, Edward Cree, Jonathan Lemon, Willem de Bruijn,
 Miaohe Lin, Alexander Lobakin, Steffen Klassert, Guillaume Nault,
 Yadu Kishore, Al Viro, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 5/5] skbuff: refill skb_cache early from
 deferred-to-consume entries
Message-ID: <20210111182801.12609-5-alobakin@pm.me>
In-Reply-To: <20210111182801.12609-1-alobakin@pm.me>
References: <20210111182655.12159-1-alobakin@pm.me>
 <20210111182801.12609-1-alobakin@pm.me>

Instead of unconditionally queueing ready-to-consume skbuff_heads to
flush_skb_cache, feed skb_cache with them if it isn't full already.
This greatly reduces the frequency of kmem_cache_alloc_bulk() calls.

Signed-off-by: Alexander Lobakin
---
 net/core/skbuff.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 57a7307689f3..ba0d5611635e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -904,6 +904,11 @@ static inline void _kfree_skb_defer(struct sk_buff *skb)
 	/* drop skb->head and call any destructors for packet */
 	skb_release_all(skb);
 
+	if (nc->skb_count < NAPI_SKB_CACHE_SIZE) {
+		nc->skb_cache[nc->skb_count++] = skb;
+		return;
+	}
+
 	/* record skb to CPU local list */
 	nc->flush_skb_cache[nc->flush_skb_count++] = skb;
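Taken together, the recycling patches in this series behave as in the following userspace sketch. This is an illustrative model under stated assumptions, not the kernel functions: sk_buff and the slab allocator are stubs, napi_decache_skb() approximates the reuse path added in 3/5, and kfree_skb_defer() approximates the early-refill path added in 5/5 (with the bulk free collapsed into a counter reset).

```c
#include <assert.h>
#include <stdint.h>

#define NAPI_SKB_CACHE_SIZE 64

typedef uint32_t u32;

struct sk_buff { int id; };	/* stub for the real sk_buff */

struct napi_alloc_cache {
	u32	skb_count;			/* heads ready for reuse */
	void	*skb_cache[NAPI_SKB_CACHE_SIZE];
	u32	flush_skb_count;		/* heads queued for bulk free */
	void	*flush_skb_cache[NAPI_SKB_CACHE_SIZE];
};

static int slab_allocs;			/* simulated slab allocations */
static struct sk_buff slab_pool[256];	/* stand-in slab backing store */

/* patch 3/5 in miniature: reuse a queued head before hitting the slab */
static struct sk_buff *napi_decache_skb(struct napi_alloc_cache *nc)
{
	if (nc->flush_skb_count)
		return nc->flush_skb_cache[--nc->flush_skb_count];

	return &slab_pool[slab_allocs++];	/* kmem_cache_alloc() stand-in */
}

/* patch 5/5 in miniature: refill skb_cache first, overflow to flush cache */
static void kfree_skb_defer(struct napi_alloc_cache *nc, struct sk_buff *skb)
{
	if (nc->skb_count < NAPI_SKB_CACHE_SIZE) {
		nc->skb_cache[nc->skb_count++] = skb;
		return;
	}

	nc->flush_skb_cache[nc->flush_skb_count++] = skb;
	if (nc->flush_skb_count == NAPI_SKB_CACHE_SIZE)
		nc->flush_skb_count = 0;	/* kmem_cache_free_bulk() stand-in */
}
```

The point of the combined design shows up in the flow: a head freed while skb_cache is full lands in flush_skb_cache, and the next allocation pulls it straight back out without touching the slab at all.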