From patchwork Thu Nov 5 20:12:36 2020
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 322254
X-Mailing-List: netdev@vger.kernel.org
From: Saeed Mahameed
To: Jakub Kicinski
CC: "David S. Miller", Yevgeny Kliteynik, Erez Shitrit, Alex Vesker,
    Mark Bloch, Saeed Mahameed
Subject: [net-next v2 06/12] net/mlx5: DR, Sync chunks only during free
Date: Thu, 5 Nov 2020 12:12:36 -0800
Message-ID: <20201105201242.21716-7-saeedm@nvidia.com>
In-Reply-To: <20201105201242.21716-1-saeedm@nvidia.com>
References: <20201105201242.21716-1-saeedm@nvidia.com>

From: Yevgeny Kliteynik

When freeing chunks, we want to sync the steering so that all the "hot"
memory is written to ICM and all the chunks on the hot_list are actually
destroyed.

When allocating from the pool, there is no need to sync the steering,
since nothing is being freed, and syncing would only hurt performance in
terms of offloaded flows-per-second.
Signed-off-by: Erez Shitrit
Signed-off-by: Yevgeny Kliteynik
Reviewed-by: Alex Vesker
Reviewed-by: Mark Bloch
Signed-off-by: Saeed Mahameed
---
 .../mellanox/mlx5/core/steering/dr_icm_pool.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
index 2c5886b469f7..4d8330aab169 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
@@ -332,10 +332,6 @@ static int dr_icm_handle_buddies_get_mem(struct mlx5dr_icm_pool *pool,
 	bool new_mem = false;
 	int err;
 
-	/* Check if we have chunks that are waiting for sync-ste */
-	if (dr_icm_pool_is_sync_required(pool))
-		dr_icm_pool_sync_all_buddy_pools(pool);
-
 alloc_buddy_mem:
 	/* find the next free place from the buddy list */
 	list_for_each_entry(buddy_mem_pool, &pool->buddy_mem_list, list_node) {
@@ -409,12 +405,18 @@ mlx5dr_icm_alloc_chunk(struct mlx5dr_icm_pool *pool,
 void mlx5dr_icm_free_chunk(struct mlx5dr_icm_chunk *chunk)
 {
 	struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem;
+	struct mlx5dr_icm_pool *pool = buddy->pool;
 
 	/* move the memory to the waiting list AKA "hot" */
-	mutex_lock(&buddy->pool->mutex);
+	mutex_lock(&pool->mutex);
 	list_move_tail(&chunk->chunk_list, &buddy->hot_list);
 	buddy->hot_memory_size += chunk->byte_size;
-	mutex_unlock(&buddy->pool->mutex);
+
+	/* Check if we have chunks that are waiting for sync-ste */
+	if (dr_icm_pool_is_sync_required(pool))
+		dr_icm_pool_sync_all_buddy_pools(pool);
+
+	mutex_unlock(&pool->mutex);
 }
 
 struct mlx5dr_icm_pool *mlx5dr_icm_pool_create(struct mlx5dr_domain *dmn,
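
For context, the change amounts to moving an expensive flush out of the
allocation path and batching it on the free path behind a watermark.
Below is a minimal, hypothetical userspace sketch of that pattern, not
the mlx5 driver code: struct pool/struct chunk, pool_alloc(),
pool_free(), pool_sync(), hot_bytes and SYNC_THRESHOLD are invented
stand-ins for the driver's buddy pools and its
dr_icm_pool_is_sync_required()/dr_icm_pool_sync_all_buddy_pools() pair.

	#include <pthread.h>
	#include <stddef.h>

	struct chunk {
		struct chunk *next;
		size_t byte_size;
	};

	struct pool {
		pthread_mutex_t mutex; /* init with PTHREAD_MUTEX_INITIALIZER */
		struct chunk *free_list; /* chunks ready for reuse */
		struct chunk *hot_list;  /* freed, not yet synced to HW */
		size_t hot_bytes;        /* total size parked on hot_list */
	};

	#define SYNC_THRESHOLD (64 * 1024) /* hypothetical flush watermark */

	/* Flush "hot" chunks; the driver would write ICM and destroy
	 * chunks here, the sketch just recycles them to the free list.
	 * Called with pool->mutex held.
	 */
	static void pool_sync(struct pool *p)
	{
		while (p->hot_list) {
			struct chunk *c = p->hot_list;

			p->hot_list = c->next;
			c->next = p->free_list;
			p->free_list = c;
		}
		p->hot_bytes = 0;
	}

	/* Allocation never syncs: it only pops from the free list. */
	struct chunk *pool_alloc(struct pool *p)
	{
		struct chunk *c;

		pthread_mutex_lock(&p->mutex);
		c = p->free_list;
		if (c)
			p->free_list = c->next;
		pthread_mutex_unlock(&p->mutex);
		return c;
	}

	/* Free parks the chunk on the hot list and syncs only once
	 * enough hot memory has piled up.
	 */
	void pool_free(struct pool *p, struct chunk *c)
	{
		pthread_mutex_lock(&p->mutex);
		c->next = p->hot_list;
		p->hot_list = c;
		p->hot_bytes += c->byte_size;

		if (p->hot_bytes >= SYNC_THRESHOLD)
			pool_sync(p);

		pthread_mutex_unlock(&p->mutex);
	}

The point of the reordering is visible in the sketch: pool_alloc() now
does nothing but pop a free chunk under the lock, so the sync cost is
paid only on free, after hot memory crosses the watermark, which is what
helps the offloaded flows-per-second number the commit message cites.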