From patchwork Tue Sep 15 20:25:18 2020
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 260841
From: saeed@kernel.org
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Moshe Tal, Aya Levin, Saeed Mahameed
Subject: [net-next 01/16] net/mlx5: Fix uninitialized variable warning
Date: Tue, 15 Sep 2020 13:25:18 -0700
Message-Id: <20200915202533.64389-2-saeed@kernel.org>
In-Reply-To: <20200915202533.64389-1-saeed@kernel.org>
References: <20200915202533.64389-1-saeed@kernel.org>

From: Moshe Tal

Add variable initialization to eliminate the warning
"variable may be used uninitialized".
Fixes: 5f29458b77d5 ("net/mlx5e: Support dump callback in TX reporter")
Signed-off-by: Moshe Tal
Reviewed-by: Aya Levin
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en/health.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
index 3dc200bcfabd..69a05da0e3e3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
@@ -242,8 +242,8 @@ static int mlx5e_health_rsc_fmsg_binary(struct devlink_fmsg *fmsg,
 {
         u32 data_size;
+        int err = 0;
         u32 offset;
-        int err;
 
         for (offset = 0; offset < value_len; offset += data_size) {
                 data_size = value_len - offset;
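The class of warning silenced by this patch is easy to reproduce outside the driver. The sketch below is purely illustrative (copy_in_chunks and do_chunk are made-up names, not mlx5 code): if value_len is 0 the loop body never runs, so without the initialization the function would return an indeterminate err.

#include <stddef.h>

static int do_chunk(size_t offset, size_t len)   /* hypothetical helper */
{
        (void)offset; (void)len;
        return 0;
}

static int copy_in_chunks(size_t value_len, size_t max_chunk)
{
        size_t data_size;
        size_t offset;
        int err = 0;                     /* initialized, mirroring the fix above */

        for (offset = 0; offset < value_len; offset += data_size) {
                data_size = value_len - offset;
                if (data_size > max_chunk)
                        data_size = max_chunk;
                err = do_chunk(offset, data_size);
                if (err)
                        break;
        }
        return err;                      /* well-defined even when the loop never ran */
}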
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Eran Ben Elisha , Moshe Shemesh , Saeed Mahameed Subject: [net-next 03/16] net/mlx5: Always use container_of to find mdev pointer from clock struct Date: Tue, 15 Sep 2020 13:25:20 -0700 Message-Id: <20200915202533.64389-4-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200915202533.64389-1-saeed@kernel.org> References: <20200915202533.64389-1-saeed@kernel.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Eran Ben Elisha Clock struct is part of struct mlx5_core_dev. Code was inconsistent, on some cases used container_of and on another used clock->mdev. Align code to use container_of amd remove clock->mdev pointer. While here, fix reverse xmas tree coding style. Signed-off-by: Eran Ben Elisha Reviewed-by: Moshe Shemesh Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/lib/clock.c | 52 +++++++++++-------- include/linux/mlx5/driver.h | 1 - 2 files changed, 29 insertions(+), 24 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c index 2d55b7c22c03..a07aeb97d027 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c @@ -150,28 +150,30 @@ static void mlx5_pps_out(struct work_struct *work) static void mlx5_timestamp_overflow(struct work_struct *work) { struct delayed_work *dwork = to_delayed_work(work); - struct mlx5_clock *clock = container_of(dwork, struct mlx5_clock, - overflow_work); + struct mlx5_core_dev *mdev; + struct mlx5_clock *clock; unsigned long flags; + clock = container_of(dwork, struct mlx5_clock, overflow_work); + mdev = container_of(clock, struct mlx5_core_dev, clock); write_seqlock_irqsave(&clock->lock, flags); timecounter_read(&clock->tc); - mlx5_update_clock_info_page(clock->mdev); + mlx5_update_clock_info_page(mdev); write_sequnlock_irqrestore(&clock->lock, flags); schedule_delayed_work(&clock->overflow_work, clock->overflow_period); } -static int mlx5_ptp_settime(struct ptp_clock_info *ptp, - const struct timespec64 *ts) +static int mlx5_ptp_settime(struct ptp_clock_info *ptp, const struct timespec64 *ts) { - struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, - ptp_info); + struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info); u64 ns = timespec64_to_ns(ts); + struct mlx5_core_dev *mdev; unsigned long flags; + mdev = container_of(clock, struct mlx5_core_dev, clock); write_seqlock_irqsave(&clock->lock, flags); timecounter_init(&clock->tc, &clock->cycles, ns); - mlx5_update_clock_info_page(clock->mdev); + mlx5_update_clock_info_page(mdev); write_sequnlock_irqrestore(&clock->lock, flags); return 0; @@ -180,13 +182,12 @@ static int mlx5_ptp_settime(struct ptp_clock_info *ptp, static int mlx5_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts, struct ptp_system_timestamp *sts) { - struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, - ptp_info); - struct mlx5_core_dev *mdev = container_of(clock, struct mlx5_core_dev, - clock); + struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info); + struct mlx5_core_dev *mdev; unsigned long flags; u64 cycles, ns; + mdev = container_of(clock, struct mlx5_core_dev, clock); write_seqlock_irqsave(&clock->lock, flags); cycles = mlx5_read_internal_timer(mdev, sts); ns = timecounter_cyc2time(&clock->tc, cycles); @@ -199,13 +200,14 @@ static int 
@@ -199,13 +200,14 @@ static int mlx5_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts,
 static int mlx5_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
 {
-        struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock,
-                                                ptp_info);
+        struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
+        struct mlx5_core_dev *mdev;
         unsigned long flags;
 
+        mdev = container_of(clock, struct mlx5_core_dev, clock);
         write_seqlock_irqsave(&clock->lock, flags);
         timecounter_adjtime(&clock->tc, delta);
-        mlx5_update_clock_info_page(clock->mdev);
+        mlx5_update_clock_info_page(mdev);
         write_sequnlock_irqrestore(&clock->lock, flags);
 
         return 0;
@@ -213,12 +215,13 @@ static int mlx5_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
 
 static int mlx5_ptp_adjfreq(struct ptp_clock_info *ptp, s32 delta)
 {
-        u64 adj;
-        u32 diff;
+        struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
+        struct mlx5_core_dev *mdev;
         unsigned long flags;
         int neg_adj = 0;
-        struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock,
-                                                ptp_info);
+        u32 diff;
+        u64 adj;
+
         if (delta < 0) {
                 neg_adj = 1;
@@ -229,11 +232,12 @@ static int mlx5_ptp_adjfreq(struct ptp_clock_info *ptp, s32 delta)
         adj *= delta;
         diff = div_u64(adj, 1000000000ULL);
 
+        mdev = container_of(clock, struct mlx5_core_dev, clock);
         write_seqlock_irqsave(&clock->lock, flags);
         timecounter_read(&clock->tc);
         clock->cycles.mult = neg_adj ? clock->nominal_c_mult - diff :
                                        clock->nominal_c_mult + diff;
-        mlx5_update_clock_info_page(clock->mdev);
+        mlx5_update_clock_info_page(mdev);
         write_sequnlock_irqrestore(&clock->lock, flags);
 
         return 0;
@@ -465,7 +469,8 @@ static int mlx5_query_mtpps_pin_mode(struct mlx5_core_dev *mdev, u8 pin,
 
 static int mlx5_get_pps_pin_mode(struct mlx5_clock *clock, u8 pin)
 {
-        struct mlx5_core_dev *mdev = clock->mdev;
+        struct mlx5_core_dev *mdev = container_of(clock, struct mlx5_core_dev, clock);
+
         u32 out[MLX5_ST_SZ_DW(mtpps_reg)] = {};
         u8 mode;
         int err;
@@ -538,15 +543,17 @@ static int mlx5_pps_event(struct notifier_block *nb,
                           unsigned long type, void *data)
 {
         struct mlx5_clock *clock = mlx5_nb_cof(nb, struct mlx5_clock, pps_nb);
-        struct mlx5_core_dev *mdev = clock->mdev;
         struct ptp_clock_event ptp_event;
         u64 cycles_now, cycles_delta;
         u64 nsec_now, nsec_delta, ns;
         struct mlx5_eqe *eqe = data;
         int pin = eqe->data.pps.pin;
+        struct mlx5_core_dev *mdev;
         struct timespec64 ts;
         unsigned long flags;
 
+        mdev = container_of(clock, struct mlx5_core_dev, clock);
+
         switch (clock->ptp_info.pin_config[pin].func) {
         case PTP_PF_EXTTS:
                 ptp_event.index = pin;
@@ -605,7 +612,6 @@ void mlx5_init_clock(struct mlx5_core_dev *mdev)
                                            clock->cycles.shift);
         clock->nominal_c_mult = clock->cycles.mult;
         clock->cycles.mask = CLOCKSOURCE_MASK(41);
-        clock->mdev = mdev;
 
         timecounter_init(&clock->tc, &clock->cycles,
                          ktime_to_ns(ktime_get_real()));
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index c145de0473bc..8dc3da6e6480 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -643,7 +643,6 @@ struct mlx5_pps {
 };
 
 struct mlx5_clock {
-        struct mlx5_core_dev *mdev;
         struct mlx5_nb pps_nb;
         seqlock_t lock;
         struct cyclecounter cycles;
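As background for the conversion above: container_of recovers a pointer to the enclosing structure from a pointer to one of its embedded members, which is why the back-pointer in struct mlx5_clock becomes redundant. The sketch below is a self-contained, userspace-friendly illustration; core_dev and clock_state are stand-ins, not the mlx5 types, and the macro is the plain offsetof-based definition rather than the kernel's type-checked variant.

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct clock_state {
        unsigned long cycles;
};

struct core_dev {
        int id;
        struct clock_state clock;   /* embedded, like mlx5_clock inside mlx5_core_dev */
};

/* Given only the embedded clock, recover the owning device. */
static struct core_dev *dev_from_clock(struct clock_state *clock)
{
        return container_of(clock, struct core_dev, clock);
}

int main(void)
{
        struct core_dev dev = { .id = 7 };

        printf("%d\n", dev_from_clock(&dev.clock)->id);   /* prints 7 */
        return 0;
}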
From patchwork Tue Sep 15 20:25:22 2020
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 260845
From: saeed@kernel.org
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Eran Ben Elisha, Moshe Shemesh, Saeed Mahameed
Subject: [net-next 05/16] net/mlx5: Release clock lock before scheduling a PPS work
Date: Tue, 15 Sep 2020 13:25:22 -0700
Message-Id: <20200915202533.64389-6-saeed@kernel.org>
In-Reply-To: <20200915202533.64389-1-saeed@kernel.org>
References: <20200915202533.64389-1-saeed@kernel.org>

From: Eran Ben Elisha

Holding the clock lock is not required when scheduling a PPS work.
Signed-off-by: Eran Ben Elisha
Reviewed-by: Moshe Shemesh
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
index b62daf7b9a5c..f8465e42b238 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
@@ -581,8 +581,8 @@ static int mlx5_pps_event(struct notifier_block *nb,
                 cycles_delta = div64_u64(nsec_delta << clock->cycles.shift,
                                          clock->cycles.mult);
                 clock->pps_info.start[pin] = cycles_now + cycles_delta;
-                schedule_work(&clock->pps_info.out_work);
                 write_sequnlock_irqrestore(&clock->lock, flags);
+                schedule_work(&clock->pps_info.out_work);
                 break;
         default:
                 mlx5_core_err(mdev, " Unhandled clock PPS event, func %d\n",
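The two-line move above follows a common pattern: keep only the shared-state update under the lock and queue the deferred work after the lock is released, since queueing does not rely on the lock being held. A minimal userspace analogue is sketched below (a pthread mutex stands in for the seqlock and an empty stub stands in for schedule_work; names are illustrative, not driver code).

#include <pthread.h>

static pthread_mutex_t clock_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long long pps_start;

static void schedule_pps_work(void)      /* stand-in for schedule_work() */
{
}

static void handle_pps_event(unsigned long long cycles_now,
                             unsigned long long cycles_delta)
{
        pthread_mutex_lock(&clock_lock);
        pps_start = cycles_now + cycles_delta;   /* shared state: needs the lock */
        pthread_mutex_unlock(&clock_lock);

        schedule_pps_work();                     /* queueing the work does not */
}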
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Jianbo Liu , Raed Salem , Roi Dayan , Jiri Pirko , Raed Salem , Roi Dayan , Saeed Mahameed Subject: [net-next 09/16] net/mlx5e: Add LAG warning if bond slave is not lag master Date: Tue, 15 Sep 2020 13:25:26 -0700 Message-Id: <20200915202533.64389-10-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200915202533.64389-1-saeed@kernel.org> References: <20200915202533.64389-1-saeed@kernel.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Jianbo Liu LAG offload can't be enabled if the enslaved PF is not lag master, which is indicated by HCA capabilities bit. It is cleared if more than 64 VFs are configured for this PF. Previously, a data structure is created to store lag info, including PFs to be enslaved, then a handler is registered for netdev notifier. However, this initialization is skipped if PF is not lag master. So PF can't handle CHANGEUPPER event from upper bond device. Even worse, PF is enslaved silently, and LAG offload is not activated. Fix this by registering netdev notifier for PFs which are not lag masters. When CHANGEUPPER event is received, and both physical ports (and only them) on the same NIC are about to be enslaved, a warning is returned for user to know it. Signed-off-by: Jianbo Liu Reviewed-by: Raed Salem Reviewed-by: Roi Dayan Reviewed-by: Jiri Pirko Reviewed-by: Raed Salem Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/lag.c | 43 ++++++++++++++----- drivers/net/ethernet/mellanox/mlx5/core/lag.h | 7 +++ .../net/ethernet/mellanox/mlx5/core/lag_mp.c | 2 +- 3 files changed, 41 insertions(+), 11 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c index 191d3d5be46d..33081b24f10a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c @@ -271,7 +271,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev) bool do_bond, roce_lag; int err; - if (!dev0 || !dev1) + if (!mlx5_lag_is_ready(ldev)) return; spin_lock(&lag_lock); @@ -394,6 +394,12 @@ static int mlx5_handle_changeupper_event(struct mlx5_lag *ldev, */ is_in_lag = num_slaves == MLX5_MAX_PORTS && bond_status == 0x3; + if (!mlx5_lag_is_ready(ldev) && is_in_lag) { + NL_SET_ERR_MSG_MOD(info->info.extack, + "Can't activate LAG offload, PF is configured with more than 64 VFs"); + return 0; + } + /* Lag mode must be activebackup or hash. 
@@ -450,6 +456,10 @@ static int mlx5_lag_netdev_event(struct notifier_block *this,
                 return NOTIFY_DONE;
 
         ldev    = container_of(this, struct mlx5_lag, nb);
+
+        if (!mlx5_lag_is_ready(ldev) && event == NETDEV_CHANGELOWERSTATE)
+                return NOTIFY_DONE;
+
         tracker = ldev->tracker;
 
         switch (event) {
@@ -498,14 +508,14 @@ static void mlx5_lag_dev_free(struct mlx5_lag *ldev)
         kfree(ldev);
 }
 
-static void mlx5_lag_dev_add_pf(struct mlx5_lag *ldev,
-                                struct mlx5_core_dev *dev,
-                                struct net_device *netdev)
+static int mlx5_lag_dev_add_pf(struct mlx5_lag *ldev,
+                               struct mlx5_core_dev *dev,
+                               struct net_device *netdev)
 {
         unsigned int fn = PCI_FUNC(dev->pdev->devfn);
 
         if (fn >= MLX5_MAX_PORTS)
-                return;
+                return -EPERM;
 
         spin_lock(&lag_lock);
         ldev->pf[fn].dev    = dev;
@@ -516,6 +526,8 @@ static void mlx5_lag_dev_add_pf(struct mlx5_lag *ldev,
         dev->priv.lag = ldev;
 
         spin_unlock(&lag_lock);
+
+        return fn;
 }
 
 static void mlx5_lag_dev_remove_pf(struct mlx5_lag *ldev,
@@ -542,11 +554,9 @@ void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev)
 {
         struct mlx5_lag *ldev = NULL;
         struct mlx5_core_dev *tmp_dev;
-        int err;
+        int i, err;
 
-        if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
-            !MLX5_CAP_GEN(dev, lag_master) ||
-            (MLX5_CAP_GEN(dev, num_lag_ports) != MLX5_MAX_PORTS))
+        if (!MLX5_CAP_GEN(dev, vport_group_manager))
                 return;
 
         tmp_dev = mlx5_get_next_phys_dev(dev);
@@ -561,7 +571,18 @@ void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev)
                 }
         }
 
-        mlx5_lag_dev_add_pf(ldev, dev, netdev);
+        if (mlx5_lag_dev_add_pf(ldev, dev, netdev) < 0)
+                return;
+
+        for (i = 0; i < MLX5_MAX_PORTS; i++) {
+                tmp_dev = ldev->pf[i].dev;
+                if (!tmp_dev || !MLX5_CAP_GEN(tmp_dev, lag_master) ||
+                    MLX5_CAP_GEN(tmp_dev, num_lag_ports) != MLX5_MAX_PORTS)
+                        break;
+        }
+
+        if (i >= MLX5_MAX_PORTS)
+                ldev->flags |= MLX5_LAG_FLAG_READY;
 
         if (!ldev->nb.notifier_call) {
                 ldev->nb.notifier_call = mlx5_lag_netdev_event;
@@ -592,6 +613,8 @@ void mlx5_lag_remove(struct mlx5_core_dev *dev)
 
         mlx5_lag_dev_remove_pf(ldev, dev);
 
+        ldev->flags &= ~MLX5_LAG_FLAG_READY;
+
         for (i = 0; i < MLX5_MAX_PORTS; i++)
                 if (ldev->pf[i].dev)
                         break;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.h b/drivers/net/ethernet/mellanox/mlx5/core/lag.h
index f1068aac6406..8d8cf2d0bc6d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.h
@@ -16,6 +16,7 @@ enum {
         MLX5_LAG_FLAG_ROCE = 1 << 0,
         MLX5_LAG_FLAG_SRIOV = 1 << 1,
         MLX5_LAG_FLAG_MULTIPATH = 1 << 2,
+        MLX5_LAG_FLAG_READY = 1 << 3,
 };
 
 #define MLX5_LAG_MODE_FLAGS (MLX5_LAG_FLAG_ROCE | MLX5_LAG_FLAG_SRIOV |\
@@ -59,6 +60,12 @@ __mlx5_lag_is_active(struct mlx5_lag *ldev)
         return !!(ldev->flags & MLX5_LAG_MODE_FLAGS);
 }
 
+static inline bool
+mlx5_lag_is_ready(struct mlx5_lag *ldev)
+{
+        return ldev->flags & MLX5_LAG_FLAG_READY;
+}
+
 void mlx5_modify_lag(struct mlx5_lag *ldev,
                      struct lag_tracker *tracker);
 int mlx5_activate_lag(struct mlx5_lag *ldev,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
index d192d25cff33..88e58ac902de 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
@@ -11,7 +11,7 @@
 
 static bool mlx5_lag_multipath_check_prereq(struct mlx5_lag *ldev)
 {
-        if (!ldev->pf[MLX5_LAG_P1].dev || !ldev->pf[MLX5_LAG_P2].dev)
+        if (!mlx5_lag_is_ready(ldev))
                 return false;
 
         return mlx5_esw_multipath_prereq(ldev->pf[MLX5_LAG_P1].dev,
                                          ldev->pf[MLX5_LAG_P2].dev);
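For context on the CHANGEUPPER handling above: the tracker keeps one bit per PF in bond_status, so with MLX5_MAX_PORTS == 2 a bond that enslaves exactly both ports of the NIC shows num_slaves == 2 and bond_status == 0x3; that is the condition under which the new extack warning fires when the READY flag is not set. A stand-alone illustration of the bitmask check (MAX_PORTS is a stand-in for MLX5_MAX_PORTS, not driver code):

#include <stdbool.h>

#define MAX_PORTS 2   /* stand-in for MLX5_MAX_PORTS */

/* True when both (and only) physical ports of the NIC are enslaved:
 * every slave contributed one bit, and only bits 0 and 1 are set. */
static bool bond_has_both_ports(int num_slaves, unsigned int bond_status)
{
        return num_slaves == MAX_PORTS && bond_status == 0x3;
}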
From patchwork Tue Sep 15 20:25:28 2020
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 260842
From: saeed@kernel.org
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Vu Pham, Mark Bloch, Saeed Mahameed
Subject: [net-next 11/16] net/mlx5: E-Switch, Dedicated metadata for uplink vport
Date: Tue, 15 Sep 2020 13:25:28 -0700
Message-Id: <20200915202533.64389-12-saeed@kernel.org>
In-Reply-To: <20200915202533.64389-1-saeed@kernel.org>
References: <20200915202533.64389-1-saeed@kernel.org>

From: Vu Pham

The uplink vport must have dedicated metadata, with the vhca_id being part
of the metadata.
Fixes: 133dcfc577ea ("net/mlx5: E-Switch, Alloc and free unique metadata for match")
Signed-off-by: Vu Pham
Reviewed-by: Mark Bloch
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 4cbadb15297c..9c740ce73085 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1908,9 +1908,6 @@ void mlx5_esw_match_metadata_free(struct mlx5_eswitch *esw, u32 metadata)
 static int esw_offloads_vport_metadata_setup(struct mlx5_eswitch *esw,
                                              struct mlx5_vport *vport)
 {
-        if (vport->vport == MLX5_VPORT_UPLINK)
-                return 0;
-
         vport->default_metadata = mlx5_esw_match_metadata_alloc(esw);
         vport->metadata = vport->default_metadata;
         return vport->metadata ? 0 : -ENOSPC;
@@ -1919,7 +1916,7 @@ static int esw_offloads_vport_metadata_setup(struct mlx5_eswitch *esw,
 static void esw_offloads_vport_metadata_cleanup(struct mlx5_eswitch *esw,
                                                 struct mlx5_vport *vport)
 {
-        if (vport->vport == MLX5_VPORT_UPLINK || !vport->default_metadata)
+        if (!vport->default_metadata)
                 return;
 
         WARN_ON(vport->metadata != vport->default_metadata);
From patchwork Tue Sep 15 20:25:29 2020
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 260844
From: saeed@kernel.org
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Vu Pham, Bodong Wang, Roi Dayan, Mark Bloch, Saeed Mahameed
Subject: [net-next 12/16] net/mlx5: E-Switch, Setup all vports' metadata to support peer miss rule
Date: Tue, 15 Sep 2020 13:25:29 -0700
Message-Id: <20200915202533.64389-13-saeed@kernel.org>
In-Reply-To: <20200915202533.64389-1-saeed@kernel.org>
References: <20200915202533.64389-1-saeed@kernel.org>

From: Vu Pham

In a merged eswitch configuration, the peer miss rule is set up for all
vports. If metadata is enabled, the peer miss rule is configured with
metadata matching instead of source port matching; however, some vports
that have not yet been enabled do not have default_metadata set up, so
their default_metadata will be zero.

Hence, set up/clean up default metadata for all vports when the eswitch
moves in/out of offloads mode.

Fixes: 133dcfc577ea ("net/mlx5: E-Switch, Alloc and free unique metadata for match")
Signed-off-by: Vu Pham
Reviewed-by: Bodong Wang
Reviewed-by: Roi Dayan
Reviewed-by: Mark Bloch
Signed-off-by: Saeed Mahameed
---
 .../mellanox/mlx5/core/eswitch_offloads.c     | 51 +++++++++++++----
 1 file changed, 42 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 9c740ce73085..3321bb1f188d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1923,19 +1923,49 @@ static void esw_offloads_vport_metadata_cleanup(struct mlx5_eswitch *esw,
         mlx5_esw_match_metadata_free(esw, vport->default_metadata);
 }
 
+static void esw_offloads_metadata_uninit(struct mlx5_eswitch *esw)
+{
+        struct mlx5_vport *vport;
+        int i;
+
+        if (!mlx5_eswitch_vport_match_metadata_enabled(esw))
+                return;
+
+        mlx5_esw_for_all_vports_reverse(esw, i, vport)
+                esw_offloads_vport_metadata_cleanup(esw, vport);
+}
+
+static int esw_offloads_metadata_init(struct mlx5_eswitch *esw)
+{
+        struct mlx5_vport *vport;
+        int err;
+        int i;
+
+        if (!mlx5_eswitch_vport_match_metadata_enabled(esw))
+                return 0;
+
+        mlx5_esw_for_all_vports(esw, i, vport) {
+                err = esw_offloads_vport_metadata_setup(esw, vport);
+                if (err)
+                        goto metadata_err;
+        }
+
+        return 0;
+
+metadata_err:
+        esw_offloads_metadata_uninit(esw);
+        return err;
+}
+
 int
 esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
                                      struct mlx5_vport *vport)
 {
         int err;
 
-        err = esw_offloads_vport_metadata_setup(esw, vport);
-        if (err)
-                goto metadata_err;
-
         err = esw_acl_ingress_ofld_setup(esw, vport);
         if (err)
-                goto ingress_err;
+                return err;
 
         if (mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
                 err = esw_acl_egress_ofld_setup(esw, vport);
@@ -1947,9 +1977,6 @@ esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
 egress_err:
         esw_acl_ingress_ofld_cleanup(esw, vport);
-ingress_err:
-        esw_offloads_vport_metadata_cleanup(esw, vport);
-metadata_err:
         return err;
 }
 
@@ -1959,7 +1986,6 @@ esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
 {
         esw_acl_egress_ofld_cleanup(vport);
         esw_acl_ingress_ofld_cleanup(esw, vport);
-        esw_offloads_vport_metadata_cleanup(esw, vport);
 }
 
 static int esw_create_uplink_offloads_acl_tables(struct mlx5_eswitch *esw)
@@ -2138,6 +2164,10 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
         if (esw_use_vport_metadata(esw))
                 esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
 
+        err = esw_offloads_metadata_init(esw);
+        if (err)
+                goto err_metadata;
+
         err = esw_set_passing_vport_metadata(esw, true);
         if (err)
                 goto err_vport_metadata;
@@ -2170,6 +2200,8 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
 err_steering_init:
         esw_set_passing_vport_metadata(esw, false);
 err_vport_metadata:
+        esw_offloads_metadata_uninit(esw);
+err_metadata:
         esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
         mlx5_rdma_disable_roce(esw->dev);
         mutex_destroy(&esw->offloads.termtbl_mutex);
@@ -2204,6 +2236,7 @@ void esw_offloads_disable(struct mlx5_eswitch *esw)
         esw_offloads_unload_rep(esw, MLX5_VPORT_UPLINK);
         esw_set_passing_vport_metadata(esw, false);
         esw_offloads_steering_cleanup(esw);
+        esw_offloads_metadata_uninit(esw);
         esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
         mlx5_rdma_disable_roce(esw->dev);
         mutex_destroy(&esw->offloads.termtbl_mutex);
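The new esw_offloads_metadata_init()/_uninit() pair above follows the usual kernel init/unwind idiom: set per-vport state up in order and, on failure, tear down whatever was already set up, walking back in reverse. A generic, self-contained sketch of that idiom follows; vport_metadata_setup/_cleanup and NUM_VPORTS are illustrative stand-ins (the driver iterates eswitch vports with its own helpers, and its cleanup also tolerates vports that were never set up).

#define NUM_VPORTS 4                      /* illustrative only */

static int  vport_metadata_setup(int vport)   { (void)vport; return 0; }
static void vport_metadata_cleanup(int vport) { (void)vport; }

static int metadata_init_all(void)
{
        int err;
        int i;

        for (i = 0; i < NUM_VPORTS; i++) {
                err = vport_metadata_setup(i);
                if (err)
                        goto unwind;
        }
        return 0;

unwind:
        while (--i >= 0)                  /* undo the vports already set up, newest first */
                vport_metadata_cleanup(i);
        return err;
}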
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Ofer Levi , Tariq Toukan , Saeed Mahameed Subject: [net-next 16/16] net/mlx5e: Add CQE compression support for multi-strides packets Date: Tue, 15 Sep 2020 13:25:33 -0700 Message-Id: <20200915202533.64389-17-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200915202533.64389-1-saeed@kernel.org> References: <20200915202533.64389-1-saeed@kernel.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Ofer Levi Add CQE compression support for completions of packets that span multiple strides in a Striding RQ, per the HW capability. In our memory model, we use small strides (256B as of today) for the non-linear SKB mode. This feature allows CQE compression to work also for multiple strides packets. In this case decompressing the mini CQE array will use stride index provided by HW as part of the mini CQE. Before this feature, compression was possible only for single-strided packets, i.e. for packets of size up to 256 bytes when in non-linear mode, and the index was maintained by SW. This feature is supported for ConnectX-5 and above. Feature performance test: This was whitebox-tested, we reduced the PCI speed from 125Gb/s to 62.5Gb/s to overload pci and manipulated mlx5 driver to drop incoming packets before building the SKB to achieve low cpu utilization. Outcome is low cpu utilization and bottleneck on pci only. Test setup: Server: Intel(R) Xeon(R) Silver 4108 CPU @ 1.80GHz server, 32 cores NIC: ConnectX-6 DX. Sender side generates 300 byte packets at full pci bandwidth. Receiver side configuration: Single channel, one cpu processing with one ring allocated. Cpu utilization is ~20% while pci bandwidth is fully utilized. For the generated traffic and interface MTU of 4500B (to activate the non-linear SKB mode), packet rate improvement is about 19% from ~17.6Mpps to ~21Mpps. Without this feature, counters show no CQE compression blocks for this setup, while with the feature, counters show ~20.7Mpps compressed CQEs in ~500K compression blocks. 
Signed-off-by: Ofer Levi
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h      |  1 +
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 12 +++++++++++-
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c   | 11 ++++++++++-
 include/linux/mlx5/device.h                       |  3 ++-
 4 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 4f33658da25a..95aab8b429cf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -265,6 +265,7 @@ enum {
         MLX5E_RQ_STATE_NO_CSUM_COMPLETE,
         MLX5E_RQ_STATE_CSUM_FULL, /* cqe_csum_full hw bit is set */
         MLX5E_RQ_STATE_FPGA_TLS, /* FPGA TLS enabled */
+        MLX5E_RQ_STATE_MINI_CQE_HW_STRIDX /* set when mini_cqe_resp_stride_index cap is used */
 };
 
 struct mlx5e_cq {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 26834625556d..b057a6c3a6d5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -848,6 +848,13 @@ int mlx5e_open_rq(struct mlx5e_channel *c, struct mlx5e_params *params,
         if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_NO_CSUM_COMPLETE) || c->xdp)
                 __set_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &c->rq.state);
 
+        /* For CQE compression on striding RQ, use stride index provided by
+         * HW if capability is supported.
+         */
+        if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_STRIDING_RQ) &&
+            MLX5_CAP_GEN(c->mdev, mini_cqe_resp_stride_index))
+                __set_bit(MLX5E_RQ_STATE_MINI_CQE_HW_STRIDX, &c->rq.state);
+
         return 0;
 
 err_destroy_rq:
@@ -2182,6 +2189,7 @@ void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv,
                              struct mlx5e_cq_param *param)
 {
         struct mlx5_core_dev *mdev = priv->mdev;
+        bool hw_stridx = false;
         void *cqc = param->cqc;
         u8 log_cq_size;
 
@@ -2189,6 +2197,7 @@ void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv,
         case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
                 log_cq_size = mlx5e_mpwqe_get_log_rq_size(params, xsk) +
                         mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk);
+                hw_stridx = MLX5_CAP_GEN(mdev, mini_cqe_resp_stride_index);
                 break;
         default: /* MLX5_WQ_TYPE_CYCLIC */
                 log_cq_size = params->log_rq_mtu_frames;
@@ -2196,7 +2205,8 @@ void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv,
 
         MLX5_SET(cqc, cqc, log_cq_size, log_cq_size);
         if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)) {
-                MLX5_SET(cqc, cqc, mini_cqe_res_format, MLX5_CQE_FORMAT_CSUM);
+                MLX5_SET(cqc, cqc, mini_cqe_res_format, hw_stridx ?
+                         MLX5_CQE_FORMAT_CSUM_STRIDX : MLX5_CQE_FORMAT_CSUM);
                 MLX5_SET(cqc, cqc, cqe_comp_en, 1);
         }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 7aab69e991a5..c9c82b14060a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -137,8 +137,17 @@ static inline void mlx5e_decompress_cqe(struct mlx5e_rq *rq,
         title->check_sum    = mini_cqe->checksum;
         title->op_own      &= 0xf0;
         title->op_own      |= 0x01 & (cqcc >> wq->fbc.log_sz);
-        title->wqe_counter  = cpu_to_be16(cqd->wqe_counter);
 
+        /* state bit set implies linked-list striding RQ wq type and
+         * HW stride index capability supported
+         */
+        if (test_bit(MLX5E_RQ_STATE_MINI_CQE_HW_STRIDX, &rq->state)) {
+                title->wqe_counter = mini_cqe->stridx;
+                return;
+        }
+
+        /* HW stride index capability not supported */
+        title->wqe_counter = cpu_to_be16(cqd->wqe_counter);
         if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ)
                 cqd->wqe_counter += mpwrq_get_cqe_consumed_strides(title);
         else
diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
index 4d3376e20f5e..81ca5989009b 100644
--- a/include/linux/mlx5/device.h
+++ b/include/linux/mlx5/device.h
@@ -816,7 +816,7 @@ struct mlx5_mini_cqe8 {
                 __be32 rx_hash_result;
                 struct {
                         __be16 checksum;
-                        __be16 rsvd;
+                        __be16 stridx;
                 };
                 struct {
                         __be16 wqe_counter;
@@ -836,6 +836,7 @@ enum {
 
 enum {
         MLX5_CQE_FORMAT_CSUM = 0x1,
+        MLX5_CQE_FORMAT_CSUM_STRIDX = 0x3,
 };
 
 #define MLX5_MINI_CQE_ARRAY_SIZE 8
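To make the en_rx.c change above easier to follow: with CQE compression, a completion session carries one full "title" CQE followed by an array of mini CQEs, and decompression rebuilds each full CQE from the title plus the per-packet fields in the mini CQE. The sketch below is a simplified, stand-alone illustration of where the stride index now comes from; the struct layouts and names are stand-ins, not the mlx5 definitions (which use big-endian fields inside unions).

#include <stdint.h>

struct mini_cqe { uint16_t checksum; uint16_t stridx; };       /* simplified */
struct cqe      { uint16_t checksum; uint16_t wqe_counter; };  /* simplified */

/* Expand one mini CQE into a full CQE, starting from a copy of the title.
 * With the mini_cqe_resp_stride_index capability (CSUM_STRIDX format) the
 * stride index comes from HW; otherwise SW keeps a running counter. */
static struct cqe decompress_one(const struct cqe *title,
                                 const struct mini_cqe *mini,
                                 int hw_stridx, uint16_t *sw_counter,
                                 uint16_t consumed_strides)
{
        struct cqe out = *title;             /* shared fields come from the title CQE */

        out.checksum = mini->checksum;       /* per-packet field from the mini CQE */
        if (hw_stridx) {
                out.wqe_counter = mini->stridx;
        } else {
                out.wqe_counter = *sw_counter;
                *sw_counter += consumed_strides;  /* advance the SW-maintained index */
        }
        return out;
}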