From patchwork Tue Mar 31 08:57:53 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 228687
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Aya Levin, Tariq Toukan,
    Saeed Mahameed, "David S. Miller"
Subject: [PATCH 5.5 059/170] net/mlx5e: Fix ICOSQ recovery flow with Striding RQ
Date: Tue, 31 Mar 2020 10:57:53 +0200
Message-Id: <20200331085430.831549978@linuxfoundation.org>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200331085423.990189598@linuxfoundation.org>
References: <20200331085423.990189598@linuxfoundation.org>
User-Agent: quilt/0.66
Sender: stable-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Aya Levin

[ Upstream commit e239c6d686e1c37fb2ab143162dfb57471a8643f ]

In striding RQ mode, the buffers of an RX WQE are first prepared and
posted to the HW using UMR WQEs via the ICOSQ. We maintain the state of
these in-progress WQEs in the RQ SW struct.

In the flow of ICOSQ recovery, the corresponding RQ is not in error
state, hence:
- The buffers of the in-progress WQEs must be released and the RQ
  metadata should reflect it.
- Existing RX WQEs in the RQ should not be affected.

For this, wrap the dealloc of the in-progress WQEs in a function, and
use it in the ICOSQ recovery flow instead of mlx5e_free_rx_descs().
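In short, ICOSQ recovery must release only the UMR WQEs that are still in
progress (they start at wq->head) and reset the RQ's UMR bookkeeping, while
leaving already-posted RX WQEs untouched. A condensed sketch of that cleanup,
lifted from the new helper in the diff below (all identifiers come from the
mlx5e driver; the striding-RQ check and enclosing function are omitted here):

	/* Walk only the outstanding UMR WQEs, starting at wq->head. */
	head = wq->head;
	for (i = 0; i < rq->mpwqe.umr_in_progress; i++) {
		rq->dealloc_wqe(rq, head);
		head = mlx5_wq_ll_get_wqe_next_ix(wq, head);
	}
	/* Reset the SW state so the RQ no longer tracks in-progress UMRs. */
	rq->mpwqe.actual_wq_head = wq->head;
	rq->mpwqe.umr_in_progress = 0;
	rq->mpwqe.umr_completed = 0;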
Fixes: be5323c8379f ("net/mlx5e: Report and recover from CQE error on ICOSQ")
Signed-off-by: Aya Levin
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
Signed-off-by: David S. Miller
Signed-off-by: Greg Kroah-Hartman
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h              |    1 
 drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c  |    2 
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c         |   31 +++++++++++----
 3 files changed, 26 insertions(+), 8 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -1059,6 +1059,7 @@ int mlx5e_modify_rq_state(struct mlx5e_r
 void mlx5e_activate_rq(struct mlx5e_rq *rq);
 void mlx5e_deactivate_rq(struct mlx5e_rq *rq);
 void mlx5e_free_rx_descs(struct mlx5e_rq *rq);
+void mlx5e_free_rx_in_progress_descs(struct mlx5e_rq *rq);
 
 void mlx5e_activate_icosq(struct mlx5e_icosq *icosq);
 void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq);
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
@@ -90,7 +90,7 @@ static int mlx5e_rx_reporter_err_icosq_c
 		goto out;
 
 	mlx5e_reset_icosq_cc_pc(icosq);
-	mlx5e_free_rx_descs(rq);
+	mlx5e_free_rx_in_progress_descs(rq);
 	clear_bit(MLX5E_SQ_STATE_RECOVERING, &icosq->state);
 	mlx5e_activate_icosq(icosq);
 	mlx5e_activate_rq(rq);
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -822,6 +822,29 @@ int mlx5e_wait_for_min_rx_wqes(struct ml
 	return -ETIMEDOUT;
 }
 
+void mlx5e_free_rx_in_progress_descs(struct mlx5e_rq *rq)
+{
+	struct mlx5_wq_ll *wq;
+	u16 head;
+	int i;
+
+	if (rq->wq_type != MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ)
+		return;
+
+	wq = &rq->mpwqe.wq;
+	head = wq->head;
+
+	/* Outstanding UMR WQEs (in progress) start at wq->head */
+	for (i = 0; i < rq->mpwqe.umr_in_progress; i++) {
+		rq->dealloc_wqe(rq, head);
+		head = mlx5_wq_ll_get_wqe_next_ix(wq, head);
+	}
+
+	rq->mpwqe.actual_wq_head = wq->head;
+	rq->mpwqe.umr_in_progress = 0;
+	rq->mpwqe.umr_completed = 0;
+}
+
 void mlx5e_free_rx_descs(struct mlx5e_rq *rq)
 {
 	__be16 wqe_ix_be;
@@ -829,14 +852,8 @@ void mlx5e_free_rx_descs(struct mlx5e_rq
 
 	if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) {
 		struct mlx5_wq_ll *wq = &rq->mpwqe.wq;
-		u16 head = wq->head;
-		int i;
 
-		/* Outstanding UMR WQEs (in progress) start at wq->head */
-		for (i = 0; i < rq->mpwqe.umr_in_progress; i++) {
-			rq->dealloc_wqe(rq, head);
-			head = mlx5_wq_ll_get_wqe_next_ix(wq, head);
-		}
+		mlx5e_free_rx_in_progress_descs(rq);
 
 		while (!mlx5_wq_ll_is_empty(wq)) {
 			struct mlx5e_rx_wqe_ll *wqe;