From patchwork Mon Jun 21 13:48:44 2021
X-Patchwork-Submitter: Coiby Xu
X-Patchwork-Id: 464896
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 01/19] staging: qlge: fix incorrect truesize accounting
Date: Mon, 21 Jun 2021 21:48:44 +0800
Message-Id: <20210621134902.83587-2-coiby.xu@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To:
 <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Commit 7c734359d3504c869132166d159c7f0649f0ab34 ("qlge: Size RX buffers
based on MTU") introduced the page_chunk structure. Since each fragment now
pins a chunk of a qdev->lbq_buf_size buffer, qdev->lbq_buf_size, rather
than the fragment length, should be added to skb->truesize after
__skb_fill_page_desc().

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/TODO        |  2 --
 drivers/staging/qlge/qlge_main.c | 10 +++++-----
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO
index c76394b9451b..449d7dca478b 100644
--- a/drivers/staging/qlge/TODO
+++ b/drivers/staging/qlge/TODO
@@ -4,8 +4,6 @@
 ql_build_rx_skb(). That function is now used exclusively to handle packets
 that underwent header splitting but it still contains code to handle non
 split cases.
-* truesize accounting is incorrect (ex: a 9000B frame has skb->truesize 10280
-  while containing two frags of order-1 allocations, ie. >16K)
 * while in that area, using two 8k buffers to store one 9k frame is a poor
   choice of buffer size.
 * in the "chain of large buffers" case, the driver uses an skb allocated with

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 19a02e958865..6dd69b689a58 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -1446,7 +1446,7 @@ static void qlge_process_mac_rx_gro_page(struct qlge_adapter *qdev,
 
 	skb->len += length;
 	skb->data_len += length;
-	skb->truesize += length;
+	skb->truesize += qdev->lbq_buf_size;
 	skb_shinfo(skb)->nr_frags++;
 
 	rx_ring->rx_packets++;
@@ -1507,7 +1507,7 @@ static void qlge_process_mac_rx_page(struct qlge_adapter *qdev,
 			   lbq_desc->p.pg_chunk.offset + hlen, length - hlen);
 	skb->len += length - hlen;
 	skb->data_len += length - hlen;
-	skb->truesize += length - hlen;
+	skb->truesize += qdev->lbq_buf_size;
 
 	rx_ring->rx_packets++;
 	rx_ring->rx_bytes += skb->len;
@@ -1757,7 +1757,7 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 				   lbq_desc->p.pg_chunk.offset, length);
 		skb->len += length;
 		skb->data_len += length;
-		skb->truesize += length;
+		skb->truesize += qdev->lbq_buf_size;
 	} else {
 		/*
 		 * The headers and data are in a single large buffer. We
@@ -1783,7 +1783,7 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 				   length);
 		skb->len += length;
 		skb->data_len += length;
-		skb->truesize += length;
+		skb->truesize += qdev->lbq_buf_size;
 
 		qlge_update_mac_hdr_len(qdev, ib_mac_rsp,
 					lbq_desc->p.pg_chunk.va, &hlen);
@@ -1835,7 +1835,7 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 					   lbq_desc->p.pg_chunk.offset, size);
 			skb->len += size;
 			skb->data_len += size;
-			skb->truesize += size;
+			skb->truesize += qdev->lbq_buf_size;
 			length -= size;
 			i++;
 		} while (length > 0);

From patchwork Mon Jun 21 13:48:45 2021
X-Patchwork-Submitter: Coiby Xu
X-Patchwork-Id: 465635
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 02/19] staging: qlge: change LARGE_BUFFER_MAX_SIZE to 4096
Date: Mon, 21 Jun 2021 21:48:45 +0800
Message-Id: <20210621134902.83587-3-coiby.xu@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Setting LARGE_BUFFER_MAX_SIZE to 4096 makes better use of memory. This
choice is consistent with ixgbe and e1000:
- ixgbe sets the rx buffer's page order to 0 unless FCoE is enabled
- e1000 allocates a page for a jumbo receive buffer

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/TODO   | 2 --
 drivers/staging/qlge/qlge.h | 2 +-
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO
index 449d7dca478b..0e26fac1ddc5 100644
--- a/drivers/staging/qlge/TODO
+++ b/drivers/staging/qlge/TODO
@@ -4,8 +4,6 @@
 ql_build_rx_skb(). That function is now used exclusively to handle packets
 that underwent header splitting but it still contains code to handle non
 split cases.
-* while in that area, using two 8k buffers to store one 9k frame is a poor
-  choice of buffer size.
 * in the "chain of large buffers" case, the driver uses an skb allocated with
   head room but only puts data in the frags.
 * rename "rx" queues to "completion" queues. Calling tx completion queues "rx

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index 55e0ad759250..f54d38606b78 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -52,7 +52,7 @@
 #define RX_RING_SHADOW_SPACE	(sizeof(u64) + \
 		MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN) * sizeof(u64) + \
 		MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN) * sizeof(u64))
-#define LARGE_BUFFER_MAX_SIZE 8192
+#define LARGE_BUFFER_MAX_SIZE 4096
 #define LARGE_BUFFER_MIN_SIZE 2048
 
 #define MAX_CQ 128

From patchwork Mon Jun 21 13:48:46 2021
X-Patchwork-Submitter: Coiby Xu
X-Patchwork-Id: 464895
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 03/19] staging: qlge: alloc skb with only enough room for header
 when data is put in the fragments
Date: Mon, 21 Jun 2021 21:48:46 +0800
Message-Id: <20210621134902.83587-4-coiby.xu@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

The data is put in the page fragments, so there is no need to allocate an
skb with an unnecessarily large data buffer.

Suggested-by: Benjamin Poirier
Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/TODO        | 2 --
 drivers/staging/qlge/qlge_main.c | 4 ++--
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO
index 0e26fac1ddc5..49cb09fc2be4 100644
--- a/drivers/staging/qlge/TODO
+++ b/drivers/staging/qlge/TODO
@@ -4,8 +4,6 @@
 ql_build_rx_skb(). That function is now used exclusively to handle packets
 that underwent header splitting but it still contains code to handle non
 split cases.
-* in the "chain of large buffers" case, the driver uses an skb allocated with
-  head room but only puts data in the frags.
 * rename "rx" queues to "completion" queues. Calling tx completion queues "rx
   queues" is confusing.
 * struct rx_ring is used for rx and tx completions, with some members relevant

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 6dd69b689a58..c91969b01bd5 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -1471,7 +1471,7 @@ static void qlge_process_mac_rx_page(struct qlge_adapter *qdev,
 	struct napi_struct *napi = &rx_ring->napi;
 	size_t hlen = ETH_HLEN;
 
-	skb = netdev_alloc_skb(ndev, length);
+	skb = napi_alloc_skb(&rx_ring->napi, SMALL_BUFFER_SIZE);
 	if (!skb) {
 		rx_ring->rx_dropped++;
 		put_page(lbq_desc->p.pg_chunk.page);
@@ -1765,7 +1765,7 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 		 * jumbo mtu on a non-TCP/UDP frame.
 		 */
 		lbq_desc = qlge_get_curr_lchunk(qdev, rx_ring);
-		skb = netdev_alloc_skb(qdev->ndev, length);
+		skb = napi_alloc_skb(&rx_ring->napi, SMALL_BUFFER_SIZE);
 		if (!skb) {
 			netif_printk(qdev, probe, KERN_DEBUG, qdev->ndev,
 				     "No skb available, drop the packet.\n");

From patchwork Mon Jun 21 13:48:47 2021
X-Patchwork-Submitter: Coiby Xu
X-Patchwork-Id: 465634
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 04/19] staging: qlge: add qlge_* prefix to avoid namespace clashes
Date: Mon, 21 Jun 2021 21:48:47 +0800
Message-Id: <20210621134902.83587-5-coiby.xu@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

This patch extends commit f8c047be540197ec69cde33e00e82d23961459ea
("staging: qlge: use qlge_* prefix to avoid namespace clashes with other
qlogic drivers") to add the qlge_ prefix to the rx_ring and tx_ring
related structures.
Suggested-by: Benjamin Poirier
Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/qlge.h         |  40 ++++-----
 drivers/staging/qlge/qlge_ethtool.c |   4 +-
 drivers/staging/qlge/qlge_main.c    | 124 ++++++++++++++--------------
 3 files changed, 84 insertions(+), 84 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index f54d38606b78..09d5878b95f7 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -869,17 +869,17 @@ enum {
 };
 
 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
-#define SMALL_BUFFER_SIZE 256
-#define SMALL_BUF_MAP_SIZE SMALL_BUFFER_SIZE
+#define QLGE_SMALL_BUFFER_SIZE 256
+#define QLGE_SMALL_BUF_MAP_SIZE QLGE_SMALL_BUFFER_SIZE
 #define SPLT_SETTING FSC_DBRST_1024
 #define SPLT_LEN 0
 #define QLGE_SB_PAD 0
 #else
-#define SMALL_BUFFER_SIZE 512
-#define SMALL_BUF_MAP_SIZE (SMALL_BUFFER_SIZE / 2)
+#define QLGE_SMALL_BUFFER_SIZE 512
+#define QLGE_SMALL_BUF_MAP_SIZE (QLGE_SMALL_BUFFER_SIZE / 2)
 #define SPLT_SETTING FSC_SH
 #define SPLT_LEN (SPLT_HDR_EP | \
-		  min(SMALL_BUF_MAP_SIZE, 1023))
+		  min(QLGE_SMALL_BUF_MAP_SIZE, 1023))
 #define QLGE_SB_PAD 32
 #endif
@@ -1063,7 +1063,7 @@ struct tx_doorbell_context {
 };
 
 /* DATA STRUCTURES SHARED WITH HARDWARE. */
-struct tx_buf_desc {
+struct qlge_tx_buf_desc {
 	__le64 addr;
 	__le32 len;
 #define TX_DESC_LEN_MASK 0x000fffff
@@ -1101,7 +1101,7 @@ struct qlge_ob_mac_iocb_req {
 	__le32 reserved3;
 	__le16 vlan_tci;
 	__le16 reserved4;
-	struct tx_buf_desc tbd[TX_DESC_PER_IOCB];
+	struct qlge_tx_buf_desc tbd[TX_DESC_PER_IOCB];
 } __packed;
 
 struct qlge_ob_mac_iocb_rsp {
@@ -1146,7 +1146,7 @@ struct qlge_ob_mac_tso_iocb_req {
 #define OB_MAC_TRANSPORT_HDR_SHIFT 6
 	__le16 vlan_tci;
 	__le16 mss;
-	struct tx_buf_desc tbd[TX_DESC_PER_IOCB];
+	struct qlge_tx_buf_desc tbd[TX_DESC_PER_IOCB];
 } __packed;
 
 struct qlge_ob_mac_tso_iocb_rsp {
@@ -1347,7 +1347,7 @@ struct ricb {
 
 /* SOFTWARE/DRIVER DATA STRUCTURES. */
 
 struct qlge_oal {
-	struct tx_buf_desc oal[TX_DESC_PER_OAL];
+	struct qlge_tx_buf_desc oal[TX_DESC_PER_OAL];
 };
 
 struct map_list {
@@ -1355,19 +1355,19 @@ struct map_list {
 	DEFINE_DMA_UNMAP_LEN(maplen);
 };
 
-struct tx_ring_desc {
+struct qlge_tx_ring_desc {
 	struct sk_buff *skb;
 	struct qlge_ob_mac_iocb_req *queue_entry;
 	u32 index;
 	struct qlge_oal oal;
 	struct map_list map[MAX_SKB_FRAGS + 2];
 	int map_cnt;
-	struct tx_ring_desc *next;
+	struct qlge_tx_ring_desc *next;
 };
 
 #define QL_TXQ_IDX(qdev, skb) (smp_processor_id() % (qdev->tx_ring_count))
 
-struct tx_ring {
+struct qlge_tx_ring {
 	/*
 	 * queue info.
 	 */
@@ -1384,7 +1384,7 @@ struct tx_ring {
 	u16 cq_id;	/* completion (rx) queue for tx completions */
 	u8 wq_id;	/* queue id for this entry */
 	u8 reserved1[3];
-	struct tx_ring_desc *q;	/* descriptor list for the queue */
+	struct qlge_tx_ring_desc *q;	/* descriptor list for the queue */
 	spinlock_t lock;
 	atomic_t tx_count;	/* counts down for every outstanding IO */
 	struct delayed_work tx_work;
@@ -1437,9 +1437,9 @@ struct qlge_bq {
 #define QLGE_BQ_CONTAINER(bq) \
 ({ \
 	typeof(bq) _bq = bq; \
-	(struct rx_ring *)((char *)_bq - (_bq->type == QLGE_SB ? \
-			   offsetof(struct rx_ring, sbq) : \
-			   offsetof(struct rx_ring, lbq))); \
+	(struct qlge_rx_ring *)((char *)_bq - (_bq->type == QLGE_SB ? \
+				offsetof(struct qlge_rx_ring, sbq) : \
+				offsetof(struct qlge_rx_ring, lbq))); \
 })
 
 /* Experience shows that the device ignores the low 4 bits of the tail index.
@@ -1456,7 +1456,7 @@ struct qlge_bq {
 			     (_bq)->next_to_clean); \
 })
 
-struct rx_ring {
+struct qlge_rx_ring {
 	struct cqicb cqicb;	/* The chip's completion queue init control block. */
 
 	/* Completion queue elements. */
@@ -2135,8 +2135,8 @@ struct qlge_adapter {
 	int ring_mem_size;
 	void *ring_mem;
 
-	struct rx_ring rx_ring[MAX_RX_RINGS];
-	struct tx_ring tx_ring[MAX_TX_RINGS];
+	struct qlge_rx_ring rx_ring[MAX_RX_RINGS];
+	struct qlge_tx_ring tx_ring[MAX_TX_RINGS];
 	unsigned int lbq_buf_order;
 	u32 lbq_buf_size;
@@ -2287,6 +2287,6 @@ void qlge_get_dump(struct qlge_adapter *qdev, void *buff);
 netdev_tx_t qlge_lb_send(struct sk_buff *skb, struct net_device *ndev);
 void qlge_check_lb_frame(struct qlge_adapter *qdev, struct sk_buff *skb);
 int qlge_own_firmware(struct qlge_adapter *qdev);
-int qlge_clean_lb_rx_ring(struct rx_ring *rx_ring, int budget);
+int qlge_clean_lb_rx_ring(struct qlge_rx_ring *rx_ring, int budget);
 
 #endif /* _QLGE_H_ */

diff --git a/drivers/staging/qlge/qlge_ethtool.c b/drivers/staging/qlge/qlge_ethtool.c
index b70570b7b467..22c27b97a908 100644
--- a/drivers/staging/qlge/qlge_ethtool.c
+++ b/drivers/staging/qlge/qlge_ethtool.c
@@ -186,7 +186,7 @@ static const char qlge_gstrings_test[][ETH_GSTRING_LEN] = {
 static int qlge_update_ring_coalescing(struct qlge_adapter *qdev)
 {
 	int i, status = 0;
-	struct rx_ring *rx_ring;
+	struct qlge_rx_ring *rx_ring;
 	struct cqicb *cqicb;
 
 	if (!netif_running(qdev->ndev))
@@ -537,7 +537,7 @@ static int qlge_run_loopback_test(struct qlge_adapter *qdev)
 	int i;
 	netdev_tx_t rc;
 	struct sk_buff *skb;
-	unsigned int size = SMALL_BUF_MAP_SIZE;
+	unsigned int size = QLGE_SMALL_BUF_MAP_SIZE;
 
 	for (i = 0; i < 64; i++) {
 		skb = netdev_alloc_skb(qdev->ndev, size);

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index c91969b01bd5..77c71ae698ab 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -964,7 +964,7 @@ static struct qlge_bq_desc *qlge_get_curr_buf(struct qlge_bq *bq)
 }
 
 static struct qlge_bq_desc *qlge_get_curr_lchunk(struct qlge_adapter *qdev,
-						 struct rx_ring *rx_ring)
+						 struct qlge_rx_ring *rx_ring)
 {
 	struct qlge_bq_desc *lbq_desc = qlge_get_curr_buf(&rx_ring->lbq);
 
@@ -982,7 +982,7 @@ static struct qlge_bq_desc *qlge_get_curr_lchunk(struct qlge_adapter *qdev,
 }
 
 /* Update an rx ring index. */
-static void qlge_update_cq(struct rx_ring *rx_ring)
+static void qlge_update_cq(struct qlge_rx_ring *rx_ring)
 {
 	rx_ring->cnsmr_idx++;
 	rx_ring->curr_entry++;
@@ -992,7 +992,7 @@ static void qlge_update_cq(struct rx_ring *rx_ring)
 	}
 }
 
-static void qlge_write_cq_idx(struct rx_ring *rx_ring)
+static void qlge_write_cq_idx(struct qlge_rx_ring *rx_ring)
 {
 	qlge_write_db_reg(rx_ring->cnsmr_idx, rx_ring->cnsmr_idx_db_reg);
 }
@@ -1003,7 +1003,7 @@ static const char * const bq_type_name[] = {
 };
 
 /* return 0 or negative error */
-static int qlge_refill_sb(struct rx_ring *rx_ring,
+static int qlge_refill_sb(struct qlge_rx_ring *rx_ring,
 			  struct qlge_bq_desc *sbq_desc, gfp_t gfp)
 {
 	struct qlge_adapter *qdev = rx_ring->qdev;
@@ -1016,13 +1016,13 @@ static int qlge_refill_sb(struct rx_ring *rx_ring,
 		   "ring %u sbq: getting new skb for index %d.\n",
 		   rx_ring->cq_id, sbq_desc->index);
 
-	skb = __netdev_alloc_skb(qdev->ndev, SMALL_BUFFER_SIZE, gfp);
+	skb = __netdev_alloc_skb(qdev->ndev, QLGE_SMALL_BUFFER_SIZE, gfp);
 	if (!skb)
 		return -ENOMEM;
 	skb_reserve(skb, QLGE_SB_PAD);
 
 	sbq_desc->dma_addr = dma_map_single(&qdev->pdev->dev, skb->data,
-					    SMALL_BUF_MAP_SIZE,
+					    QLGE_SMALL_BUF_MAP_SIZE,
 					    DMA_FROM_DEVICE);
 	if (dma_mapping_error(&qdev->pdev->dev, sbq_desc->dma_addr)) {
 		netif_err(qdev, ifup, qdev->ndev, "PCI mapping failed.\n");
@@ -1036,7 +1036,7 @@ static int qlge_refill_sb(struct rx_ring *rx_ring,
 }
 
 /* return 0 or negative error */
-static int qlge_refill_lb(struct rx_ring *rx_ring,
+static int qlge_refill_lb(struct qlge_rx_ring *rx_ring,
 			  struct qlge_bq_desc *lbq_desc, gfp_t gfp)
 {
 	struct qlge_adapter *qdev = rx_ring->qdev;
@@ -1086,7 +1086,7 @@ static int qlge_refill_lb(struct rx_ring *rx_ring,
 /* return 0 or negative error */
 static int qlge_refill_bq(struct qlge_bq *bq, gfp_t gfp)
 {
-	struct rx_ring *rx_ring = QLGE_BQ_CONTAINER(bq);
+	struct qlge_rx_ring *rx_ring = QLGE_BQ_CONTAINER(bq);
 	struct qlge_adapter *qdev = rx_ring->qdev;
 	struct qlge_bq_desc *bq_desc;
 	int refill_count;
@@ -1141,7 +1141,7 @@ static int qlge_refill_bq(struct qlge_bq *bq, gfp_t gfp)
 	return retval;
 }
 
-static void qlge_update_buffer_queues(struct rx_ring *rx_ring, gfp_t gfp,
+static void qlge_update_buffer_queues(struct qlge_rx_ring *rx_ring, gfp_t gfp,
 				      unsigned long delay)
 {
 	bool sbq_fail, lbq_fail;
@@ -1168,7 +1168,7 @@ static void qlge_update_buffer_queues(struct rx_ring *rx_ring, gfp_t gfp,
 
 static void qlge_slow_refill(struct work_struct *work)
 {
-	struct rx_ring *rx_ring = container_of(work, struct rx_ring,
+	struct qlge_rx_ring *rx_ring = container_of(work, struct qlge_rx_ring,
 					       refill_work.work);
 	struct napi_struct *napi = &rx_ring->napi;
@@ -1189,7 +1189,7 @@ static void qlge_slow_refill(struct work_struct *work)
 * fails at some stage, or from the interrupt when a tx completes.
 */
 static void qlge_unmap_send(struct qlge_adapter *qdev,
-			    struct tx_ring_desc *tx_ring_desc, int mapped)
+			    struct qlge_tx_ring_desc *tx_ring_desc, int mapped)
 {
 	int i;
@@ -1232,12 +1232,12 @@ static void qlge_unmap_send(struct qlge_adapter *qdev,
 */
 static int qlge_map_send(struct qlge_adapter *qdev,
 			 struct qlge_ob_mac_iocb_req *mac_iocb_ptr,
-			 struct sk_buff *skb, struct tx_ring_desc *tx_ring_desc)
+			 struct sk_buff *skb, struct qlge_tx_ring_desc *tx_ring_desc)
 {
 	int len = skb_headlen(skb);
 	dma_addr_t map;
 	int frag_idx, err, map_idx = 0;
-	struct tx_buf_desc *tbd = mac_iocb_ptr->tbd;
+	struct qlge_tx_buf_desc *tbd = mac_iocb_ptr->tbd;
 	int frag_cnt = skb_shinfo(skb)->nr_frags;
 
 	if (frag_cnt) {
@@ -1312,13 +1312,13 @@ static int qlge_map_send(struct qlge_adapter *qdev,
 			 * of our sglist (OAL).
 			 */
 			tbd->len =
-				cpu_to_le32((sizeof(struct tx_buf_desc) *
+				cpu_to_le32((sizeof(struct qlge_tx_buf_desc) *
 					     (frag_cnt - frag_idx)) | TX_DESC_C);
 			dma_unmap_addr_set(&tx_ring_desc->map[map_idx], mapaddr,
 					   map);
 			dma_unmap_len_set(&tx_ring_desc->map[map_idx], maplen,
 					  sizeof(struct qlge_oal));
-			tbd = (struct tx_buf_desc *)&tx_ring_desc->oal;
+			tbd = (struct qlge_tx_buf_desc *)&tx_ring_desc->oal;
 			map_idx++;
 		}
@@ -1358,7 +1358,7 @@ static int qlge_map_send(struct qlge_adapter *qdev,
 
 /* Categorizing receive firmware frame errors */
 static void qlge_categorize_rx_err(struct qlge_adapter *qdev, u8 rx_err,
-				   struct rx_ring *rx_ring)
+				   struct qlge_rx_ring *rx_ring)
 {
 	struct nic_stats *stats = &qdev->nic_stats;
@@ -1414,7 +1414,7 @@ static void qlge_update_mac_hdr_len(struct qlge_adapter *qdev,
 
 /* Process an inbound completion from an rx ring. */
 static void qlge_process_mac_rx_gro_page(struct qlge_adapter *qdev,
-					 struct rx_ring *rx_ring,
+					 struct qlge_rx_ring *rx_ring,
 					 struct qlge_ib_mac_iocb_rsp *ib_mac_rsp,
 					 u32 length, u16 vlan_id)
 {
@@ -1460,7 +1460,7 @@ static void qlge_process_mac_rx_gro_page(struct qlge_adapter *qdev,
 
 /* Process an inbound completion from an rx ring. */
 static void qlge_process_mac_rx_page(struct qlge_adapter *qdev,
-				     struct rx_ring *rx_ring,
+				     struct qlge_rx_ring *rx_ring,
 				     struct qlge_ib_mac_iocb_rsp *ib_mac_rsp,
 				     u32 length, u16 vlan_id)
 {
@@ -1471,7 +1471,7 @@ static void qlge_process_mac_rx_page(struct qlge_adapter *qdev,
 	struct napi_struct *napi = &rx_ring->napi;
 	size_t hlen = ETH_HLEN;
 
-	skb = napi_alloc_skb(&rx_ring->napi, SMALL_BUFFER_SIZE);
+	skb = napi_alloc_skb(&rx_ring->napi, QLGE_SMALL_BUFFER_SIZE);
 	if (!skb) {
 		rx_ring->rx_dropped++;
 		put_page(lbq_desc->p.pg_chunk.page);
@@ -1551,7 +1551,7 @@ static void qlge_process_mac_rx_page(struct qlge_adapter *qdev,
 
 /* Process an inbound completion from an rx ring. */
 static void qlge_process_mac_rx_skb(struct qlge_adapter *qdev,
-				    struct rx_ring *rx_ring,
+				    struct qlge_rx_ring *rx_ring,
 				    struct qlge_ib_mac_iocb_rsp *ib_mac_rsp,
 				    u32 length, u16 vlan_id)
 {
@@ -1569,7 +1569,7 @@ static void qlge_process_mac_rx_skb(struct qlge_adapter *qdev,
 	skb_reserve(new_skb, NET_IP_ALIGN);
 
 	dma_sync_single_for_cpu(&qdev->pdev->dev, sbq_desc->dma_addr,
-				SMALL_BUF_MAP_SIZE, DMA_FROM_DEVICE);
+				QLGE_SMALL_BUF_MAP_SIZE, DMA_FROM_DEVICE);
 
 	skb_put_data(new_skb, skb->data, length);
@@ -1671,7 +1671,7 @@ static void qlge_realign_skb(struct sk_buff *skb, int len)
 * future, but for not it works well.
 */
 static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
-					 struct rx_ring *rx_ring,
+					 struct qlge_rx_ring *rx_ring,
 					 struct qlge_ib_mac_iocb_rsp *ib_mac_rsp)
 {
 	u32 length = le32_to_cpu(ib_mac_rsp->data_len);
@@ -1692,7 +1692,7 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 		 */
 		sbq_desc = qlge_get_curr_buf(&rx_ring->sbq);
 		dma_unmap_single(&qdev->pdev->dev, sbq_desc->dma_addr,
-				 SMALL_BUF_MAP_SIZE, DMA_FROM_DEVICE);
+				 QLGE_SMALL_BUF_MAP_SIZE, DMA_FROM_DEVICE);
 		skb = sbq_desc->p.skb;
 		qlge_realign_skb(skb, hdr_len);
 		skb_put(skb, hdr_len);
@@ -1723,7 +1723,7 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 			sbq_desc = qlge_get_curr_buf(&rx_ring->sbq);
 			dma_sync_single_for_cpu(&qdev->pdev->dev,
 						sbq_desc->dma_addr,
-						SMALL_BUF_MAP_SIZE,
+						QLGE_SMALL_BUF_MAP_SIZE,
 						DMA_FROM_DEVICE);
 			skb_put_data(skb, sbq_desc->p.skb->data, length);
 		} else {
@@ -1735,7 +1735,7 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 			qlge_realign_skb(skb, length);
 			skb_put(skb, length);
 			dma_unmap_single(&qdev->pdev->dev, sbq_desc->dma_addr,
-					 SMALL_BUF_MAP_SIZE,
+					 QLGE_SMALL_BUF_MAP_SIZE,
 					 DMA_FROM_DEVICE);
 			sbq_desc->p.skb = NULL;
 		}
@@ -1765,7 +1765,7 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 		 * jumbo mtu on a non-TCP/UDP frame.
 		 */
 		lbq_desc = qlge_get_curr_lchunk(qdev, rx_ring);
-		skb = napi_alloc_skb(&rx_ring->napi, SMALL_BUFFER_SIZE);
+		skb = napi_alloc_skb(&rx_ring->napi, QLGE_SMALL_BUFFER_SIZE);
 		if (!skb) {
 			netif_printk(qdev, probe, KERN_DEBUG, qdev->ndev,
 				     "No skb available, drop the packet.\n");
@@ -1805,7 +1805,7 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 		sbq_desc = qlge_get_curr_buf(&rx_ring->sbq);
 		dma_unmap_single(&qdev->pdev->dev, sbq_desc->dma_addr,
-				 SMALL_BUF_MAP_SIZE, DMA_FROM_DEVICE);
+				 QLGE_SMALL_BUF_MAP_SIZE, DMA_FROM_DEVICE);
 		if (!(ib_mac_rsp->flags4 & IB_MAC_IOCB_RSP_HS)) {
 			/*
 			 * This is an non TCP/UDP IP frame, so
@@ -1848,7 +1848,7 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 
 /* Process an inbound completion from an rx ring. */
 static void qlge_process_mac_split_rx_intr(struct qlge_adapter *qdev,
-					   struct rx_ring *rx_ring,
+					   struct qlge_rx_ring *rx_ring,
 					   struct qlge_ib_mac_iocb_rsp *ib_mac_rsp,
 					   u16 vlan_id)
 {
@@ -1942,7 +1942,7 @@ static void qlge_process_mac_split_rx_intr(struct qlge_adapter *qdev,
 
 /* Process an inbound completion from an rx ring.
*/ static unsigned long qlge_process_mac_rx_intr(struct qlge_adapter *qdev, - struct rx_ring *rx_ring, + struct qlge_rx_ring *rx_ring, struct qlge_ib_mac_iocb_rsp *ib_mac_rsp) { u32 length = le32_to_cpu(ib_mac_rsp->data_len); @@ -1993,8 +1993,8 @@ static unsigned long qlge_process_mac_rx_intr(struct qlge_adapter *qdev, static void qlge_process_mac_tx_intr(struct qlge_adapter *qdev, struct qlge_ob_mac_iocb_rsp *mac_rsp) { - struct tx_ring *tx_ring; - struct tx_ring_desc *tx_ring_desc; + struct qlge_tx_ring *tx_ring; + struct qlge_tx_ring_desc *tx_ring_desc; tx_ring = &qdev->tx_ring[mac_rsp->txq_idx]; tx_ring_desc = &tx_ring->q[mac_rsp->tid]; @@ -2087,14 +2087,14 @@ static void qlge_process_chip_ae_intr(struct qlge_adapter *qdev, } } -static int qlge_clean_outbound_rx_ring(struct rx_ring *rx_ring) +static int qlge_clean_outbound_rx_ring(struct qlge_rx_ring *rx_ring) { struct qlge_adapter *qdev = rx_ring->qdev; u32 prod = qlge_read_sh_reg(rx_ring->prod_idx_sh_reg); struct qlge_ob_mac_iocb_rsp *net_rsp = NULL; int count = 0; - struct tx_ring *tx_ring; + struct qlge_tx_ring *tx_ring; /* While there are entries in the completion queue. 
*/ while (prod != rx_ring->cnsmr_idx) { netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev, @@ -2133,7 +2133,7 @@ static int qlge_clean_outbound_rx_ring(struct rx_ring *rx_ring) return count; } -static int qlge_clean_inbound_rx_ring(struct rx_ring *rx_ring, int budget) +static int qlge_clean_inbound_rx_ring(struct qlge_rx_ring *rx_ring, int budget) { struct qlge_adapter *qdev = rx_ring->qdev; u32 prod = qlge_read_sh_reg(rx_ring->prod_idx_sh_reg); @@ -2178,9 +2178,9 @@ static int qlge_clean_inbound_rx_ring(struct rx_ring *rx_ring, int budget) static int qlge_napi_poll_msix(struct napi_struct *napi, int budget) { - struct rx_ring *rx_ring = container_of(napi, struct rx_ring, napi); + struct qlge_rx_ring *rx_ring = container_of(napi, struct qlge_rx_ring, napi); struct qlge_adapter *qdev = rx_ring->qdev; - struct rx_ring *trx_ring; + struct qlge_rx_ring *trx_ring; int i, work_done = 0; struct intr_context *ctx = &qdev->intr_context[rx_ring->cq_id]; @@ -2368,7 +2368,7 @@ static void qlge_restore_vlan(struct qlge_adapter *qdev) /* MSI-X Multiple Vector Interrupt Handler for inbound completions. 
*/ static irqreturn_t qlge_msix_rx_isr(int irq, void *dev_id) { - struct rx_ring *rx_ring = dev_id; + struct qlge_rx_ring *rx_ring = dev_id; napi_schedule(&rx_ring->napi); return IRQ_HANDLED; @@ -2381,7 +2381,7 @@ static irqreturn_t qlge_msix_rx_isr(int irq, void *dev_id) */ static irqreturn_t qlge_isr(int irq, void *dev_id) { - struct rx_ring *rx_ring = dev_id; + struct qlge_rx_ring *rx_ring = dev_id; struct qlge_adapter *qdev = rx_ring->qdev; struct intr_context *intr_context = &qdev->intr_context[0]; u32 var; @@ -2529,9 +2529,9 @@ static netdev_tx_t qlge_send(struct sk_buff *skb, struct net_device *ndev) { struct qlge_adapter *qdev = netdev_to_qdev(ndev); struct qlge_ob_mac_iocb_req *mac_iocb_ptr; - struct tx_ring_desc *tx_ring_desc; + struct qlge_tx_ring_desc *tx_ring_desc; int tso; - struct tx_ring *tx_ring; + struct qlge_tx_ring *tx_ring; u32 tx_ring_idx = (u32)skb->queue_mapping; tx_ring = &qdev->tx_ring[tx_ring_idx]; @@ -2654,9 +2654,9 @@ static int qlge_alloc_shadow_space(struct qlge_adapter *qdev) return -ENOMEM; } -static void qlge_init_tx_ring(struct qlge_adapter *qdev, struct tx_ring *tx_ring) +static void qlge_init_tx_ring(struct qlge_adapter *qdev, struct qlge_tx_ring *tx_ring) { - struct tx_ring_desc *tx_ring_desc; + struct qlge_tx_ring_desc *tx_ring_desc; int i; struct qlge_ob_mac_iocb_req *mac_iocb_ptr; @@ -2673,7 +2673,7 @@ static void qlge_init_tx_ring(struct qlge_adapter *qdev, struct tx_ring *tx_ring } static void qlge_free_tx_resources(struct qlge_adapter *qdev, - struct tx_ring *tx_ring) + struct qlge_tx_ring *tx_ring) { if (tx_ring->wq_base) { dma_free_coherent(&qdev->pdev->dev, tx_ring->wq_size, @@ -2685,7 +2685,7 @@ static void qlge_free_tx_resources(struct qlge_adapter *qdev, } static int qlge_alloc_tx_resources(struct qlge_adapter *qdev, - struct tx_ring *tx_ring) + struct qlge_tx_ring *tx_ring) { tx_ring->wq_base = dma_alloc_coherent(&qdev->pdev->dev, tx_ring->wq_size, @@ -2696,7 +2696,7 @@ static int qlge_alloc_tx_resources(struct 
qlge_adapter *qdev, goto pci_alloc_err; tx_ring->q = - kmalloc_array(tx_ring->wq_len, sizeof(struct tx_ring_desc), + kmalloc_array(tx_ring->wq_len, sizeof(struct qlge_tx_ring_desc), GFP_KERNEL); if (!tx_ring->q) goto err; @@ -2711,7 +2711,7 @@ static int qlge_alloc_tx_resources(struct qlge_adapter *qdev, return -ENOMEM; } -static void qlge_free_lbq_buffers(struct qlge_adapter *qdev, struct rx_ring *rx_ring) +static void qlge_free_lbq_buffers(struct qlge_adapter *qdev, struct qlge_rx_ring *rx_ring) { struct qlge_bq *lbq = &rx_ring->lbq; unsigned int last_offset; @@ -2738,7 +2738,7 @@ static void qlge_free_lbq_buffers(struct qlge_adapter *qdev, struct rx_ring *rx_ } } -static void qlge_free_sbq_buffers(struct qlge_adapter *qdev, struct rx_ring *rx_ring) +static void qlge_free_sbq_buffers(struct qlge_adapter *qdev, struct qlge_rx_ring *rx_ring) { int i; @@ -2752,7 +2752,7 @@ static void qlge_free_sbq_buffers(struct qlge_adapter *qdev, struct rx_ring *rx_ } if (sbq_desc->p.skb) { dma_unmap_single(&qdev->pdev->dev, sbq_desc->dma_addr, - SMALL_BUF_MAP_SIZE, + QLGE_SMALL_BUF_MAP_SIZE, DMA_FROM_DEVICE); dev_kfree_skb(sbq_desc->p.skb); sbq_desc->p.skb = NULL; @@ -2768,7 +2768,7 @@ static void qlge_free_rx_buffers(struct qlge_adapter *qdev) int i; for (i = 0; i < qdev->rx_ring_count; i++) { - struct rx_ring *rx_ring = &qdev->rx_ring[i]; + struct qlge_rx_ring *rx_ring = &qdev->rx_ring[i]; if (rx_ring->lbq.queue) qlge_free_lbq_buffers(qdev, rx_ring); @@ -2788,7 +2788,7 @@ static void qlge_alloc_rx_buffers(struct qlge_adapter *qdev) static int qlge_init_bq(struct qlge_bq *bq) { - struct rx_ring *rx_ring = QLGE_BQ_CONTAINER(bq); + struct qlge_rx_ring *rx_ring = QLGE_BQ_CONTAINER(bq); struct qlge_adapter *qdev = rx_ring->qdev; struct qlge_bq_desc *bq_desc; __le64 *buf_ptr; @@ -2816,7 +2816,7 @@ static int qlge_init_bq(struct qlge_bq *bq) } static void qlge_free_rx_resources(struct qlge_adapter *qdev, - struct rx_ring *rx_ring) + struct qlge_rx_ring *rx_ring) { /* Free the small 
buffer queue. */ if (rx_ring->sbq.base) { @@ -2853,7 +2853,7 @@ static void qlge_free_rx_resources(struct qlge_adapter *qdev, * on the values in the parameter structure. */ static int qlge_alloc_rx_resources(struct qlge_adapter *qdev, - struct rx_ring *rx_ring) + struct qlge_rx_ring *rx_ring) { /* * Allocate the completion queue for this rx_ring. @@ -2878,8 +2878,8 @@ static int qlge_alloc_rx_resources(struct qlge_adapter *qdev, static void qlge_tx_ring_clean(struct qlge_adapter *qdev) { - struct tx_ring *tx_ring; - struct tx_ring_desc *tx_ring_desc; + struct qlge_tx_ring *tx_ring; + struct qlge_tx_ring_desc *tx_ring_desc; int i, j; /* @@ -2949,7 +2949,7 @@ static int qlge_alloc_mem_resources(struct qlge_adapter *qdev) * The control block is defined as * "Completion Queue Initialization Control Block", or cqicb. */ -static int qlge_start_rx_ring(struct qlge_adapter *qdev, struct rx_ring *rx_ring) +static int qlge_start_rx_ring(struct qlge_adapter *qdev, struct qlge_rx_ring *rx_ring) { struct cqicb *cqicb = &rx_ring->cqicb; void *shadow_reg = qdev->rx_ring_shadow_reg_area + @@ -3036,7 +3036,7 @@ static int qlge_start_rx_ring(struct qlge_adapter *qdev, struct rx_ring *rx_ring } while (page_entries < MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN)); cqicb->sbq_addr = cpu_to_le64(rx_ring->sbq.base_indirect_dma); - cqicb->sbq_buf_size = cpu_to_le16(SMALL_BUFFER_SIZE); + cqicb->sbq_buf_size = cpu_to_le16(QLGE_SMALL_BUFFER_SIZE); cqicb->sbq_len = cpu_to_le16(QLGE_FIT16(QLGE_BQ_LEN)); rx_ring->sbq.next_to_use = 0; rx_ring->sbq.next_to_clean = 0; @@ -3062,7 +3062,7 @@ static int qlge_start_rx_ring(struct qlge_adapter *qdev, struct rx_ring *rx_ring return err; } -static int qlge_start_tx_ring(struct qlge_adapter *qdev, struct tx_ring *tx_ring) +static int qlge_start_tx_ring(struct qlge_adapter *qdev, struct qlge_tx_ring *tx_ring) { struct wqicb *wqicb = (struct wqicb *)tx_ring; void __iomem *doorbell_area = @@ -3917,8 +3917,8 @@ static void qlge_set_lb_size(struct qlge_adapter *qdev) 
static int qlge_configure_rings(struct qlge_adapter *qdev) { int i; - struct rx_ring *rx_ring; - struct tx_ring *tx_ring; + struct qlge_rx_ring *rx_ring; + struct qlge_tx_ring *tx_ring; int cpu_cnt = min_t(int, MAX_CPUS, num_online_cpus()); /* In a perfect world we have one RSS ring for each CPU @@ -4083,8 +4083,8 @@ static struct net_device_stats *qlge_get_stats(struct net_device *ndev) { struct qlge_adapter *qdev = netdev_to_qdev(ndev); - struct rx_ring *rx_ring = &qdev->rx_ring[0]; - struct tx_ring *tx_ring = &qdev->tx_ring[0]; + struct qlge_rx_ring *rx_ring = &qdev->rx_ring[0]; + struct qlge_tx_ring *tx_ring = &qdev->tx_ring[0]; unsigned long pkts, mcast, dropped, errors, bytes; int i; @@ -4648,7 +4648,7 @@ netdev_tx_t qlge_lb_send(struct sk_buff *skb, struct net_device *ndev) return qlge_send(skb, ndev); } -int qlge_clean_lb_rx_ring(struct rx_ring *rx_ring, int budget) +int qlge_clean_lb_rx_ring(struct qlge_rx_ring *rx_ring, int budget) { return qlge_clean_inbound_rx_ring(rx_ring, budget); }

From patchwork Mon Jun 21 13:48:48 2021
X-Patchwork-Submitter: Coiby Xu
X-Patchwork-Id: 464894
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra, GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER), Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 05/19] staging: qlge: rename rx to completion queue and seperate rx_ring from completion queue
Date: Mon, 21 Jun 2021 21:48:48 +0800
Message-Id: <20210621134902.83587-6-coiby.xu@gmail.com>
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>

This patch addresses the following TODO items:

> * rename "rx" queues to "completion" queues. Calling tx completion queues "rx
>   queues" is confusing.
> * struct rx_ring is used for rx and tx completions, with some members relevant
>   to one case only

The first part of the completion queue array (index range: [0, qdev->rss_ring_count)) is for inbound completions and the remaining part (index range: [qdev->rss_ring_count, qdev->cq_count)) is for outbound completions. Note: the structure field "reserved" is unused; remove it to reduce holes.
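The index partition described in the commit message above can be sketched as follows. This is a hypothetical illustration, not driver code: the `fake_adapter` struct and the `cq_is_inbound`/`cq_is_outbound` helpers are made-up names standing in for the checks the driver performs against `rss_ring_count` and `cq_count`.

```c
#include <stdbool.h>

/* Hypothetical sketch (not driver code): the patch lays out the
 * completion queue array so that indices [0, rss_ring_count) serve
 * inbound (RSS) completions and [rss_ring_count, cq_count) serve
 * outbound (TX) completions. */
struct fake_adapter {
	int rss_ring_count;	/* one inbound completion queue per IRQ vector */
	int cq_count;		/* total completion queues */
};

static bool cq_is_inbound(const struct fake_adapter *qdev, int cq_id)
{
	return cq_id >= 0 && cq_id < qdev->rss_ring_count;
}

static bool cq_is_outbound(const struct fake_adapter *qdev, int cq_id)
{
	return cq_id >= qdev->rss_ring_count && cq_id < qdev->cq_count;
}
```

This mirrors why, e.g., `qlge_update_ring_coalescing()` in the diff below iterates `[rss_ring_count, cq_count)` for the TX handler queues and `[0, rss_ring_count)` for the RSS queues.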
Signed-off-by: Coiby Xu --- drivers/staging/qlge/TODO | 4 - drivers/staging/qlge/qlge.h | 30 +-- drivers/staging/qlge/qlge_dbg.c | 6 +- drivers/staging/qlge/qlge_ethtool.c | 24 +-- drivers/staging/qlge/qlge_main.c | 291 +++++++++++++++------------- 5 files changed, 188 insertions(+), 167 deletions(-) diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO index 49cb09fc2be4..b7a60425fcd2 100644 --- a/drivers/staging/qlge/TODO +++ b/drivers/staging/qlge/TODO @@ -4,10 +4,6 @@ ql_build_rx_skb(). That function is now used exclusively to handle packets that underwent header splitting but it still contains code to handle non split cases. -* rename "rx" queues to "completion" queues. Calling tx completion queues "rx - queues" is confusing. -* struct rx_ring is used for rx and tx completions, with some members relevant - to one case only * the flow control implementation in firmware is buggy (sends a flood of pause frames, resets the link, device and driver buffer queues become desynchronized), disable it by default diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h index 09d5878b95f7..926af25b14fa 100644 --- a/drivers/staging/qlge/qlge.h +++ b/drivers/staging/qlge/qlge.h @@ -1456,15 +1456,15 @@ struct qlge_bq { (_bq)->next_to_clean); \ }) -struct qlge_rx_ring { +struct qlge_cq { struct cqicb cqicb; /* The chip's completion queue init control block. */ /* Completion queue elements. */ + u16 cq_id; void *cq_base; dma_addr_t cq_base_dma; u32 cq_size; u32 cq_len; - u16 cq_id; __le32 *prod_idx_sh_reg; /* Shadowed producer register. */ dma_addr_t prod_idx_sh_reg_dma; void __iomem *cnsmr_idx_db_reg; /* PCI doorbell mem area + 0 */ @@ -1472,6 +1472,13 @@ struct qlge_rx_ring { struct qlge_net_rsp_iocb *curr_entry; /* next entry on queue */ void __iomem *valid_db_reg; /* PCI doorbell mem area + 0x04 */ + /* Misc. handler elements. */ + u32 irq; /* Which vector this ring is assigned. */ + u32 cpu; /* Which CPU this should run on. 
*/ + struct qlge_adapter *qdev; +}; + +struct qlge_rx_ring { /* Large buffer queue elements. */ struct qlge_bq lbq; struct qlge_page_chunk master_chunk; @@ -1480,19 +1487,15 @@ struct qlge_rx_ring { /* Small buffer queue elements. */ struct qlge_bq sbq; - /* Misc. handler elements. */ - u32 irq; /* Which vector this ring is assigned. */ - u32 cpu; /* Which CPU this should run on. */ - struct delayed_work refill_work; - char name[IFNAMSIZ + 5]; struct napi_struct napi; - u8 reserved; - struct qlge_adapter *qdev; + struct delayed_work refill_work; u64 rx_packets; u64 rx_multicast; u64 rx_bytes; u64 rx_dropped; u64 rx_errors; + struct qlge_adapter *qdev; + u16 cq_id; }; /* @@ -1753,7 +1756,7 @@ enum { #define SHADOW_REG_SHIFT 20 struct qlge_nic_misc { - u32 rx_ring_count; + u32 cq_count; u32 tx_ring_count; u32 intr_count; u32 function; @@ -2127,14 +2130,15 @@ struct qlge_adapter { int tx_ring_count; /* One per online CPU. */ u32 rss_ring_count; /* One per irq vector. */ /* - * rx_ring_count = + * cq_count = * (CPU count * outbound completion rx_ring) + * (irq_vector_cnt * inbound (RSS) completion rx_ring) */ - int rx_ring_count; + int cq_count; int ring_mem_size; void *ring_mem; + struct qlge_cq cq[MAX_RX_RINGS]; struct qlge_rx_ring rx_ring[MAX_RX_RINGS]; struct qlge_tx_ring tx_ring[MAX_TX_RINGS]; unsigned int lbq_buf_order; @@ -2287,6 +2291,6 @@ void qlge_get_dump(struct qlge_adapter *qdev, void *buff); netdev_tx_t qlge_lb_send(struct sk_buff *skb, struct net_device *ndev); void qlge_check_lb_frame(struct qlge_adapter *qdev, struct sk_buff *skb); int qlge_own_firmware(struct qlge_adapter *qdev); -int qlge_clean_lb_rx_ring(struct qlge_rx_ring *rx_ring, int budget); +int qlge_clean_lb_cq(struct qlge_cq *cq, int budget); #endif /* _QLGE_H_ */ diff --git a/drivers/staging/qlge/qlge_dbg.c b/drivers/staging/qlge/qlge_dbg.c index 37e593f0fd82..d093e6c9f19c 100644 --- a/drivers/staging/qlge/qlge_dbg.c +++ b/drivers/staging/qlge/qlge_dbg.c @@ -403,7 +403,7 @@ static void 
qlge_get_intr_states(struct qlge_adapter *qdev, u32 *buf) { int i; - for (i = 0; i < qdev->rx_ring_count; i++, buf++) { + for (i = 0; i < qdev->cq_count; i++, buf++) { qlge_write32(qdev, INTR_EN, qdev->intr_context[i].intr_read_mask); *buf = qlge_read32(qdev, INTR_EN); @@ -1074,7 +1074,7 @@ int qlge_core_dump(struct qlge_adapter *qdev, struct qlge_mpi_coredump *mpi_core sizeof(struct mpi_coredump_segment_header) + sizeof(mpi_coredump->misc_nic_info), "MISC NIC INFO"); - mpi_coredump->misc_nic_info.rx_ring_count = qdev->rx_ring_count; + mpi_coredump->misc_nic_info.cq_count = qdev->cq_count; mpi_coredump->misc_nic_info.tx_ring_count = qdev->tx_ring_count; mpi_coredump->misc_nic_info.intr_count = qdev->intr_count; mpi_coredump->misc_nic_info.function = qdev->func; @@ -1237,7 +1237,7 @@ static void qlge_gen_reg_dump(struct qlge_adapter *qdev, sizeof(struct mpi_coredump_segment_header) + sizeof(mpi_coredump->misc_nic_info), "MISC NIC INFO"); - mpi_coredump->misc_nic_info.rx_ring_count = qdev->rx_ring_count; + mpi_coredump->misc_nic_info.cq_count = qdev->cq_count; mpi_coredump->misc_nic_info.tx_ring_count = qdev->tx_ring_count; mpi_coredump->misc_nic_info.intr_count = qdev->intr_count; mpi_coredump->misc_nic_info.function = qdev->func; diff --git a/drivers/staging/qlge/qlge_ethtool.c b/drivers/staging/qlge/qlge_ethtool.c index 22c27b97a908..7f77f99cc047 100644 --- a/drivers/staging/qlge/qlge_ethtool.c +++ b/drivers/staging/qlge/qlge_ethtool.c @@ -186,7 +186,7 @@ static const char qlge_gstrings_test[][ETH_GSTRING_LEN] = { static int qlge_update_ring_coalescing(struct qlge_adapter *qdev) { int i, status = 0; - struct qlge_rx_ring *rx_ring; + struct qlge_cq *cq; struct cqicb *cqicb; if (!netif_running(qdev->ndev)) @@ -195,18 +195,18 @@ static int qlge_update_ring_coalescing(struct qlge_adapter *qdev) /* Skip the default queue, and update the outbound handler * queues if they changed. 
*/ - cqicb = (struct cqicb *)&qdev->rx_ring[qdev->rss_ring_count]; + cqicb = (struct cqicb *)&qdev->cq[qdev->rss_ring_count]; if (le16_to_cpu(cqicb->irq_delay) != qdev->tx_coalesce_usecs || le16_to_cpu(cqicb->pkt_delay) != qdev->tx_max_coalesced_frames) { - for (i = qdev->rss_ring_count; i < qdev->rx_ring_count; i++) { - rx_ring = &qdev->rx_ring[i]; - cqicb = (struct cqicb *)rx_ring; + for (i = qdev->rss_ring_count; i < qdev->cq_count; i++) { + cq = &qdev->cq[i]; + cqicb = (struct cqicb *)cq; cqicb->irq_delay = cpu_to_le16(qdev->tx_coalesce_usecs); cqicb->pkt_delay = cpu_to_le16(qdev->tx_max_coalesced_frames); cqicb->flags = FLAGS_LI; status = qlge_write_cfg(qdev, cqicb, sizeof(*cqicb), - CFG_LCQ, rx_ring->cq_id); + CFG_LCQ, cq->cq_id); if (status) { netif_err(qdev, ifup, qdev->ndev, "Failed to load CQICB.\n"); @@ -216,18 +216,18 @@ static int qlge_update_ring_coalescing(struct qlge_adapter *qdev) } /* Update the inbound (RSS) handler queues if they changed. */ - cqicb = (struct cqicb *)&qdev->rx_ring[0]; + cqicb = (struct cqicb *)&qdev->cq[0]; if (le16_to_cpu(cqicb->irq_delay) != qdev->rx_coalesce_usecs || le16_to_cpu(cqicb->pkt_delay) != qdev->rx_max_coalesced_frames) { - for (i = 0; i < qdev->rss_ring_count; i++, rx_ring++) { - rx_ring = &qdev->rx_ring[i]; - cqicb = (struct cqicb *)rx_ring; + for (i = 0; i < qdev->rss_ring_count; i++, cq++) { + cq = &qdev->cq[i]; + cqicb = (struct cqicb *)cq; cqicb->irq_delay = cpu_to_le16(qdev->rx_coalesce_usecs); cqicb->pkt_delay = cpu_to_le16(qdev->rx_max_coalesced_frames); cqicb->flags = FLAGS_LI; status = qlge_write_cfg(qdev, cqicb, sizeof(*cqicb), - CFG_LCQ, rx_ring->cq_id); + CFG_LCQ, cq->cq_id); if (status) { netif_err(qdev, ifup, qdev->ndev, "Failed to load CQICB.\n"); @@ -554,7 +554,7 @@ static int qlge_run_loopback_test(struct qlge_adapter *qdev) } /* Give queue time to settle before testing results. 
*/ msleep(2); - qlge_clean_lb_rx_ring(&qdev->rx_ring[0], 128); + qlge_clean_lb_cq(&qdev->cq[0], 128); return atomic_read(&qdev->lb_count) ? -EIO : 0; } diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c index 77c71ae698ab..94853b182608 100644 --- a/drivers/staging/qlge/qlge_main.c +++ b/drivers/staging/qlge/qlge_main.c @@ -982,19 +982,19 @@ static struct qlge_bq_desc *qlge_get_curr_lchunk(struct qlge_adapter *qdev, } /* Update an rx ring index. */ -static void qlge_update_cq(struct qlge_rx_ring *rx_ring) +static void qlge_update_cq(struct qlge_cq *cq) { - rx_ring->cnsmr_idx++; - rx_ring->curr_entry++; - if (unlikely(rx_ring->cnsmr_idx == rx_ring->cq_len)) { - rx_ring->cnsmr_idx = 0; - rx_ring->curr_entry = rx_ring->cq_base; + cq->cnsmr_idx++; + cq->curr_entry++; + if (unlikely(cq->cnsmr_idx == cq->cq_len)) { + cq->cnsmr_idx = 0; + cq->curr_entry = cq->cq_base; } } -static void qlge_write_cq_idx(struct qlge_rx_ring *rx_ring) +static void qlge_write_cq_idx(struct qlge_cq *cq) { - qlge_write_db_reg(rx_ring->cnsmr_idx, rx_ring->cnsmr_idx_db_reg); + qlge_write_db_reg(cq->cnsmr_idx, cq->cnsmr_idx_db_reg); } static const char * const bq_type_name[] = { @@ -2087,21 +2087,21 @@ static void qlge_process_chip_ae_intr(struct qlge_adapter *qdev, } } -static int qlge_clean_outbound_rx_ring(struct qlge_rx_ring *rx_ring) +static int qlge_clean_outbound_cq(struct qlge_cq *cq) { - struct qlge_adapter *qdev = rx_ring->qdev; - u32 prod = qlge_read_sh_reg(rx_ring->prod_idx_sh_reg); + struct qlge_adapter *qdev = cq->qdev; + u32 prod = qlge_read_sh_reg(cq->prod_idx_sh_reg); struct qlge_ob_mac_iocb_rsp *net_rsp = NULL; int count = 0; struct qlge_tx_ring *tx_ring; /* While there are entries in the completion queue. 
*/ - while (prod != rx_ring->cnsmr_idx) { + while (prod != cq->cnsmr_idx) { netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev, "cq_id = %d, prod = %d, cnsmr = %d\n", - rx_ring->cq_id, prod, rx_ring->cnsmr_idx); + cq->cq_id, prod, cq->cnsmr_idx); - net_rsp = (struct qlge_ob_mac_iocb_rsp *)rx_ring->curr_entry; + net_rsp = (struct qlge_ob_mac_iocb_rsp *)cq->curr_entry; rmb(); switch (net_rsp->opcode) { case OPCODE_OB_MAC_TSO_IOCB: @@ -2114,12 +2114,12 @@ static int qlge_clean_outbound_rx_ring(struct qlge_rx_ring *rx_ring) net_rsp->opcode); } count++; - qlge_update_cq(rx_ring); - prod = qlge_read_sh_reg(rx_ring->prod_idx_sh_reg); + qlge_update_cq(cq); + prod = qlge_read_sh_reg(cq->prod_idx_sh_reg); } if (!net_rsp) return 0; - qlge_write_cq_idx(rx_ring); + qlge_write_cq_idx(cq); tx_ring = &qdev->tx_ring[net_rsp->txq_idx]; if (__netif_subqueue_stopped(qdev->ndev, tx_ring->wq_id)) { if ((atomic_read(&tx_ring->tx_count) > (tx_ring->wq_len / 4))) @@ -2133,20 +2133,21 @@ static int qlge_clean_outbound_rx_ring(struct qlge_rx_ring *rx_ring) return count; } -static int qlge_clean_inbound_rx_ring(struct qlge_rx_ring *rx_ring, int budget) +static int qlge_clean_inbound_cq(struct qlge_cq *cq, int budget) { - struct qlge_adapter *qdev = rx_ring->qdev; - u32 prod = qlge_read_sh_reg(rx_ring->prod_idx_sh_reg); + struct qlge_adapter *qdev = cq->qdev; + u32 prod = qlge_read_sh_reg(cq->prod_idx_sh_reg); + struct qlge_rx_ring *rx_ring = &qdev->rx_ring[cq->cq_id]; struct qlge_net_rsp_iocb *net_rsp; int count = 0; /* While there are entries in the completion queue. 
*/ - while (prod != rx_ring->cnsmr_idx) { + while (prod != cq->cnsmr_idx) { netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev, "cq_id = %d, prod = %d, cnsmr = %d\n", - rx_ring->cq_id, prod, rx_ring->cnsmr_idx); + cq->cq_id, prod, cq->cnsmr_idx); - net_rsp = rx_ring->curr_entry; + net_rsp = cq->curr_entry; rmb(); switch (net_rsp->opcode) { case OPCODE_IB_MAC_IOCB: @@ -2166,13 +2167,13 @@ static int qlge_clean_inbound_rx_ring(struct qlge_rx_ring *rx_ring, int budget) break; } count++; - qlge_update_cq(rx_ring); - prod = qlge_read_sh_reg(rx_ring->prod_idx_sh_reg); + qlge_update_cq(cq); + prod = qlge_read_sh_reg(cq->prod_idx_sh_reg); if (count == budget) break; } qlge_update_buffer_queues(rx_ring, GFP_ATOMIC, 0); - qlge_write_cq_idx(rx_ring); + qlge_write_cq_idx(cq); return count; } @@ -2180,45 +2181,46 @@ static int qlge_napi_poll_msix(struct napi_struct *napi, int budget) { struct qlge_rx_ring *rx_ring = container_of(napi, struct qlge_rx_ring, napi); struct qlge_adapter *qdev = rx_ring->qdev; - struct qlge_rx_ring *trx_ring; + struct qlge_cq *tcq, *rcq; int i, work_done = 0; struct intr_context *ctx = &qdev->intr_context[rx_ring->cq_id]; + rcq = &qdev->cq[rx_ring->cq_id]; + netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev, "Enter, NAPI POLL cq_id = %d.\n", rx_ring->cq_id); /* Service the TX rings first. They start * right after the RSS rings. */ - for (i = qdev->rss_ring_count; i < qdev->rx_ring_count; i++) { - trx_ring = &qdev->rx_ring[i]; + for (i = qdev->rss_ring_count; i < qdev->cq_count; i++) { + tcq = &qdev->cq[i]; /* If this TX completion ring belongs to this vector and * it's not empty then service it. 
*/ - if ((ctx->irq_mask & (1 << trx_ring->cq_id)) && - (qlge_read_sh_reg(trx_ring->prod_idx_sh_reg) != - trx_ring->cnsmr_idx)) { + if ((ctx->irq_mask & (1 << tcq->cq_id)) && + (qlge_read_sh_reg(tcq->prod_idx_sh_reg) != + tcq->cnsmr_idx)) { netif_printk(qdev, intr, KERN_DEBUG, qdev->ndev, "%s: Servicing TX completion ring %d.\n", - __func__, trx_ring->cq_id); - qlge_clean_outbound_rx_ring(trx_ring); + __func__, tcq->cq_id); + qlge_clean_outbound_cq(tcq); } } /* * Now service the RSS ring if it's active. */ - if (qlge_read_sh_reg(rx_ring->prod_idx_sh_reg) != - rx_ring->cnsmr_idx) { + if (qlge_read_sh_reg(rcq->prod_idx_sh_reg) != rcq->cnsmr_idx) { netif_printk(qdev, intr, KERN_DEBUG, qdev->ndev, "%s: Servicing RX completion ring %d.\n", - __func__, rx_ring->cq_id); - work_done = qlge_clean_inbound_rx_ring(rx_ring, budget); + __func__, rcq->cq_id); + work_done = qlge_clean_inbound_cq(rcq, budget); } if (work_done < budget) { napi_complete_done(napi, work_done); - qlge_enable_completion_interrupt(qdev, rx_ring->irq); + qlge_enable_completion_interrupt(qdev, rcq->irq); } return work_done; } @@ -2368,8 +2370,10 @@ static void qlge_restore_vlan(struct qlge_adapter *qdev) /* MSI-X Multiple Vector Interrupt Handler for inbound completions. 
*/ static irqreturn_t qlge_msix_rx_isr(int irq, void *dev_id) { - struct qlge_rx_ring *rx_ring = dev_id; + struct qlge_cq *cq = dev_id; + struct qlge_rx_ring *rx_ring; + rx_ring = &cq->qdev->rx_ring[cq->cq_id]; napi_schedule(&rx_ring->napi); return IRQ_HANDLED; } @@ -2381,11 +2385,16 @@ static irqreturn_t qlge_msix_rx_isr(int irq, void *dev_id) */ static irqreturn_t qlge_isr(int irq, void *dev_id) { - struct qlge_rx_ring *rx_ring = dev_id; - struct qlge_adapter *qdev = rx_ring->qdev; - struct intr_context *intr_context = &qdev->intr_context[0]; - u32 var; + struct intr_context *intr_context; + struct qlge_rx_ring *rx_ring; + struct qlge_cq *cq = dev_id; + struct qlge_adapter *qdev; int work_done = 0; + u32 var; + + qdev = cq->qdev; + rx_ring = &qdev->rx_ring[cq->cq_id]; + intr_context = &qdev->intr_context[0]; /* Experience shows that when using INTx interrupts, interrupts must * be masked manually. @@ -2767,7 +2776,7 @@ static void qlge_free_rx_buffers(struct qlge_adapter *qdev) { int i; - for (i = 0; i < qdev->rx_ring_count; i++) { + for (i = 0; i < qdev->rss_ring_count; i++) { struct qlge_rx_ring *rx_ring = &qdev->rx_ring[i]; if (rx_ring->lbq.queue) @@ -2815,9 +2824,12 @@ static int qlge_init_bq(struct qlge_bq *bq) return 0; } -static void qlge_free_rx_resources(struct qlge_adapter *qdev, - struct qlge_rx_ring *rx_ring) +static void qlge_free_cq_resources(struct qlge_adapter *qdev, + struct qlge_cq *cq) { + struct qlge_rx_ring *rx_ring; + + rx_ring = &qdev->rx_ring[cq->cq_id]; /* Free the small buffer queue. */ if (rx_ring->sbq.base) { dma_free_coherent(&qdev->pdev->dev, QLGE_BQ_SIZE, @@ -2836,40 +2848,43 @@ static void qlge_free_rx_resources(struct qlge_adapter *qdev, rx_ring->lbq.base = NULL; } + rx_ring = &qdev->rx_ring[cq->cq_id]; /* Free the large buffer queue control blocks. */ kfree(rx_ring->lbq.queue); rx_ring->lbq.queue = NULL; - /* Free the rx queue. */ - if (rx_ring->cq_base) { + /* Free the completion queue. 
*/ + if (cq->cq_base) { dma_free_coherent(&qdev->pdev->dev, - rx_ring->cq_size, - rx_ring->cq_base, rx_ring->cq_base_dma); - rx_ring->cq_base = NULL; + cq->cq_size, + cq->cq_base, cq->cq_base_dma); + cq->cq_base = NULL; } } /* Allocate queues and buffers for this completions queue based * on the values in the parameter structure. */ -static int qlge_alloc_rx_resources(struct qlge_adapter *qdev, - struct qlge_rx_ring *rx_ring) +static int qlge_alloc_cq_resources(struct qlge_adapter *qdev, + struct qlge_cq *cq) { + struct qlge_rx_ring *rx_ring; /* * Allocate the completion queue for this rx_ring. */ - rx_ring->cq_base = - dma_alloc_coherent(&qdev->pdev->dev, rx_ring->cq_size, - &rx_ring->cq_base_dma, GFP_ATOMIC); + cq->cq_base = + dma_alloc_coherent(&qdev->pdev->dev, cq->cq_size, + &cq->cq_base_dma, GFP_ATOMIC); - if (!rx_ring->cq_base) { - netif_err(qdev, ifup, qdev->ndev, "rx_ring alloc failed.\n"); + if (!cq->cq_base) { + netif_err(qdev, ifup, qdev->ndev, "cq alloc failed.\n"); return -ENOMEM; } - if (rx_ring->cq_id < qdev->rss_ring_count && + rx_ring = &qdev->rx_ring[cq->cq_id]; + if (cq->cq_id < qdev->rss_ring_count && (qlge_init_bq(&rx_ring->sbq) || qlge_init_bq(&rx_ring->lbq))) { - qlge_free_rx_resources(qdev, rx_ring); + qlge_free_cq_resources(qdev, cq); return -ENOMEM; } @@ -2910,8 +2925,8 @@ static void qlge_free_mem_resources(struct qlge_adapter *qdev) for (i = 0; i < qdev->tx_ring_count; i++) qlge_free_tx_resources(qdev, &qdev->tx_ring[i]); - for (i = 0; i < qdev->rx_ring_count; i++) - qlge_free_rx_resources(qdev, &qdev->rx_ring[i]); + for (i = 0; i < qdev->cq_count; i++) + qlge_free_cq_resources(qdev, &qdev->cq[i]); qlge_free_shadow_space(qdev); } @@ -2923,8 +2938,8 @@ static int qlge_alloc_mem_resources(struct qlge_adapter *qdev) if (qlge_alloc_shadow_space(qdev)) return -ENOMEM; - for (i = 0; i < qdev->rx_ring_count; i++) { - if (qlge_alloc_rx_resources(qdev, &qdev->rx_ring[i]) != 0) { + for (i = 0; i < qdev->cq_count; i++) { + if 
(qlge_alloc_cq_resources(qdev, &qdev->cq[i]) != 0) { netif_err(qdev, ifup, qdev->ndev, "RX resource allocation failed.\n"); goto err_mem; @@ -2949,56 +2964,43 @@ static int qlge_alloc_mem_resources(struct qlge_adapter *qdev) * The control block is defined as * "Completion Queue Initialization Control Block", or cqicb. */ -static int qlge_start_rx_ring(struct qlge_adapter *qdev, struct qlge_rx_ring *rx_ring) +static int qlge_start_cq(struct qlge_adapter *qdev, struct qlge_cq *cq) { - struct cqicb *cqicb = &rx_ring->cqicb; + struct cqicb *cqicb = &cq->cqicb; void *shadow_reg = qdev->rx_ring_shadow_reg_area + - (rx_ring->cq_id * RX_RING_SHADOW_SPACE); + (cq->cq_id * RX_RING_SHADOW_SPACE); u64 shadow_reg_dma = qdev->rx_ring_shadow_reg_dma + - (rx_ring->cq_id * RX_RING_SHADOW_SPACE); + (cq->cq_id * RX_RING_SHADOW_SPACE); void __iomem *doorbell_area = - qdev->doorbell_area + (DB_PAGE_SIZE * (128 + rx_ring->cq_id)); + qdev->doorbell_area + (DB_PAGE_SIZE * (128 + cq->cq_id)); + struct qlge_rx_ring *rx_ring; int err = 0; u64 tmp; __le64 *base_indirect_ptr; int page_entries; - /* Set up the shadow registers for this ring. */ - rx_ring->prod_idx_sh_reg = shadow_reg; - rx_ring->prod_idx_sh_reg_dma = shadow_reg_dma; - *rx_ring->prod_idx_sh_reg = 0; - shadow_reg += sizeof(u64); - shadow_reg_dma += sizeof(u64); - rx_ring->lbq.base_indirect = shadow_reg; - rx_ring->lbq.base_indirect_dma = shadow_reg_dma; - shadow_reg += (sizeof(u64) * MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN)); - shadow_reg_dma += (sizeof(u64) * MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN)); - rx_ring->sbq.base_indirect = shadow_reg; - rx_ring->sbq.base_indirect_dma = shadow_reg_dma; + /* Set up the shadow registers for this cq. 
*/ + cq->prod_idx_sh_reg = shadow_reg; + cq->prod_idx_sh_reg_dma = shadow_reg_dma; + *cq->prod_idx_sh_reg = 0; /* PCI doorbell mem area + 0x00 for consumer index register */ - rx_ring->cnsmr_idx_db_reg = (u32 __iomem *)doorbell_area; - rx_ring->cnsmr_idx = 0; - rx_ring->curr_entry = rx_ring->cq_base; + cq->cnsmr_idx_db_reg = (u32 __iomem *)doorbell_area; + cq->cnsmr_idx = 0; + cq->curr_entry = cq->cq_base; /* PCI doorbell mem area + 0x04 for valid register */ - rx_ring->valid_db_reg = doorbell_area + 0x04; - - /* PCI doorbell mem area + 0x18 for large buffer consumer */ - rx_ring->lbq.prod_idx_db_reg = (u32 __iomem *)(doorbell_area + 0x18); - - /* PCI doorbell mem area + 0x1c */ - rx_ring->sbq.prod_idx_db_reg = (u32 __iomem *)(doorbell_area + 0x1c); + cq->valid_db_reg = doorbell_area + 0x04; memset((void *)cqicb, 0, sizeof(struct cqicb)); - cqicb->msix_vect = rx_ring->irq; + cqicb->msix_vect = cq->irq; - cqicb->len = cpu_to_le16(QLGE_FIT16(rx_ring->cq_len) | LEN_V | + cqicb->len = cpu_to_le16(QLGE_FIT16(cq->cq_len) | LEN_V | LEN_CPP_CONT); - cqicb->addr = cpu_to_le64(rx_ring->cq_base_dma); + cqicb->addr = cpu_to_le64(cq->cq_base_dma); - cqicb->prod_idx_addr = cpu_to_le64(rx_ring->prod_idx_sh_reg_dma); + cqicb->prod_idx_addr = cpu_to_le64(cq->prod_idx_sh_reg_dma); /* * Set up the control block load flags. 
@@ -3006,7 +3008,23 @@ static int qlge_start_rx_ring(struct qlge_adapter *qdev, struct qlge_rx_ring *rx cqicb->flags = FLAGS_LC | /* Load queue base address */ FLAGS_LV | /* Load MSI-X vector */ FLAGS_LI; /* Load irq delay values */ - if (rx_ring->cq_id < qdev->rss_ring_count) { + + if (cq->cq_id < qdev->rss_ring_count) { + rx_ring = &qdev->rx_ring[cq->cq_id]; + shadow_reg += sizeof(u64); + shadow_reg_dma += sizeof(u64); + rx_ring->lbq.base_indirect = shadow_reg; + rx_ring->lbq.base_indirect_dma = shadow_reg_dma; + shadow_reg += (sizeof(u64) * MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN)); + shadow_reg_dma += (sizeof(u64) * MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN)); + rx_ring->sbq.base_indirect = shadow_reg; + rx_ring->sbq.base_indirect_dma = shadow_reg_dma; + /* PCI doorbell mem area + 0x18 for large buffer consumer */ + rx_ring->lbq.prod_idx_db_reg = (u32 __iomem *)(doorbell_area + 0x18); + + /* PCI doorbell mem area + 0x1c */ + rx_ring->sbq.prod_idx_db_reg = (u32 __iomem *)(doorbell_area + 0x1c); + cqicb->flags |= FLAGS_LL; /* Load lbq values */ tmp = (u64)rx_ring->lbq.base_dma; base_indirect_ptr = rx_ring->lbq.base_indirect; @@ -3034,14 +3052,12 @@ static int qlge_start_rx_ring(struct qlge_adapter *qdev, struct qlge_rx_ring *rx base_indirect_ptr++; page_entries++; } while (page_entries < MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN)); - cqicb->sbq_addr = - cpu_to_le64(rx_ring->sbq.base_indirect_dma); + cqicb->sbq_addr = cpu_to_le64(rx_ring->sbq.base_indirect_dma); cqicb->sbq_buf_size = cpu_to_le16(QLGE_SMALL_BUFFER_SIZE); cqicb->sbq_len = cpu_to_le16(QLGE_FIT16(QLGE_BQ_LEN)); rx_ring->sbq.next_to_use = 0; rx_ring->sbq.next_to_clean = 0; - } - if (rx_ring->cq_id < qdev->rss_ring_count) { + /* Inbound completion handling rx_rings run in * separate NAPI contexts. 
*/ @@ -3054,7 +3070,7 @@ static int qlge_start_rx_ring(struct qlge_adapter *qdev, struct qlge_rx_ring *rx cqicb->pkt_delay = cpu_to_le16(qdev->tx_max_coalesced_frames); } err = qlge_write_cfg(qdev, cqicb, sizeof(struct cqicb), - CFG_LCQ, rx_ring->cq_id); + CFG_LCQ, cq->cq_id); if (err) { netif_err(qdev, ifup, qdev->ndev, "Failed to load CQICB.\n"); return err; @@ -3195,20 +3211,20 @@ static void qlge_set_tx_vect(struct qlge_adapter *qdev) if (likely(test_bit(QL_MSIX_ENABLED, &qdev->flags))) { /* Assign irq vectors to TX rx_rings.*/ for (vect = 0, j = 0, i = qdev->rss_ring_count; - i < qdev->rx_ring_count; i++) { + i < qdev->cq_count; i++) { if (j == tx_rings_per_vector) { vect++; j = 0; } - qdev->rx_ring[i].irq = vect; + qdev->cq[i].irq = vect; j++; } } else { /* For single vector all rings have an irq * of zero. */ - for (i = 0; i < qdev->rx_ring_count; i++) - qdev->rx_ring[i].irq = 0; + for (i = 0; i < qdev->cq_count; i++) + qdev->cq[i].irq = 0; } } @@ -3226,21 +3242,21 @@ static void qlge_set_irq_mask(struct qlge_adapter *qdev, struct intr_context *ct /* Add the RSS ring serviced by this vector * to the mask. */ - ctx->irq_mask = (1 << qdev->rx_ring[vect].cq_id); + ctx->irq_mask = (1 << qdev->cq[vect].cq_id); /* Add the TX ring(s) serviced by this vector * to the mask. */ for (j = 0; j < tx_rings_per_vector; j++) { ctx->irq_mask |= - (1 << qdev->rx_ring[qdev->rss_ring_count + + (1 << qdev->cq[qdev->rss_ring_count + (vect * tx_rings_per_vector) + j].cq_id); } } else { /* For single vector we just shift each queue's * ID into the mask. */ - for (j = 0; j < qdev->rx_ring_count; j++) - ctx->irq_mask |= (1 << qdev->rx_ring[j].cq_id); + for (j = 0; j < qdev->cq_count; j++) + ctx->irq_mask |= (1 << qdev->cq[j].cq_id); } } @@ -3261,7 +3277,7 @@ static void qlge_resolve_queues_to_irqs(struct qlge_adapter *qdev) * vectors for each queue. 
*/ for (i = 0; i < qdev->intr_count; i++, intr_context++) { - qdev->rx_ring[i].irq = i; + qdev->cq[i].irq = i; intr_context->intr = i; intr_context->qdev = qdev; /* Set up this vector's bit-mask that indicates @@ -3357,9 +3373,9 @@ static void qlge_free_irq(struct qlge_adapter *qdev) if (intr_context->hooked) { if (test_bit(QL_MSIX_ENABLED, &qdev->flags)) { free_irq(qdev->msi_x_entry[i].vector, - &qdev->rx_ring[i]); + &qdev->cq[i]); } else { - free_irq(qdev->pdev->irq, &qdev->rx_ring[0]); + free_irq(qdev->pdev->irq, &qdev->cq[0]); } } } @@ -3381,7 +3397,7 @@ static int qlge_request_irq(struct qlge_adapter *qdev) intr_context->handler, 0, intr_context->name, - &qdev->rx_ring[i]); + &qdev->cq[i]); if (status) { netif_err(qdev, ifup, qdev->ndev, "Failed request for MSIX interrupt %d.\n", @@ -3398,13 +3414,13 @@ static int qlge_request_irq(struct qlge_adapter *qdev) intr_context->name); netif_printk(qdev, ifup, KERN_DEBUG, qdev->ndev, "%s: dev_id = 0x%p.\n", __func__, - &qdev->rx_ring[0]); + &qdev->cq[0]); status = request_irq(pdev->irq, qlge_isr, test_bit(QL_MSI_ENABLED, &qdev->flags) ? 0 : IRQF_SHARED, - intr_context->name, &qdev->rx_ring[0]); + intr_context->name, &qdev->cq[0]); if (status) goto err_irq; @@ -3620,8 +3636,8 @@ static int qlge_adapter_initialize(struct qlge_adapter *qdev) qdev->wol = WAKE_MAGIC; /* Start up the rx queues. 
*/ - for (i = 0; i < qdev->rx_ring_count; i++) { - status = qlge_start_rx_ring(qdev, &qdev->rx_ring[i]); + for (i = 0; i < qdev->cq_count; i++) { + status = qlge_start_cq(qdev, &qdev->cq[i]); if (status) { netif_err(qdev, ifup, qdev->ndev, "Failed to start rx ring[%d].\n", i); @@ -3916,10 +3932,11 @@ static void qlge_set_lb_size(struct qlge_adapter *qdev) static int qlge_configure_rings(struct qlge_adapter *qdev) { - int i; + int cpu_cnt = min_t(int, MAX_CPUS, num_online_cpus()); struct qlge_rx_ring *rx_ring; struct qlge_tx_ring *tx_ring; - int cpu_cnt = min_t(int, MAX_CPUS, num_online_cpus()); + struct qlge_cq *cq; + int i; /* In a perfect world we have one RSS ring for each CPU * and each has it's own vector. To do that we ask for @@ -3933,7 +3950,7 @@ static int qlge_configure_rings(struct qlge_adapter *qdev) /* Adjust the RSS ring count to the actual vector count. */ qdev->rss_ring_count = qdev->intr_count; qdev->tx_ring_count = cpu_cnt; - qdev->rx_ring_count = qdev->tx_ring_count + qdev->rss_ring_count; + qdev->cq_count = qdev->tx_ring_count + qdev->rss_ring_count; for (i = 0; i < qdev->tx_ring_count; i++) { tx_ring = &qdev->tx_ring[i]; @@ -3951,31 +3968,35 @@ static int qlge_configure_rings(struct qlge_adapter *qdev) tx_ring->cq_id = qdev->rss_ring_count + i; } - for (i = 0; i < qdev->rx_ring_count; i++) { - rx_ring = &qdev->rx_ring[i]; - memset((void *)rx_ring, 0, sizeof(*rx_ring)); - rx_ring->qdev = qdev; - rx_ring->cq_id = i; - rx_ring->cpu = i % cpu_cnt; /* CPU to run handler on. */ + for (i = 0; i < qdev->cq_count; i++) { + cq = &qdev->cq[i]; + memset((void *)cq, 0, sizeof(*cq)); + cq->qdev = qdev; + cq->cq_id = i; + cq->cpu = i % cpu_cnt; /* CPU to run handler on. */ if (i < qdev->rss_ring_count) { /* * Inbound (RSS) queues. 
*/ - rx_ring->cq_len = qdev->rx_ring_size; - rx_ring->cq_size = - rx_ring->cq_len * sizeof(struct qlge_net_rsp_iocb); + cq->cq_len = qdev->rx_ring_size; + cq->cq_size = + cq->cq_len * sizeof(struct qlge_net_rsp_iocb); + rx_ring = &qdev->rx_ring[i]; + memset((void *)rx_ring, 0, sizeof(*rx_ring)); rx_ring->lbq.type = QLGE_LB; + rx_ring->cq_id = i; rx_ring->sbq.type = QLGE_SB; INIT_DELAYED_WORK(&rx_ring->refill_work, &qlge_slow_refill); + rx_ring->qdev = qdev; } else { /* * Outbound queue handles outbound completions only. */ /* outbound cq is same size as tx_ring it services. */ - rx_ring->cq_len = qdev->tx_ring_size; - rx_ring->cq_size = - rx_ring->cq_len * sizeof(struct qlge_net_rsp_iocb); + cq->cq_len = qdev->tx_ring_size; + cq->cq_size = + cq->cq_len * sizeof(struct qlge_net_rsp_iocb); } } return 0; @@ -4648,9 +4669,9 @@ netdev_tx_t qlge_lb_send(struct sk_buff *skb, struct net_device *ndev) return qlge_send(skb, ndev); } -int qlge_clean_lb_rx_ring(struct qlge_rx_ring *rx_ring, int budget) +int qlge_clean_lb_cq(struct qlge_cq *cq, int budget) { - return qlge_clean_inbound_rx_ring(rx_ring, budget); + return qlge_clean_inbound_cq(cq, budget); } static void qlge_remove(struct pci_dev *pdev)

From patchwork Mon Jun 21 13:48:49 2021
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra, GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER), Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 06/19] staging: qlge: disable flow control by default
Date: Mon, 21 Jun 2021 21:48:49 +0800
Message-Id: <20210621134902.83587-7-coiby.xu@gmail.com>
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>

According to the TODO item,

> * the flow control implementation in firmware is buggy (sends a flood of pause
>   frames, resets the link, device and driver buffer queues become
>   desynchronized), disable it by default

Currently, qlge_mpi_port_cfg_work calls qlge_mb_get_port_cfg, which gets the link config from the firmware and saves it to qdev->link_config. By default, flow control is enabled. This commit saves the pause parameter of qdev->link_config and does not let it be overwritten by the link settings of the current port.
Since qdev->link_config=0 when qdev is initialized, this disables flow control by default, and the pause parameter value also survives an MPI reset,

$ ethtool -a enp94s0f0
Pause parameters for enp94s0f0:
Autonegotiate: off
RX: off
TX: off

Flow control can still be enabled manually,

$ ethtool -A enp94s0f0 rx on tx on
$ ethtool -a enp94s0f0
Pause parameters for enp94s0f0:
Autonegotiate: off
RX: on
TX: on

Signed-off-by: Coiby Xu
---
drivers/staging/qlge/TODO | 3 ---
drivers/staging/qlge/qlge_mpi.c | 11 ++++++++++-
2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO index b7a60425fcd2..8c84160b5993 100644 --- a/drivers/staging/qlge/TODO +++ b/drivers/staging/qlge/TODO @@ -4,9 +4,6 @@ ql_build_rx_skb(). That function is now used exclusively to handle packets that underwent header splitting but it still contains code to handle non split cases. -* the flow control implementation in firmware is buggy (sends a flood of pause - frames, resets the link, device and driver buffer queues become - desynchronized), disable it by default * some structures are initialized redundantly (ex.
memset 0 after alloc_etherdev()) * the driver has a habit of using runtime checks where compile time checks are diff --git a/drivers/staging/qlge/qlge_mpi.c b/drivers/staging/qlge/qlge_mpi.c index 2630ebf50341..0f1c7da80413 100644 --- a/drivers/staging/qlge/qlge_mpi.c +++ b/drivers/staging/qlge/qlge_mpi.c @@ -806,6 +806,7 @@ int qlge_mb_get_port_cfg(struct qlge_adapter *qdev) { struct mbox_params mbc; struct mbox_params *mbcp = &mbc; + u32 saved_pause_link_config = 0; int status = 0; memset(mbcp, 0, sizeof(struct mbox_params)); @@ -826,7 +827,15 @@ int qlge_mb_get_port_cfg(struct qlge_adapter *qdev) } else { netif_printk(qdev, drv, KERN_DEBUG, qdev->ndev, "Passed Get Port Configuration.\n"); - qdev->link_config = mbcp->mbox_out[1]; + /* + * Don't let the pause parameter be overwritten by the + * current port's link settings. In this way, flow control + * can be disabled by default and the setting can also + * survive an MPI reset. + */ + saved_pause_link_config = qdev->link_config & CFG_PAUSE_STD; + qdev->link_config = ~CFG_PAUSE_STD & mbcp->mbox_out[1]; + qdev->link_config |= saved_pause_link_config; qdev->max_frame_size = mbcp->mbox_out[2]; } return status;

From patchwork Mon Jun 21 13:48:50 2021
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra, GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER), Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 07/19] staging: qlge: remove the TODO item of unnecessary memset 0
Date: Mon, 21 Jun 2021 21:48:50 +0800
Message-Id: <20210621134902.83587-8-coiby.xu@gmail.com>
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>

commit 953b94009377419f28fd0153f91fcd5b5a347608 ("staging: qlge: Initialize devlink health dump framework") removed the unnecessary memset 0 after alloc_etherdev_mq. Delete this TODO item.

Signed-off-by: Coiby Xu
---
drivers/staging/qlge/TODO | 2 --
1 file changed, 2 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO index 8c84160b5993..cc5f8cf7608d 100644 --- a/drivers/staging/qlge/TODO +++ b/drivers/staging/qlge/TODO @@ -4,8 +4,6 @@ ql_build_rx_skb(). That function is now used exclusively to handle packets that underwent header splitting but it still contains code to handle non split cases. -* some structures are initialized redundantly (ex. memset 0 after - alloc_etherdev()) * the driver has a habit of using runtime checks where compile time checks are possible (ex.
ql_free_rx_buffers(), ql_alloc_rx_buffers()) * reorder struct members to avoid holes if it doesn't impact performance

From patchwork Mon Jun 21 13:48:51 2021
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra, GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER), Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 08/19] staging: qlge: reorder members of qlge_adapter for optimization
Date: Mon, 21 Jun 2021 21:48:51 +0800
Message-Id: <20210621134902.83587-9-coiby.xu@gmail.com>
In-Reply-To:
<20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>

Before reordering, pahole shows,

/* size: 21168, cachelines: 331, members: 69 */
/* sum members: 21144, holes: 4, sum holes: 18 */
/* padding: 6 */
/* paddings: 6, sum paddings: 24 */
/* forced alignments: 1 */
/* last cacheline: 48 bytes */

After reordering following pahole's suggestion,

/* size: 21152, cachelines: 331, members: 69 */
/* sum members: 21144, holes: 1, sum holes: 2 */
/* padding: 6 */
/* paddings: 6, sum paddings: 24 */
/* forced alignments: 1 */
/* last cacheline: 32 bytes */

Signed-off-by: Coiby Xu
---
drivers/staging/qlge/qlge.h | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h index 926af25b14fa..9177baa9f022 100644 --- a/drivers/staging/qlge/qlge.h +++ b/drivers/staging/qlge/qlge.h @@ -2081,8 +2081,8 @@ struct qlge_adapter *netdev_to_qdev(struct net_device *ndev) */ struct qlge_adapter { struct ricb ricb; - unsigned long flags; u32 wol; + unsigned long flags; struct nic_stats nic_stats; @@ -2103,6 +2103,8 @@ struct qlge_adapter { spinlock_t adapter_lock; spinlock_t stats_lock; + u32 intr_count; + /* PCI Bus Relative Register Addresses */ void __iomem *reg_base; void __iomem *doorbell_area; @@ -2123,7 +2125,6 @@ struct qlge_adapter { int tx_ring_size; int rx_ring_size; - u32 intr_count; struct msix_entry *msi_x_entry; struct intr_context intr_context[MAX_RX_RINGS]; @@ -2162,6 +2163,7 @@ struct qlge_adapter { u32 max_frame_size; union flash_params flash; + u16 device_id; struct workqueue_struct *workqueue; struct delayed_work asic_reset_work; @@ -2171,7 +2173,6 @@ struct qlge_adapter { struct delayed_work mpi_idc_work; struct completion ide_completion; const struct nic_operations *nic_ops; - u16 device_id; struct timer_list timer; atomic_t lb_count; /* Keep local copy of current mac
address. */ From patchwork Mon Jun 21 13:48:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Coiby Xu X-Patchwork-Id: 464892 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 863E6C4743C for ; Mon, 21 Jun 2021 13:50:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6F37F6120D for ; Mon, 21 Jun 2021 13:50:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230379AbhFUNwy (ORCPT ); Mon, 21 Jun 2021 09:52:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49560 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230075AbhFUNwx (ORCPT ); Mon, 21 Jun 2021 09:52:53 -0400 Received: from mail-pf1-x433.google.com (mail-pf1-x433.google.com [IPv6:2607:f8b0:4864:20::433]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E55D0C061574; Mon, 21 Jun 2021 06:50:38 -0700 (PDT) Received: by mail-pf1-x433.google.com with SMTP id t32so2342770pfg.2; Mon, 21 Jun 2021 06:50:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=dsT/DFkU4ugOnwiedTB/H/+aCFKE/gHvtAJo8gJ/4xU=; b=K3dPJFNfKnJZ9ImpJZQhUeZnOL6fNuVlMCtBJDlCFZ5UoW/3i6WdJeQdLzp11OK2P6 5xHgQK/vQCIClkykiMXyXqCP3sESForvaOgw/drcxGFmXvrWVfXlclE7wHDoD9lhwqod 
w+0qART18gMaYAiYQ8NQHhVLOmPkhe20a3w0/MhVwqXKigSpR97C1Qfc2mD8YBLt8jPR 6w7E4FOrt7D1gQ9kEXeBKCw0/Ah+kufyCDyaVkap4RPcE9YJHvYWknRsVIN67rwtDs4X kFxnCIiLOlYHOCWLLmgA21LMpHLphQ0zoOI2aECsqUJ3ViF4uXg7b5aMbdjr4arqZxw9 bclw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=dsT/DFkU4ugOnwiedTB/H/+aCFKE/gHvtAJo8gJ/4xU=; b=mc92tw8FX12Ut9C1X09CEJu2a/AbpKViq8pDn2Izv1iEYIiUKUAT2W9OpHkN+Owfuk 3RXRHllSO9CfXac1y6vaWlDEOdmIkqL/nZisxrbg+1cQxAh6ZU2WhDokFeN0kFG9G9QZ nIwDXT+NogcuExu7vhVTdJaymSGdpF5yQOjReDhqQBH8TKK/ApbDUEa7ho/MnGE8ry7c o8DAar2Z/RMyZXh1NiJMyP/sSKsa/+VfWynHXnt/Qyh+8Zk6LZet2nBIS8J0rpAezANP d0ZfouM0NlS2KjrgBLu1LL8fECQmdFShw8TLdz+bfl1tGY/Y9BPkRcPnw2In/CmdvXXx 7Uww== X-Gm-Message-State: AOAM531RxrG6y5V/NCvSMaPztK7FT13Msv8ZqhaLtAiQdCgNCH/4nQAO t6ZsDRtlLD6l3fUXAV73XJg= X-Google-Smtp-Source: ABdhPJw6Tgz/lB1hCgsHP0sq40v5fxcqZ5xkjpB/b36/FOB5gK+DiNaLs7bScNm/q8BVczc7ZhZF2A== X-Received: by 2002:a62:f947:0:b029:2e9:c502:7939 with SMTP id g7-20020a62f9470000b02902e9c5027939mr19696390pfm.34.1624283438541; Mon, 21 Jun 2021 06:50:38 -0700 (PDT) Received: from localhost ([209.132.188.80]) by smtp.gmail.com with ESMTPSA id f13sm1147160pfe.149.2021.06.21.06.50.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 21 Jun 2021 06:50:38 -0700 (PDT) From: Coiby Xu To: linux-staging@lists.linux.dev Cc: netdev@vger.kernel.org, Benjamin Poirier , Shung-Hsi Yu , Manish Chopra , GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER), Greg Kroah-Hartman , linux-kernel@vger.kernel.org (open list) Subject: [RFC 09/19] staging: qlge: remove the TODO item of reorder struct Date: Mon, 21 Jun 2021 21:48:52 +0800 Message-Id: <20210621134902.83587-10-coiby.xu@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com> References: <20210621134902.83587-1-coiby.xu@gmail.com> 
MIME-Version: 1.0
Precedence: bulk
List-ID: X-Mailing-List: netdev@vger.kernel.org

struct qlge_cq has one hole, but for the sake of readability (irq is one of
the misc. handler elements, so don't move it from after 'irq' to after
'cnsmr_idx'), keep it untouched. With that, no struct needs reordering
according to pahole.

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/TODO | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO
index cc5f8cf7608d..2c4cc586a4bf 100644
--- a/drivers/staging/qlge/TODO
+++ b/drivers/staging/qlge/TODO
@@ -6,7 +6,6 @@
 split cases.
 * the driver has a habit of using runtime checks where compile time checks are
   possible (ex. ql_free_rx_buffers(), ql_alloc_rx_buffers())
-* reorder struct members to avoid holes if it doesn't impact performance
 * avoid legacy/deprecated apis (ex. replace pci_dma_*, replace pci_enable_msi,
   use pci_iomap)
 * some "while" loops could be rewritten with simple "for", ex.

From patchwork Mon Jun 21 13:48:53 2021
X-Patchwork-Submitter: Coiby Xu
X-Patchwork-Id: 465631
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 10/19] staging: qlge: remove the TODO item of avoid legacy/deprecated apis
Date: Mon, 21 Jun 2021 21:48:53 +0800
Message-Id: <20210621134902.83587-11-coiby.xu@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>
MIME-Version: 1.0
Precedence: bulk
List-ID: X-Mailing-List: netdev@vger.kernel.org

The following commits have finished the job,

- commit e955a071b9b3e6b634b7ceda64025bfbd6529dcc ("staging: qlge: replace
  deprecated apis pci_dma_*")
- commit 50b483a1457abd6fe27117f0507297e107ef42b2 ("qlge: Use
  pci_enable_msix_range() instead of pci_enable_msix()")

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/TODO | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO
index 2c4cc586a4bf..8bb6779a5bb4 100644
--- a/drivers/staging/qlge/TODO
+++ b/drivers/staging/qlge/TODO
@@ -6,8 +6,6 @@
 split cases.
 * the driver has a habit of using runtime checks where compile time checks are
   possible (ex. ql_free_rx_buffers(), ql_alloc_rx_buffers())
-* avoid legacy/deprecated apis (ex. replace pci_dma_*, replace pci_enable_msi,
-  use pci_iomap)
 * some "while" loops could be rewritten with simple "for", ex.
   ql_wait_reg_rdy(), ql_start_rx_ring())
 * remove duplicate and useless comments

From patchwork Mon Jun 21 13:48:54 2021
X-Patchwork-Submitter: Coiby Xu
X-Patchwork-Id: 464891
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 11/19] staging: qlge: the number of pages to contain a buffer queue is constant
Date: Mon, 21 Jun 2021 21:48:54 +0800
Message-Id: <20210621134902.83587-12-coiby.xu@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>
MIME-Version: 1.0
Precedence: bulk
List-ID: X-Mailing-List: netdev@vger.kernel.org

This patch is extended work of commit ec705b983b46b8e2d3cafd40c188458bf4241f11
("staging: qlge: Remove qlge_bq.len & size"). Since the same len is used for
both sbq (small buffer queue) and lbq (large buffer queue), the number of
pages to contain a buffer queue is also known at compile time.

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/qlge.h      | 13 ++++++-------
 drivers/staging/qlge/qlge_main.c |  8 ++++----
 2 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index 9177baa9f022..32755b0e2fb7 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -42,16 +42,15 @@
 
 #define DB_PAGE_SIZE 4096
 
-/* Calculate the number of (4k) pages required to
- * contain a buffer queue of the given length.
+/*
+ * The number of (4k) pages required to contain a buffer queue.
  */
-#define MAX_DB_PAGES_PER_BQ(x) \
-	(((x * sizeof(u64)) / DB_PAGE_SIZE) + \
-	 (((x * sizeof(u64)) % DB_PAGE_SIZE) ? 1 : 0))
+#define MAX_DB_PAGES_PER_BQ \
+	(((QLGE_BQ_LEN * sizeof(u64)) / DB_PAGE_SIZE) + \
+	 (((QLGE_BQ_LEN * sizeof(u64)) % DB_PAGE_SIZE) ? 1 : 0))
 
 #define RX_RING_SHADOW_SPACE (sizeof(u64) + \
-		MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN) * sizeof(u64) + \
-		MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN) * sizeof(u64))
+		MAX_DB_PAGES_PER_BQ * sizeof(u64) * 2)
 
 #define LARGE_BUFFER_MAX_SIZE 4096
 #define LARGE_BUFFER_MIN_SIZE 2048

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 94853b182608..7aee9e904097 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -3015,8 +3015,8 @@ static int qlge_start_cq(struct qlge_adapter *qdev, struct qlge_cq *cq)
 	shadow_reg_dma += sizeof(u64);
 	rx_ring->lbq.base_indirect = shadow_reg;
 	rx_ring->lbq.base_indirect_dma = shadow_reg_dma;
-	shadow_reg += (sizeof(u64) * MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN));
-	shadow_reg_dma += (sizeof(u64) * MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN));
+	shadow_reg += (sizeof(u64) * MAX_DB_PAGES_PER_BQ);
+	shadow_reg_dma += (sizeof(u64) * MAX_DB_PAGES_PER_BQ);
 	rx_ring->sbq.base_indirect = shadow_reg;
 	rx_ring->sbq.base_indirect_dma = shadow_reg_dma;
 	/* PCI doorbell mem area + 0x18 for large buffer consumer */
@@ -3034,7 +3034,7 @@ static int qlge_start_cq(struct qlge_adapter *qdev, struct qlge_cq *cq)
 		tmp += DB_PAGE_SIZE;
 		base_indirect_ptr++;
 		page_entries++;
-	} while (page_entries < MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN));
+	} while (page_entries < MAX_DB_PAGES_PER_BQ);
 	cqicb->lbq_addr = cpu_to_le64(rx_ring->lbq.base_indirect_dma);
 	cqicb->lbq_buf_size = cpu_to_le16(QLGE_FIT16(qdev->lbq_buf_size));
@@ -3051,7 +3051,7 @@ static int qlge_start_cq(struct qlge_adapter *qdev, struct qlge_cq *cq)
 		tmp += DB_PAGE_SIZE;
 		base_indirect_ptr++;
 		page_entries++;
-	} while (page_entries < MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN));
+	} while (page_entries < MAX_DB_PAGES_PER_BQ);
 	cqicb->sbq_addr = cpu_to_le64(rx_ring->sbq.base_indirect_dma);
 	cqicb->sbq_buf_size = cpu_to_le16(QLGE_SMALL_BUFFER_SIZE);
 	cqicb->sbq_len = cpu_to_le16(QLGE_FIT16(QLGE_BQ_LEN));

From patchwork Mon Jun 21 13:48:55 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
X-Patchwork-Submitter: Coiby Xu
X-Patchwork-Id: 465630
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 12/19] staging: qlge: rewrite do while loops as for loops in qlge_start_rx_ring
Date: Mon, 21 Jun 2021 21:48:55 +0800
Message-Id: <20210621134902.83587-13-coiby.xu@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org

Since MAX_DB_PAGES_PER_BQ > 0, the for loop is equivalent to the do-while
loop.

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/qlge_main.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 7aee9e904097..c5e161595b1f 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -3029,12 +3029,11 @@ static int qlge_start_cq(struct qlge_adapter *qdev, struct qlge_cq *cq)
 	tmp = (u64)rx_ring->lbq.base_dma;
 	base_indirect_ptr = rx_ring->lbq.base_indirect;
 	page_entries = 0;
-	do {
+	for (page_entries = 0; page_entries < MAX_DB_PAGES_PER_BQ; page_entries++) {
 		*base_indirect_ptr = cpu_to_le64(tmp);
 		tmp += DB_PAGE_SIZE;
 		base_indirect_ptr++;
-		page_entries++;
-	} while (page_entries < MAX_DB_PAGES_PER_BQ);
+	}
 
 	cqicb->lbq_addr = cpu_to_le64(rx_ring->lbq.base_indirect_dma);
 	cqicb->lbq_buf_size = cpu_to_le16(QLGE_FIT16(qdev->lbq_buf_size));
@@ -3046,12 +3045,11 @@ static int qlge_start_cq(struct qlge_adapter *qdev, struct qlge_cq *cq)
 	tmp = (u64)rx_ring->sbq.base_dma;
 	base_indirect_ptr = rx_ring->sbq.base_indirect;
 	page_entries = 0;
-	do {
+	for (page_entries = 0; page_entries < MAX_DB_PAGES_PER_BQ; page_entries++) {
 		*base_indirect_ptr = cpu_to_le64(tmp);
 		tmp += DB_PAGE_SIZE;
 		base_indirect_ptr++;
-		page_entries++;
-	} while (page_entries < MAX_DB_PAGES_PER_BQ);
+	}
 	cqicb->sbq_addr = cpu_to_le64(rx_ring->sbq.base_indirect_dma);
 	cqicb->sbq_buf_size = cpu_to_le16(QLGE_SMALL_BUFFER_SIZE);
 	cqicb->sbq_len = cpu_to_le16(QLGE_FIT16(QLGE_BQ_LEN));

From patchwork Mon Jun 21 13:48:56 2021
X-Patchwork-Submitter: Coiby Xu
X-Patchwork-Id: 464890
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 13/19] staging: qlge: rewrite do while loop as for loop in qlge_sem_spinlock
Date: Mon, 21 Jun 2021 21:48:56 +0800
Message-Id: <20210621134902.83587-14-coiby.xu@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>

Since wait_count = 30 > 0, the for loop is equivalent to the do-while loop.
This commit also replaces 100 with UDELAY_DELAY.
Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/qlge_main.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index c5e161595b1f..2d2405be38f5 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -140,12 +140,13 @@ static int qlge_sem_trylock(struct qlge_adapter *qdev, u32 sem_mask)
 int qlge_sem_spinlock(struct qlge_adapter *qdev, u32 sem_mask)
 {
 	unsigned int wait_count = 30;
+	int count;
 
-	do {
+	for (count = 0; count < wait_count; count++) {
 		if (!qlge_sem_trylock(qdev, sem_mask))
 			return 0;
-		udelay(100);
-	} while (--wait_count);
+		udelay(UDELAY_DELAY);
+	}
 	return -ETIMEDOUT;
 }

From patchwork Mon Jun 21 13:48:57 2021
X-Patchwork-Submitter: Coiby Xu
X-Patchwork-Id: 465629
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 14/19] staging: qlge: rewrite do while loop as for loop in qlge_refill_bq
Date: Mon, 21 Jun 2021 21:48:57 +0800
Message-Id: <20210621134902.83587-15-coiby.xu@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>

Since refill_count > 0, the for loop is equivalent to the do-while loop.

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/qlge_main.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 2d2405be38f5..904dba7aaee5 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -1092,6 +1092,7 @@ static int qlge_refill_bq(struct qlge_bq *bq, gfp_t gfp)
 	struct qlge_bq_desc *bq_desc;
 	int refill_count;
 	int retval;
+	int count;
 	int i;
 
 	refill_count = QLGE_BQ_WRAP(QLGE_BQ_ALIGN(bq->next_to_clean - 1) -
@@ -1102,7 +1103,7 @@ static int qlge_refill_bq(struct qlge_bq *bq, gfp_t gfp)
 	i = bq->next_to_use;
 	bq_desc = &bq->queue[i];
 	i -= QLGE_BQ_LEN;
-	do {
+	for (count = 0; count < refill_count; count++) {
 		netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
 			     "ring %u %s: try cleaning idx %d\n",
 			     rx_ring->cq_id, bq_type_name[bq->type], i);
@@ -1124,8 +1125,7 @@ static int qlge_refill_bq(struct qlge_bq *bq, gfp_t gfp)
 			bq_desc = &bq->queue[0];
 			i -= QLGE_BQ_LEN;
 		}
-		refill_count--;
-	} while (refill_count);
+	}
 
 	i += QLGE_BQ_LEN;
 	if (bq->next_to_use != i) {

From patchwork Mon Jun 21 13:48:58 2021
Content-Type: text/plain;
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Coiby Xu X-Patchwork-Id: 464889 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5DD6C4743C for ; Mon, 21 Jun 2021 13:51:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C72BA6120D for ; Mon, 21 Jun 2021 13:51:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230486AbhFUNyD (ORCPT ); Mon, 21 Jun 2021 09:54:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49884 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230498AbhFUNx4 (ORCPT ); Mon, 21 Jun 2021 09:53:56 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 04D05C061767; Mon, 21 Jun 2021 06:51:38 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id x21-20020a17090aa395b029016e25313bfcso22913pjp.2; Mon, 21 Jun 2021 06:51:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Oat56oXy+eq9YFjUfHTUBj0ELGqmUbrbNMdQSDvWOZY=; b=ORC1bRd2LED7D1ipCRVRCwk3nkwGhHzL4i9UqeoHbMer6DdHt2KPkye2SqXcJtIRJ3 oVdK+u3KThOvxPQL/3F18SegSC79699zmm+A8L1iv3Sa9BiQZ1ehxXQyF2JEnSze2fFC 
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 15/19] staging: qlge: remove the TODO item about rewriting while loops as simple for loops
Date: Mon, 21 Jun 2021 21:48:58 +0800
Message-Id: <20210621134902.83587-16-coiby.xu@gmail.com>
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References:
<20210621134902.83587-1-coiby.xu@gmail.com>

Since all while loops that could be written as simple for loops have been
converted, remove the TODO item.

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/TODO | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO
index 8bb6779a5bb4..4575f35114bf 100644
--- a/drivers/staging/qlge/TODO
+++ b/drivers/staging/qlge/TODO
@@ -6,8 +6,6 @@
   split cases.
 * the driver has a habit of using runtime checks where compile time checks are
   possible (ex. ql_free_rx_buffers(), ql_alloc_rx_buffers())
-* some "while" loops could be rewritten with simple "for", ex.
-  ql_wait_reg_rdy(), ql_start_rx_ring())
 * remove duplicate and useless comments
 * fix weird line wrapping (all over, ex. the ql_set_routing_reg() calls in
   qlge_set_multicast_list()).

From patchwork Mon Jun 21 13:48:59 2021
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 16/19] staging: qlge: remove deadcode in qlge_build_rx_skb
Date: Mon, 21 Jun 2021 21:48:59 +0800
Message-Id: <20210621134902.83587-17-coiby.xu@gmail.com>
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>

This part of the code handles the case where "the headers and data are in
a single large buffer". However, qlge_process_mac_split_rx_intr() only
handles packets that underwent header splitting. In practice, with jumbo
frames enabled, this code could not be reached regardless of packet size
when pinging the NIC.

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/TODO        |  6 ---
 drivers/staging/qlge/qlge_main.c | 66 ++++++++------------------------
 2 files changed, 17 insertions(+), 55 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO
index 4575f35114bf..0f96186ed77c 100644
--- a/drivers/staging/qlge/TODO
+++ b/drivers/staging/qlge/TODO
@@ -1,9 +1,3 @@
-* commit 7c734359d350 ("qlge: Size RX buffers based on MTU.", v2.6.33-rc1)
-  introduced dead code in the receive routines, which should be rewritten
-  anyways by the admission of the author himself, see the comment above
-  ql_build_rx_skb(). That function is now used exclusively to handle packets
-  that underwent header splitting but it still contains code to handle non
-  split cases.
 * the driver has a habit of using runtime checks where compile time checks are
   possible (ex. ql_free_rx_buffers(), ql_alloc_rx_buffers())
 * remove duplicate and useless comments
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 904dba7aaee5..e560006225ca 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -1741,55 +1741,23 @@ static struct sk_buff *qlge_build_rx_skb(struct qlge_adapter *qdev,
 			sbq_desc->p.skb = NULL;
 		}
 	} else if (ib_mac_rsp->flags3 & IB_MAC_IOCB_RSP_DL) {
-		if (ib_mac_rsp->flags4 & IB_MAC_IOCB_RSP_HS) {
-			netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
-				     "Header in small, %d bytes in large. Chain large to small!\n",
-				     length);
-			/*
-			 * The data is in a single large buffer. We
-			 * chain it to the header buffer's skb and let
-			 * it rip.
-			 */
-			lbq_desc = qlge_get_curr_lchunk(qdev, rx_ring);
-			netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
-				     "Chaining page at offset = %d, for %d bytes to skb.\n",
-				     lbq_desc->p.pg_chunk.offset, length);
-			skb_fill_page_desc(skb, 0, lbq_desc->p.pg_chunk.page,
-					   lbq_desc->p.pg_chunk.offset, length);
-			skb->len += length;
-			skb->data_len += length;
-			skb->truesize += qdev->lbq_buf_size;
-		} else {
-			/*
-			 * The headers and data are in a single large buffer. We
-			 * copy it to a new skb and let it go. This can happen with
-			 * jumbo mtu on a non-TCP/UDP frame.
-			 */
-			lbq_desc = qlge_get_curr_lchunk(qdev, rx_ring);
-			skb = napi_alloc_skb(&rx_ring->napi, QLGE_SMALL_BUFFER_SIZE);
-			if (!skb) {
-				netif_printk(qdev, probe, KERN_DEBUG, qdev->ndev,
-					     "No skb available, drop the packet.\n");
-				return NULL;
-			}
-			dma_unmap_page(&qdev->pdev->dev, lbq_desc->dma_addr,
-				       qdev->lbq_buf_size,
-				       DMA_FROM_DEVICE);
-			skb_reserve(skb, NET_IP_ALIGN);
-			netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
-				     "%d bytes of headers and data in large. Chain page to new skb and pull tail.\n",
-				     length);
-			skb_fill_page_desc(skb, 0, lbq_desc->p.pg_chunk.page,
-					   lbq_desc->p.pg_chunk.offset,
-					   length);
-			skb->len += length;
-			skb->data_len += length;
-			skb->truesize += qdev->lbq_buf_size;
-			qlge_update_mac_hdr_len(qdev, ib_mac_rsp,
-						lbq_desc->p.pg_chunk.va,
-						&hlen);
-			__pskb_pull_tail(skb, hlen);
-		}
+		netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
+			     "Header in small, %d bytes in large. Chain large to small!\n",
+			     length);
+		/*
+		 * The data is in a single large buffer. We
+		 * chain it to the header buffer's skb and let
+		 * it rip.
+		 */
+		lbq_desc = qlge_get_curr_lchunk(qdev, rx_ring);
+		netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
+			     "Chaining page at offset = %d, for %d bytes to skb.\n",
+			     lbq_desc->p.pg_chunk.offset, length);
+		skb_fill_page_desc(skb, 0, lbq_desc->p.pg_chunk.page,
+				   lbq_desc->p.pg_chunk.offset, length);
+		skb->len += length;
+		skb->data_len += length;
+		skb->truesize += qdev->lbq_buf_size;
 	} else {
 		/*
 		 * The data is in a chain of large buffers

From patchwork Mon Jun 21 13:49:00 2021
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, Nathan Chancellor, Nick Desaulniers,
    linux-kernel@vger.kernel.org (open list),
    clang-built-linux@googlegroups.com (open list:CLANG/LLVM BUILD SUPPORT)
Subject: [RFC 17/19] staging: qlge: fix weird line wrapping
Date: Mon, 21 Jun 2021 21:49:00 +0800
Message-Id: <20210621134902.83587-18-coiby.xu@gmail.com>
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>

This commit fixes weird line wrapping based on
"clang-format drivers/staging/qlge/qlge_main.c".

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/TODO        |   2 -
 drivers/staging/qlge/qlge_main.c | 106 +++++++++++++++----------------
 2 files changed, 52 insertions(+), 56 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO
index 0f96186ed77c..b8def0c70614 100644
--- a/drivers/staging/qlge/TODO
+++ b/drivers/staging/qlge/TODO
@@ -1,7 +1,5 @@
 * the driver has a habit of using runtime checks where compile time checks are
   possible (ex. ql_free_rx_buffers(), ql_alloc_rx_buffers())
 * remove duplicate and useless comments
-* fix weird line wrapping (all over, ex. the ql_set_routing_reg() calls in
-  qlge_set_multicast_list()).
 * fix weird indentation (all over, ex. the for loops in qlge_get_stats())
 * fix checkpatch issues
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index e560006225ca..21fb942c2595 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -442,8 +442,7 @@ static int qlge_set_mac_addr(struct qlge_adapter *qdev, int set)
 		status = qlge_sem_spinlock(qdev, SEM_MAC_ADDR_MASK);
 		if (status)
 			return status;
-		status = qlge_set_mac_addr_reg(qdev, (u8 *)addr,
-					       MAC_ADDR_TYPE_CAM_MAC,
+		status = qlge_set_mac_addr_reg(qdev, (u8 *)addr, MAC_ADDR_TYPE_CAM_MAC,
 					       qdev->func * MAX_CQ);
 		qlge_sem_unlock(qdev, SEM_MAC_ADDR_MASK);
 		if (status)
@@ -524,8 +523,8 @@ static int qlge_set_routing_reg(struct qlge_adapter *qdev, u32 index, u32 mask,
 	{
 		value = RT_IDX_DST_DFLT_Q | /* dest */
 			RT_IDX_TYPE_NICQ | /* type */
-			(RT_IDX_IP_CSUM_ERR_SLOT <<
-			 RT_IDX_IDX_SHIFT); /* index */
+			(RT_IDX_IP_CSUM_ERR_SLOT
+			 << RT_IDX_IDX_SHIFT); /* index */
 		break;
 	}
 	case RT_IDX_TU_CSUM_ERR: /* Pass up TCP/UDP CSUM error frames. */
@@ -554,7 +553,8 @@ static int qlge_set_routing_reg(struct qlge_adapter *qdev, u32 index, u32 mask,
 	{
 		value = RT_IDX_DST_DFLT_Q | /* dest */
 			RT_IDX_TYPE_NICQ | /* type */
-			(RT_IDX_MCAST_MATCH_SLOT << RT_IDX_IDX_SHIFT);/* index */
+			(RT_IDX_MCAST_MATCH_SLOT
+			 << RT_IDX_IDX_SHIFT); /* index */
 		break;
 	}
 	case RT_IDX_RSS_MATCH: /* Pass up matched RSS frames. */
@@ -648,15 +648,15 @@ static int qlge_read_flash_word(struct qlge_adapter *qdev, int offset, __le32 *d
 	int status = 0;
 
 	/* wait for reg to come ready */
-	status = qlge_wait_reg_rdy(qdev,
-				   FLASH_ADDR, FLASH_ADDR_RDY, FLASH_ADDR_ERR);
+	status = qlge_wait_reg_rdy(qdev, FLASH_ADDR, FLASH_ADDR_RDY,
+				   FLASH_ADDR_ERR);
 	if (status)
 		goto exit;
 	/* set up for reg read */
 	qlge_write32(qdev, FLASH_ADDR, FLASH_ADDR_R | offset);
 	/* wait for reg to come ready */
-	status = qlge_wait_reg_rdy(qdev,
-				   FLASH_ADDR, FLASH_ADDR_RDY, FLASH_ADDR_ERR);
+	status = qlge_wait_reg_rdy(qdev, FLASH_ADDR, FLASH_ADDR_RDY,
+				   FLASH_ADDR_ERR);
 	if (status)
 		goto exit;
 	/* This data is stored on flash as an array of
@@ -792,8 +792,8 @@ static int qlge_write_xgmac_reg(struct qlge_adapter *qdev, u32 reg, u32 data)
 	int status;
 
 	/* wait for reg to come ready */
-	status = qlge_wait_reg_rdy(qdev,
-				   XGMAC_ADDR, XGMAC_ADDR_RDY, XGMAC_ADDR_XME);
+	status = qlge_wait_reg_rdy(qdev, XGMAC_ADDR, XGMAC_ADDR_RDY,
+				   XGMAC_ADDR_XME);
 	if (status)
 		return status;
 	/* write the data to the data reg */
@@ -811,15 +811,15 @@ int qlge_read_xgmac_reg(struct qlge_adapter *qdev, u32 reg, u32 *data)
 	int status = 0;
 
 	/* wait for reg to come ready */
-	status = qlge_wait_reg_rdy(qdev,
-				   XGMAC_ADDR, XGMAC_ADDR_RDY, XGMAC_ADDR_XME);
+	status = qlge_wait_reg_rdy(qdev, XGMAC_ADDR, XGMAC_ADDR_RDY,
+				   XGMAC_ADDR_XME);
 	if (status)
 		goto exit;
 	/* set up for reg read */
 	qlge_write32(qdev, XGMAC_ADDR, reg | XGMAC_ADDR_R);
 	/* wait for reg to come ready */
-	status = qlge_wait_reg_rdy(qdev,
-				   XGMAC_ADDR, XGMAC_ADDR_RDY, XGMAC_ADDR_XME);
+	status = qlge_wait_reg_rdy(qdev, XGMAC_ADDR, XGMAC_ADDR_RDY,
+				   XGMAC_ADDR_XME);
 	if (status)
 		goto exit;
 	/* get the data */
@@ -1067,8 +1067,8 @@ static int qlge_refill_lb(struct qlge_rx_ring *rx_ring,
 		lbq_desc->p.pg_chunk = *master_chunk;
 		lbq_desc->dma_addr = rx_ring->chunk_dma_addr;
-		*lbq_desc->buf_ptr = cpu_to_le64(lbq_desc->dma_addr +
-						 lbq_desc->p.pg_chunk.offset);
+		*lbq_desc->buf_ptr =
+			cpu_to_le64(lbq_desc->dma_addr + lbq_desc->p.pg_chunk.offset);
 
 		/* Adjust the master page chunk for next
 		 * buffer get.
@@ -1233,7 +1233,8 @@ static void qlge_unmap_send(struct qlge_adapter *qdev,
  */
 static int qlge_map_send(struct qlge_adapter *qdev,
 			 struct qlge_ob_mac_iocb_req *mac_iocb_ptr,
-			 struct sk_buff *skb, struct qlge_tx_ring_desc *tx_ring_desc)
+			 struct sk_buff *skb,
+			 struct qlge_tx_ring_desc *tx_ring_desc)
 {
 	int len = skb_headlen(skb);
 	dma_addr_t map;
@@ -1295,7 +1296,8 @@ static int qlge_map_send(struct qlge_adapter *qdev,
 		 * etc...
 		 */
 		/* Tack on the OAL in the eighth segment of IOCB. */
-		map = dma_map_single(&qdev->pdev->dev, &tx_ring_desc->oal,
+		map = dma_map_single(&qdev->pdev->dev,
+				     &tx_ring_desc->oal,
 				     sizeof(struct qlge_oal),
 				     DMA_TO_DEVICE);
 		err = dma_mapping_error(&qdev->pdev->dev, map);
@@ -1405,8 +1407,7 @@ static void qlge_update_mac_hdr_len(struct qlge_adapter *qdev,
 	if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_V) {
 		tags = (u16 *)page;
 		/* Look for stacked vlan tags in ethertype field */
-		if (tags[6] == ETH_P_8021Q &&
-		    tags[8] == ETH_P_8021Q)
+		if (tags[6] == ETH_P_8021Q && tags[8] == ETH_P_8021Q)
 			*len += 2 * VLAN_HLEN;
 		else
 			*len += VLAN_HLEN;
@@ -1442,8 +1443,7 @@ static void qlge_process_mac_rx_gro_page(struct qlge_adapter *qdev,
 	prefetch(lbq_desc->p.pg_chunk.va);
 	__skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags,
 			     lbq_desc->p.pg_chunk.page,
-			     lbq_desc->p.pg_chunk.offset,
-			     length);
+			     lbq_desc->p.pg_chunk.offset, length);
 
 	skb->len += length;
 	skb->data_len += length;
@@ -2264,8 +2264,8 @@ static int __qlge_vlan_rx_add_vid(struct qlge_adapter *qdev, u16 vid)
 	u32 enable_bit = MAC_ADDR_E;
 	int err;
 
-	err = qlge_set_mac_addr_reg(qdev, (u8 *)&enable_bit,
-				    MAC_ADDR_TYPE_VLAN, vid);
+	err = qlge_set_mac_addr_reg(qdev, (u8 *)&enable_bit, MAC_ADDR_TYPE_VLAN,
+				    vid);
 	if (err)
 		netif_err(qdev, ifup, qdev->ndev,
 			  "Failed to init vlan address.\n");
@@ -2295,8 +2295,8 @@ static int __qlge_vlan_rx_kill_vid(struct qlge_adapter *qdev, u16 vid)
 	u32 enable_bit = 0;
 	int err;
 
-	err = qlge_set_mac_addr_reg(qdev, (u8 *)&enable_bit,
-				    MAC_ADDR_TYPE_VLAN, vid);
+	err = qlge_set_mac_addr_reg(qdev, (u8 *)&enable_bit, MAC_ADDR_TYPE_VLAN,
+				    vid);
 	if (err)
 		netif_err(qdev, ifup, qdev->ndev,
 			  "Failed to clear vlan address.\n");
@@ -2400,8 +2400,8 @@ static irqreturn_t qlge_isr(int irq, void *dev_id)
 		netif_err(qdev, intr, qdev->ndev,
 			  "Got MPI processor interrupt.\n");
 		qlge_write32(qdev, INTR_MASK, (INTR_MASK_PI << 16));
-		queue_delayed_work_on(smp_processor_id(),
-				      qdev->workqueue, &qdev->mpi_work, 0);
+		queue_delayed_work_on(smp_processor_id(), qdev->workqueue,
+				      &qdev->mpi_work, 0);
 		work_done++;
 	}
@@ -2730,8 +2730,7 @@ static void qlge_free_sbq_buffers(struct qlge_adapter *qdev, struct qlge_rx_ring
 		}
 		if (sbq_desc->p.skb) {
 			dma_unmap_single(&qdev->pdev->dev, sbq_desc->dma_addr,
-					 QLGE_SMALL_BUF_MAP_SIZE,
-					 DMA_FROM_DEVICE);
+					 QLGE_SMALL_BUF_MAP_SIZE, DMA_FROM_DEVICE);
 			dev_kfree_skb(sbq_desc->p.skb);
 			sbq_desc->p.skb = NULL;
 		}
@@ -2824,9 +2823,8 @@ static void qlge_free_cq_resources(struct qlge_adapter *qdev,
 	/* Free the completion queue. */
 	if (cq->cq_base) {
-		dma_free_coherent(&qdev->pdev->dev,
-				  cq->cq_size,
-				  cq->cq_base, cq->cq_base_dma);
+		dma_free_coherent(&qdev->pdev->dev, cq->cq_size, cq->cq_base,
+				  cq->cq_base_dma);
 		cq->cq_base = NULL;
 	}
 }
@@ -3128,8 +3126,8 @@ static void qlge_enable_msix(struct qlge_adapter *qdev)
 		for (i = 0; i < qdev->intr_count; i++)
 			qdev->msi_x_entry[i].entry = i;
 
-		err = pci_enable_msix_range(qdev->pdev, qdev->msi_x_entry,
-					    1, qdev->intr_count);
+		err = pci_enable_msix_range(qdev->pdev, qdev->msi_x_entry, 1,
+					    qdev->intr_count);
 		if (err < 0) {
 			kfree(qdev->msi_x_entry);
 			qdev->msi_x_entry = NULL;
@@ -3509,8 +3507,8 @@ static int qlge_route_initialize(struct qlge_adapter *qdev)
 		}
 	}
 
-	status = qlge_set_routing_reg(qdev, RT_IDX_CAM_HIT_SLOT,
-				      RT_IDX_CAM_HIT, 1);
+	status = qlge_set_routing_reg(qdev, RT_IDX_CAM_HIT_SLOT, RT_IDX_CAM_HIT,
+				      1);
 	if (status)
 		netif_err(qdev, ifup, qdev->ndev,
 			  "Failed to init routing register for CAM packets.\n");
@@ -3713,8 +3711,8 @@ static void qlge_display_dev_info(struct net_device *ndev)
 		   qdev->chip_rev_id >> 4 & 0x0000000f,
 		   qdev->chip_rev_id >> 8 & 0x0000000f,
 		   qdev->chip_rev_id >> 12 & 0x0000000f);
-	netif_info(qdev, probe, qdev->ndev,
-		   "MAC address %pM\n", ndev->dev_addr);
+	netif_info(qdev, probe, qdev->ndev, "MAC address %pM\n",
+		   ndev->dev_addr);
 }
 
 static int qlge_wol(struct qlge_adapter *qdev)
@@ -4119,8 +4117,8 @@ static void qlge_set_multicast_list(struct net_device *ndev)
 	 */
 	if (ndev->flags & IFF_PROMISC) {
 		if (!test_bit(QL_PROMISCUOUS, &qdev->flags)) {
-			if (qlge_set_routing_reg
-			    (qdev, RT_IDX_PROMISCUOUS_SLOT, RT_IDX_VALID, 1)) {
+			if (qlge_set_routing_reg(qdev, RT_IDX_PROMISCUOUS_SLOT,
+						 RT_IDX_VALID, 1)) {
 				netif_err(qdev, hw, qdev->ndev,
 					  "Failed to set promiscuous mode.\n");
 			} else {
@@ -4129,8 +4127,8 @@ static void qlge_set_multicast_list(struct net_device *ndev)
 		}
 	} else {
 		if (test_bit(QL_PROMISCUOUS, &qdev->flags)) {
-			if (qlge_set_routing_reg
-			    (qdev, RT_IDX_PROMISCUOUS_SLOT, RT_IDX_VALID, 0)) {
+			if (qlge_set_routing_reg(qdev, RT_IDX_PROMISCUOUS_SLOT,
+						 RT_IDX_VALID, 0)) {
 				netif_err(qdev, hw, qdev->ndev,
 					  "Failed to clear promiscuous mode.\n");
 			} else {
@@ -4146,8 +4144,8 @@ static void qlge_set_multicast_list(struct net_device *ndev)
 	if ((ndev->flags & IFF_ALLMULTI) ||
 	    (netdev_mc_count(ndev) > MAX_MULTICAST_ENTRIES)) {
 		if (!test_bit(QL_ALLMULTI, &qdev->flags)) {
-			if (qlge_set_routing_reg
-			    (qdev, RT_IDX_ALLMULTI_SLOT, RT_IDX_MCAST, 1)) {
+			if (qlge_set_routing_reg(qdev, RT_IDX_ALLMULTI_SLOT,
+						 RT_IDX_MCAST, 1)) {
 				netif_err(qdev, hw, qdev->ndev,
 					  "Failed to set all-multi mode.\n");
 			} else {
@@ -4156,8 +4154,8 @@ static void qlge_set_multicast_list(struct net_device *ndev)
 		}
 	} else {
 		if (test_bit(QL_ALLMULTI, &qdev->flags)) {
-			if (qlge_set_routing_reg
-			    (qdev, RT_IDX_ALLMULTI_SLOT, RT_IDX_MCAST, 0)) {
+			if (qlge_set_routing_reg(qdev, RT_IDX_ALLMULTI_SLOT,
+						 RT_IDX_MCAST, 0)) {
 				netif_err(qdev, hw, qdev->ndev,
 					  "Failed to clear all-multi mode.\n");
 			} else {
@@ -4182,8 +4180,8 @@ static void qlge_set_multicast_list(struct net_device *ndev)
 			i++;
 		}
 		qlge_sem_unlock(qdev, SEM_MAC_ADDR_MASK);
-		if (qlge_set_routing_reg
-		    (qdev, RT_IDX_MCAST_MATCH_SLOT, RT_IDX_MCAST_MATCH, 1)) {
+		if (qlge_set_routing_reg(qdev, RT_IDX_MCAST_MATCH_SLOT,
+					 RT_IDX_MCAST_MATCH, 1)) {
 			netif_err(qdev, hw, qdev->ndev,
 				  "Failed to set multicast match mode.\n");
 		} else {
@@ -4458,8 +4456,8 @@ static int qlge_init_device(struct pci_dev *pdev, struct qlge_adapter *qdev,
 	/*
 	 * Set up the operating parameters.
 	 */
-	qdev->workqueue = alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM,
-						  ndev->name);
+	qdev->workqueue =
+		alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, ndev->name);
 	if (!qdev->workqueue) {
 		err = -ENOMEM;
 		goto err_free_mpi_coredump;
@@ -4702,8 +4700,8 @@ static pci_ers_result_t qlge_io_error_detected(struct pci_dev *pdev,
 		pci_disable_device(pdev);
 		return PCI_ERS_RESULT_NEED_RESET;
 	case pci_channel_io_perm_failure:
-		dev_err(&pdev->dev,
-			"%s: pci_channel_io_perm_failure.\n", __func__);
+		dev_err(&pdev->dev, "%s: pci_channel_io_perm_failure.\n",
+			__func__);
 		del_timer_sync(&qdev->timer);
 		qlge_eeh_close(ndev);
 		set_bit(QL_EEH_FATAL, &qdev->flags);

From patchwork Mon Jun 21 13:49:01 2021
From: Coiby Xu
To: linux-staging@lists.linux.dev
Cc: netdev@vger.kernel.org, Benjamin Poirier, Shung-Hsi Yu, Manish Chopra,
    GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER),
    Greg Kroah-Hartman, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 18/19] staging: qlge: fix two indentation issues
Date: Mon, 21 Jun 2021 21:49:01 +0800
Message-Id: <20210621134902.83587-19-coiby.xu@gmail.com>
In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com>
References: <20210621134902.83587-1-coiby.xu@gmail.com>

Fix two indentation issues.

Signed-off-by: Coiby Xu
---
 drivers/staging/qlge/TODO        |  1 -
 drivers/staging/qlge/qlge_dbg.c  | 30 +++++++++++++++---------------
 drivers/staging/qlge/qlge_main.c |  4 ++--
 3 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO
index b8def0c70614..7e466a0f7771 100644
--- a/drivers/staging/qlge/TODO
+++ b/drivers/staging/qlge/TODO
@@ -1,5 +1,4 @@
 * the driver has a habit of using runtime checks where compile time checks are
   possible (ex. ql_free_rx_buffers(), ql_alloc_rx_buffers())
 * remove duplicate and useless comments
-* fix weird indentation (all over, ex. the for loops in qlge_get_stats())
 * fix checkpatch issues
diff --git a/drivers/staging/qlge/qlge_dbg.c b/drivers/staging/qlge/qlge_dbg.c
index d093e6c9f19c..d4d486f99549 100644
--- a/drivers/staging/qlge/qlge_dbg.c
+++ b/drivers/staging/qlge/qlge_dbg.c
@@ -353,21 +353,21 @@ static int qlge_get_xgmac_regs(struct qlge_adapter *qdev, u32 *buf,
 		 */
 		if ((i == 0x00000114) ||
 		    (i == 0x00000118) ||
-			(i == 0x0000013c) ||
-			(i == 0x00000140) ||
-			(i > 0x00000150 && i < 0x000001fc) ||
-			(i > 0x00000278 && i < 0x000002a0) ||
-			(i > 0x000002c0 && i < 0x000002cf) ||
-			(i > 0x000002dc && i < 0x000002f0) ||
-			(i > 0x000003c8 && i < 0x00000400) ||
-			(i > 0x00000400 && i < 0x00000410) ||
-			(i > 0x00000410 && i < 0x00000420) ||
-			(i > 0x00000420 && i < 0x00000430) ||
-			(i > 0x00000430 && i < 0x00000440) ||
-			(i > 0x00000440 && i < 0x00000450) ||
-			(i > 0x00000450 && i < 0x00000500) ||
-			(i > 0x0000054c && i < 0x00000568) ||
-			(i > 0x000005c8 && i < 0x00000600)) {
+		    (i == 0x0000013c) ||
+		    (i == 0x00000140) ||
+		    (i > 0x00000150 && i < 0x000001fc) ||
+		    (i > 0x00000278 && i < 0x000002a0) ||
+		    (i > 0x000002c0 && i < 0x000002cf) ||
+		    (i > 0x000002dc && i < 0x000002f0) ||
+		    (i > 0x000003c8 && i < 0x00000400) ||
+		    (i > 0x00000400 && i < 0x00000410) ||
+		    (i > 0x00000410 && i < 0x00000420) ||
+		    (i > 0x00000420 && i < 0x00000430) ||
+		    (i > 0x00000430 && i < 0x00000440) ||
+		    (i > 0x00000440 && i < 0x00000450) ||
+		    (i > 0x00000450 && i < 0x00000500) ||
+		    (i > 0x0000054c && i < 0x00000568) ||
+		    (i > 0x000005c8 && i < 0x00000600)) {
 			if (other_function)
 				status = qlge_read_other_func_xgmac_reg(qdev, i, buf);
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 21fb942c2595..7cec2d6c3fea 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -699,8 +699,8 @@ static int qlge_get_8000_flash_params(struct qlge_adapter *qdev)
 	status = qlge_validate_flash(qdev,
 				     sizeof(struct flash_params_8000) /
-				     sizeof(u16),
-				     "8000");
+					     sizeof(u16),
+					     "8000");
 	if (status) {
 		netif_err(qdev, ifup, qdev->ndev, "Invalid flash.\n");
 		status = -EINVAL;

From patchwork Mon Jun 21 13:49:02 2021
b=SJCGESTjQtyYduI8KWTbHwthcEwAcmaxdU6hvYYWVh+GfYWmxJjiB0YEJw5i7+Hs+/ 7UNlbQQnS3Ixf218WX02g797CCw5A7nupA3Eh3COwpxuMd8H5rWsQhO+4S2EnVd6XjA2 VIhOYoO/1rreePN2r3ARbaic4QcE20/gufqYpvJ28Xg+MmCAnxBMlHAky0lJM9vFlQP4 IHAhRyMc8yPXLJjafnW7UXiGp9wAhBC0pDrE47/KRhyCnBN5emslw6hfGeuyF+DCYdc0 t371rI7FASZx/JgDK2IJSW4oms1cjy6eUsX8kiH/9pBVU7QY0ORfMiGgW1O9BkcaY5Ej MtfA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cS/F8T1IFYAn7INxjRc02bXMnhEJgrDiWz1RpqB30qg=; b=mXsuHYcREAfuATfmHHDivTsvGNHGSlXyQDVvFmnQvBxUHElaulOEpqi2jxb/QDVyHw f0xwzhk2DQfpxk/rcfIMMNcWABpUkVrJ9g4eOw0vgMSqoSs0JJxBf9/CYUH/P0Nt7qQ7 HMrqUH8fFiVFK1cPc8EjXD6AR/4ji+B+1a9MfjcqxZuZV06P8vSBqE0HV3VYBwL4Osgb wG4ZH2QqoAH2OkQVBwz3CQ0RbdWvmEw3yGUBIoxh9HCHHeFfpWN/tjDqfwyFiezZAez1 /u4cB+DPVIGdQjiw1GnjY5vR5aKCmxx/nTWBMh0qGobhT7LSvV6W7bQEOws2mnm0ku2G NGxA== X-Gm-Message-State: AOAM5302Bv5hEGGwCipoOAz/8AoNihzXkBWeG+mMUBCCZxEiRk/mmiya pmODv8poNxNVp1X35o0zhLk= X-Google-Smtp-Source: ABdhPJxT9DTOz8C0+5X6I7CDf35WX5i4rQLGUTavO5WmXFb6c5vVq6gzP11MULWNG5JNW0L/EF4OHw== X-Received: by 2002:a17:902:9f83:b029:f6:5c3c:db03 with SMTP id g3-20020a1709029f83b02900f65c3cdb03mr18185118plq.2.1624283537354; Mon, 21 Jun 2021 06:52:17 -0700 (PDT) Received: from localhost ([209.132.188.80]) by smtp.gmail.com with ESMTPSA id h21sm14832500pfv.190.2021.06.21.06.52.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 21 Jun 2021 06:52:16 -0700 (PDT) From: Coiby Xu To: linux-staging@lists.linux.dev Cc: netdev@vger.kernel.org, Benjamin Poirier , Shung-Hsi Yu , Manish Chopra , GR-Linux-NIC-Dev@marvell.com (supporter:QLOGIC QLGE 10Gb ETHERNET DRIVER), Greg Kroah-Hartman , linux-kernel@vger.kernel.org (open list) Subject: [RFC 19/19] staging: qlge: remove TODO item of unnecessary runtime checks Date: Mon, 21 Jun 2021 21:49:02 +0800 Message-Id: <20210621134902.83587-20-coiby.xu@gmail.com> 
X-Mailer: git-send-email 2.32.0 In-Reply-To: <20210621134902.83587-1-coiby.xu@gmail.com> References: <20210621134902.83587-1-coiby.xu@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org The following commits [1], - e4c911a73c89 ("staging: qlge: Remove rx_ring.type") - a68a5b2fd3a2 ("staging: qlge: Remove bq_desc.maplen") - 16714d98bf63 ("staging: qlge: Remove rx_ring.sbq_buf_size") - ec705b983b46 ("staging: qlge: Remove qlge_bq.len & size") and recent "commit a0e57b58d35d3d6808187bb10ee9e5030ff87618 ("staging: qlge: the number of pages to contain a buffer queue is constant") has fixed issue. Thus remove the TODO item. [1] https://lore.kernel.org/netdev/YJeUZo+zoNZmFuKs@f3/ Signed-off-by: Coiby Xu --- drivers/staging/qlge/TODO | 2 -- 1 file changed, 2 deletions(-) diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO index 7e466a0f7771..0e349ffc630e 100644 --- a/drivers/staging/qlge/TODO +++ b/drivers/staging/qlge/TODO @@ -1,4 +1,2 @@ -* the driver has a habit of using runtime checks where compile time checks are - possible (ex. ql_free_rx_buffers(), ql_alloc_rx_buffers()) * remove duplicate and useless comments * fix checkpatch issues