From patchwork Tue Mar 24 18:24:38 2020
X-Patchwork-Submitter: Julian Wiedmann
X-Patchwork-Id: 221949
From: Julian Wiedmann
To: David Miller
Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Julian Wiedmann
Subject: [PATCH net-next 01/11] s390/qeth: simplify RX buffer tracking
Date: Tue, 24 Mar 2020 19:24:38 +0100
Message-Id: <20200324182448.95362-2-jwi@linux.ibm.com>
In-Reply-To: <20200324182448.95362-1-jwi@linux.ibm.com>
References: <20200324182448.95362-1-jwi@linux.ibm.com>

Since RX buffers may contain multiple packets, qeth's NAPI poll code can
exhaust its budget in the middle of an RX buffer. We therefore keep track
of our current position within the active RX buffer, so that processing
can resume from there in the next NAPI poll period.

Clean up that code by tracking the index of the active buffer element
instead of a pointer to it. Also simplify the code that advances to the
next RX buffer once the current buffer has been fully processed.
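[Editor's note] As orientation before the diff, here is a compact
userspace model of the resume pattern this patch implements: the RX
position is kept as a small (element index, offset) pair that survives
across poll calls. All names, sizes and the printf "processing" are
illustrative stand-ins, not the driver code itself.

#include <stdint.h>
#include <stdio.h>

#define ELEMENTS_PER_BUFFER 4
#define PACKETS_PER_ELEMENT 3

struct rx_state {
	uint8_t buf_element;	/* index of active element, was a pointer */
	int e_offset;		/* position of next packet within it */
};

/* Process up to 'budget' packets; return how many were done. */
static int extract_packets(struct rx_state *rx, int budget)
{
	int done = 0;

	while (done < budget) {
		/* "process" one packet at (buf_element, e_offset) */
		printf("element %u, offset %d\n", rx->buf_element,
		       rx->e_offset);
		done++;

		if (++rx->e_offset == PACKETS_PER_ELEMENT) {
			rx->e_offset = 0;
			if (++rx->buf_element == ELEMENTS_PER_BUFFER)
				break;	/* buffer fully processed */
		}
	}
	return done;	/* state persists, the next call resumes there */
}

int main(void)
{
	struct rx_state rx = { 0, 0 };

	/* Two poll periods with a small budget each: the second call
	 * resumes exactly where the first one ran out of budget. */
	extract_packets(&rx, 5);
	extract_packets(&rx, 5);
	return 0;
}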
Signed-off-by: Julian Wiedmann
---
 arch/s390/include/asm/qdio.h      |  2 ++
 drivers/s390/net/qeth_core.h      |  2 +-
 drivers/s390/net/qeth_core_main.c | 30 ++++++++++++------------------
 3 files changed, 15 insertions(+), 19 deletions(-)

diff --git a/arch/s390/include/asm/qdio.h b/arch/s390/include/asm/qdio.h
index 1e3517b0518b..6ea3a1d5fe7f 100644
--- a/arch/s390/include/asm/qdio.h
+++ b/arch/s390/include/asm/qdio.h
@@ -222,6 +222,8 @@ struct qdio_buffer {
 	struct qdio_buffer_element element[QDIO_MAX_ELEMENTS_PER_BUFFER];
 } __attribute__ ((packed, aligned(256)));
 
+#define QDIO_ELEMENT_NO(buf, element) (element - &buf->element[0])
+
 /**
  * struct sl_element - storage list entry
  * @sbal: absolute SBAL address
diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
index 6eb431c194bd..9840d4fab010 100644
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -752,7 +752,7 @@ enum qeth_addr_disposition {
 struct qeth_rx {
 	int b_count;
 	int b_index;
-	struct qdio_buffer_element *b_element;
+	u8 buf_element;
 	int e_offset;
 	int qdio_err;
 };
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index bd3adbb6ad50..b25d44c83997 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -5332,14 +5332,13 @@ static inline int qeth_is_last_sbale(struct qdio_buffer_element *sbale)
 }
 
 static int qeth_extract_skb(struct qeth_card *card,
-			    struct qeth_qdio_buffer *qethbuffer,
-			    struct qdio_buffer_element **__element,
+			    struct qeth_qdio_buffer *qethbuffer, u8 *element_no,
 			    int *__offset)
 {
-	struct qdio_buffer_element *element = *__element;
 	struct qeth_priv *priv = netdev_priv(card->dev);
 	struct qdio_buffer *buffer = qethbuffer->buffer;
 	struct napi_struct *napi = &card->napi;
+	struct qdio_buffer_element *element;
 	unsigned int linear_len = 0;
 	bool uses_frags = false;
 	int offset = *__offset;
@@ -5349,6 +5348,8 @@ static int qeth_extract_skb(struct qeth_card *card,
 	struct sk_buff *skb;
 	int skb_len = 0;
 
+	element = &buffer->element[*element_no];
+
 next_packet:
 	/* qeth_hdr must not cross element boundaries */
 	while (element->length < offset + sizeof(struct qeth_hdr)) {
@@ -5504,7 +5505,7 @@ static int qeth_extract_skb(struct qeth_card *card,
 	if (!skb)
 		goto next_packet;
 
-	*__element = element;
+	*element_no = QDIO_ELEMENT_NO(buffer, element);
 	*__offset = offset;
 
 	qeth_receive_skb(card, skb, hdr, uses_frags);
@@ -5519,7 +5520,7 @@ static int qeth_extract_skbs(struct qeth_card *card, int budget,
 	*done = false;
 
 	while (budget) {
-		if (qeth_extract_skb(card, buf, &card->rx.b_element,
+		if (qeth_extract_skb(card, buf, &card->rx.buf_element,
 				     &card->rx.e_offset)) {
 			*done = true;
 			break;
@@ -5550,10 +5551,6 @@ int qeth_poll(struct napi_struct *napi, int budget)
 				card->rx.b_count = 0;
 				break;
 			}
-			card->rx.b_element =
-				&card->qdio.in_q->bufs[card->rx.b_index]
-				.buffer->element[0];
-			card->rx.e_offset = 0;
 		}
 
 		while (card->rx.b_count) {
@@ -5572,15 +5569,12 @@ int qeth_poll(struct napi_struct *napi, int budget)
 						    buffer->pool_entry);
 				qeth_queue_input_buffer(card, card->rx.b_index);
 				card->rx.b_count--;
-				if (card->rx.b_count) {
-					card->rx.b_index =
-						QDIO_BUFNR(card->rx.b_index + 1);
-					card->rx.b_element =
-						&card->qdio.in_q
-						->bufs[card->rx.b_index]
-						.buffer->element[0];
-					card->rx.e_offset = 0;
-				}
+
+				/* Step forward to next buffer: */
+				card->rx.b_index =
+					QDIO_BUFNR(card->rx.b_index + 1);
+				card->rx.buf_element = 0;
+				card->rx.e_offset = 0;
 			}
 
 			if (work_done >= budget)
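[Editor's note] The new QDIO_ELEMENT_NO() macro is plain pointer
arithmetic over the element array. A minimal standalone demo, with the
struct layout reduced to a stub (the macro itself is copied from the
patch above):

#include <assert.h>

struct qdio_buffer_element { int length; };

struct qdio_buffer {
	struct qdio_buffer_element element[128];
};

#define QDIO_ELEMENT_NO(buf, element) (element - &buf->element[0])

int main(void)
{
	struct qdio_buffer buf;
	struct qdio_buffer_element *cur = &buf.element[42];

	/* Pointer subtraction over the array recovers the index: */
	assert(QDIO_ELEMENT_NO((&buf), cur) == 42);
	return 0;
}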
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julian Wiedmann X-Patchwork-Id: 221948 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2E2CFC41621 for ; Tue, 24 Mar 2020 18:25:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1499D20714 for ; Tue, 24 Mar 2020 18:25:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727956AbgCXSZE (ORCPT ); Tue, 24 Mar 2020 14:25:04 -0400 Received: from mx0a-001b2d01.pphosted.com ([148.163.156.1]:40306 "EHLO mx0a-001b2d01.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727830AbgCXSZD (ORCPT ); Tue, 24 Mar 2020 14:25:03 -0400 Received: from pps.filterd (m0098404.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 02OI5OT2012051 for ; Tue, 24 Mar 2020 14:25:02 -0400 Received: from e06smtp02.uk.ibm.com (e06smtp02.uk.ibm.com [195.75.94.98]) by mx0a-001b2d01.pphosted.com with ESMTP id 2ywf0pavkf-1 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=NOT) for ; Tue, 24 Mar 2020 14:25:02 -0400 Received: from localhost by e06smtp02.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Tue, 24 Mar 2020 18:24:58 -0000 Received: from b06cxnps4074.portsmouth.uk.ibm.com (9.149.109.196) by e06smtp02.uk.ibm.com (192.168.101.132) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
From: Julian Wiedmann
To: David Miller
Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Julian Wiedmann
Subject: [PATCH net-next 03/11] s390/qeth: remove redundant if-clause in RX poll code
Date: Tue, 24 Mar 2020 19:24:40 +0100
Message-Id: <20200324182448.95362-4-jwi@linux.ibm.com>
In-Reply-To: <20200324182448.95362-1-jwi@linux.ibm.com>
References: <20200324182448.95362-1-jwi@linux.ibm.com>

Whenever all completed RX buffers have been processed (i.e. rx->b_count
== 0), we call down to the HW layer to scan for additional buffers. If
no further buffers are available, the code breaks out of the while-loop.
So we never reach the 'process an RX buffer' step with rx->b_count == 0;
eliminate that check and one level of indentation.
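[Editor's note] To see why the guard was redundant, here is a tiny
standalone model of the loop. The fake fetch routine and its counters
are invented for the demo; the point is the invariant: the fetch step
either leaves b_count > 0 or breaks out, so the processing step can
never observe b_count == 0.

#include <stdbool.h>
#include <stdio.h>

static int b_count;

static bool fetch_completed_buffers(void)
{
	static int total = 3;	/* pretend HW hands us 3 buffers, then none */

	if (total == 0)
		return false;
	total--;
	b_count = 1;
	return true;
}

static void rx_poll(void)
{
	while (true) {
		/* Fetch: only entered with b_count == 0, and we break
		 * out as soon as no new buffers arrive ... */
		if (!b_count && !fetch_completed_buffers())
			break;

		/* ... so this step always sees b_count > 0, and the old
		 * 'if (b_count)' guard around it could be dropped. */
		printf("processing buffer, b_count=%d\n", b_count);
		b_count--;
	}
}

int main(void)
{
	rx_poll();
	return 0;
}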
Signed-off-by: Julian Wiedmann
---
 drivers/s390/net/qeth_core_main.c | 56 ++++++++++++++-----------------
 1 file changed, 26 insertions(+), 30 deletions(-)

diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index f818d9db32ac..dd9bee620f46 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -5536,6 +5536,10 @@ static unsigned int qeth_rx_poll(struct qeth_card *card, int budget)
 	unsigned int work_done = 0;
 
 	while (budget > 0) {
+		struct qeth_qdio_buffer *buffer;
+		unsigned int skbs_done = 0;
+		bool done = false;
+
 		/* Fetch completed RX buffers: */
 		if (!card->rx.b_count) {
 			card->rx.qdio_err = 0;
@@ -5549,36 +5553,28 @@ static unsigned int qeth_rx_poll(struct qeth_card *card, int budget)
 		}
 
 		/* Process one completed RX buffer: */
-		if (card->rx.b_count) {
-			struct qeth_qdio_buffer *buffer;
-			unsigned int skbs_done = 0;
-			bool done = false;
-
-			buffer = &card->qdio.in_q->bufs[card->rx.b_index];
-			if (!(card->rx.qdio_err &&
-			      qeth_check_qdio_errors(card, buffer->buffer,
-						     card->rx.qdio_err, "qinerr")))
-				skbs_done = qeth_extract_skbs(card, budget,
-							      buffer, &done);
-			else
-				done = true;
-
-			work_done += skbs_done;
-			budget -= skbs_done;
-
-			if (done) {
-				QETH_CARD_STAT_INC(card, rx_bufs);
-				qeth_put_buffer_pool_entry(card,
-							   buffer->pool_entry);
-				qeth_queue_input_buffer(card, card->rx.b_index);
-				card->rx.b_count--;
-
-				/* Step forward to next buffer: */
-				card->rx.b_index =
-					QDIO_BUFNR(card->rx.b_index + 1);
-				card->rx.buf_element = 0;
-				card->rx.e_offset = 0;
-			}
+		buffer = &card->qdio.in_q->bufs[card->rx.b_index];
+		if (!(card->rx.qdio_err &&
+		      qeth_check_qdio_errors(card, buffer->buffer,
+					     card->rx.qdio_err, "qinerr")))
+			skbs_done = qeth_extract_skbs(card, budget, buffer,
+						      &done);
+		else
+			done = true;
+
+		work_done += skbs_done;
+		budget -= skbs_done;
+
+		if (done) {
+			QETH_CARD_STAT_INC(card, rx_bufs);
+			qeth_put_buffer_pool_entry(card, buffer->pool_entry);
+			qeth_queue_input_buffer(card, card->rx.b_index);
+			card->rx.b_count--;
+
+			/* Step forward to next buffer: */
+			card->rx.b_index = QDIO_BUFNR(card->rx.b_index + 1);
+			card->rx.buf_element = 0;
+			card->rx.e_offset = 0;
 		}
 	}
From patchwork Tue Mar 24 18:24:41 2020
X-Patchwork-Submitter: Julian Wiedmann
X-Patchwork-Id: 221945
From: Julian Wiedmann
To: David Miller
Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Julian Wiedmann
Subject: [PATCH net-next 04/11] s390/qdio: extend polling support to multiple queues
Date: Tue, 24 Mar 2020 19:24:41 +0100
Message-Id: <20200324182448.95362-5-jwi@linux.ibm.com>
In-Reply-To: <20200324182448.95362-1-jwi@linux.ibm.com>
References: <20200324182448.95362-1-jwi@linux.ibm.com>

When the support for polling drivers was initially added, it only
considered Input Queue 0. But as QDIO interrupts are actually for the
full device and not a single queue, this doesn't really fit for
configurations where multiple Input Queues are used.

Rework the qdio code so that interrupts for a polling driver are not
split up into actions for each queue. Instead deliver the interrupt as
a single event, and let the driver decide which queue needs what action.

When re-enabling the QDIO interrupt via qdio_start_irq(), this means
that the qdio code needs to
(1) put _all_ eligible queues back into a state where they raise IRQs,
(2) afterwards check _all_ eligible queues for new work, to bridge the
    race window.

On the qeth side of things (as the only qdio polling driver), we can
now add CQ polling support to the main NAPI poll routine. It doesn't
consume NAPI budget, and to avoid hogging the CPU we yield control
after completing one full queue's worth of buffers. The subsequent
qdio_start_irq() will check for any additional work, and have us
re-schedule the NAPI instance accordingly.
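[Editor's note] Before the diff, a rough userspace sketch of the
reworked dispatch described above: one poll callback and one
IRQ-disabled bit per device instead of per queue, with C11 atomics
standing in for the kernel's test_and_set_bit()/clear_bit(). The rescan
logic mirrors the shape of qdio_start_irq(), but all names and the fake
queue state are illustrative only.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_INPUT_QUEUES 2

struct qdio_dev {
	atomic_flag irq_disabled;		/* models QDIO_IRQ_DISABLED */
	void (*irq_poll)(struct qdio_dev *dev);	/* models irq_ptr->irq_poll */
};

static bool queue_has_work(int nr)
{
	(void)nr;
	return false;	/* pretend all queues are idle */
}

/* Re-enable "interrupts"; returns true if new work raced in on *any*
 * input queue and the caller must keep polling (the rescan path). */
static bool start_irq(struct qdio_dev *dev)
{
	atomic_flag_clear(&dev->irq_disabled);

	/* Bridge the race window: check _all_ eligible queues. */
	for (int i = 0; i < NUM_INPUT_QUEUES; i++) {
		if (queue_has_work(i))
			return !atomic_flag_test_and_set(&dev->irq_disabled);
	}
	return false;
}

static void napi_poll(struct qdio_dev *dev)
{
	printf("polling all queues of the device\n");
	if (start_irq(dev))
		printf("rescan: more work, poll again\n");
}

/* One interrupt for the whole device: dispatch to the single poll
 * callback, or discard it if polling is already active. */
static void device_interrupt(struct qdio_dev *dev)
{
	if (!atomic_flag_test_and_set(&dev->irq_disabled))
		dev->irq_poll(dev);
	else
		printf("int_discarded\n");
}

int main(void)
{
	struct qdio_dev dev = { ATOMIC_FLAG_INIT, napi_poll };

	device_interrupt(&dev);
	return 0;
}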
Signed-off-by: Julian Wiedmann
Acked-by: Heiko Carstens
---
 arch/s390/include/asm/qdio.h      |  9 ++--
 drivers/s390/cio/qdio.h           | 11 +++--
 drivers/s390/cio/qdio_debug.c     |  4 +-
 drivers/s390/cio/qdio_main.c      | 50 +++++++++++------------
 drivers/s390/cio/qdio_setup.c     | 16 ++++----
 drivers/s390/cio/qdio_thinint.c   | 38 ++++++++---------
 drivers/s390/net/qeth_core_main.c | 68 +++++++++++++------------------
 7 files changed, 87 insertions(+), 109 deletions(-)

diff --git a/arch/s390/include/asm/qdio.h b/arch/s390/include/asm/qdio.h
index 6ea3a1d5fe7f..8301a19a17db 100644
--- a/arch/s390/include/asm/qdio.h
+++ b/arch/s390/include/asm/qdio.h
@@ -340,7 +340,7 @@ typedef void qdio_handler_t(struct ccw_device *, unsigned int, int,
  * @no_output_qs: number of output queues
  * @input_handler: handler to be called for input queues
  * @output_handler: handler to be called for output queues
- * @queue_start_poll_array: polling handlers (one per input queue or NULL)
+ * @irq_poll: Data IRQ polling handler (NULL when not supported)
  * @scan_threshold: # of in-use buffers that triggers scan on output queue
  * @int_parm: interruption parameter
  * @input_sbal_addr_array: address of no_input_qs * 128 pointers
@@ -361,8 +361,7 @@ struct qdio_initialize {
 	unsigned int no_output_qs;
 	qdio_handler_t *input_handler;
 	qdio_handler_t *output_handler;
-	void (**queue_start_poll_array) (struct ccw_device *, int,
-					 unsigned long);
+	void (*irq_poll)(struct ccw_device *cdev, unsigned long data);
 	unsigned int scan_threshold;
 	unsigned long int_parm;
 	struct qdio_buffer **input_sbal_addr_array;
@@ -417,8 +416,8 @@ extern int qdio_activate(struct ccw_device *);
 extern void qdio_release_aob(struct qaob *);
 extern int do_QDIO(struct ccw_device *, unsigned int, int, unsigned int,
 		   unsigned int);
-extern int qdio_start_irq(struct ccw_device *, int);
-extern int qdio_stop_irq(struct ccw_device *, int);
+extern int qdio_start_irq(struct ccw_device *cdev);
+extern int qdio_stop_irq(struct ccw_device *cdev);
 extern int qdio_get_next_buffers(struct ccw_device *, int, int *, int *);
 extern int qdio_inspect_queue(struct ccw_device *cdev, unsigned int nr,
 			      bool is_input, unsigned int *bufnr,
diff --git a/drivers/s390/cio/qdio.h b/drivers/s390/cio/qdio.h
index ff74eb5fce50..f72f961cc78f 100644
--- a/drivers/s390/cio/qdio.h
+++ b/drivers/s390/cio/qdio.h
@@ -177,8 +177,8 @@ struct qdio_queue_perf_stat {
 	unsigned int nr_sbal_total;
 };
 
-enum qdio_queue_irq_states {
-	QDIO_QUEUE_IRQS_DISABLED,
+enum qdio_irq_poll_states {
+	QDIO_IRQ_DISABLED,
 };
 
 struct qdio_input_q {
@@ -188,10 +188,6 @@ struct qdio_input_q {
 	int ack_count;
 	/* last time of noticing incoming data */
 	u64 timestamp;
-	/* upper-layer polling flag */
-	unsigned long queue_irq_state;
-	/* callback to start upper-layer polling */
-	void (*queue_start_poll) (struct ccw_device *, int, unsigned long);
 };
 
 struct qdio_output_q {
@@ -299,6 +295,9 @@ struct qdio_irq {
 	struct qdio_q *input_qs[QDIO_MAX_QUEUES_PER_IRQ];
 	struct qdio_q *output_qs[QDIO_MAX_QUEUES_PER_IRQ];
 
+	void (*irq_poll)(struct ccw_device *cdev, unsigned long data);
+	unsigned long poll_state;
+
 	debug_info_t *debug_area;
 	struct mutex setup_mutex;
 	struct qdio_dev_perf_stat perf_stat;
diff --git a/drivers/s390/cio/qdio_debug.c b/drivers/s390/cio/qdio_debug.c
index 9c0370b27426..00244607c8c0 100644
--- a/drivers/s390/cio/qdio_debug.c
+++ b/drivers/s390/cio/qdio_debug.c
@@ -128,8 +128,8 @@ static int qstat_show(struct seq_file *m, void *v)
 			   q->u.in.ack_start, q->u.in.ack_count);
 		seq_printf(m, "DSCI: %x   IRQs disabled: %u\n",
 			   *(u8 *)q->irq_ptr->dsci,
-			   test_bit(QDIO_QUEUE_IRQS_DISABLED,
-				    &q->u.in.queue_irq_state));
+			   test_bit(QDIO_IRQ_DISABLED,
+				    &q->irq_ptr->poll_state));
 	}
 	seq_printf(m, "SBAL states:\n");
 	seq_printf(m, "|0      |8      |16     |24     |32     |40     |48     |56  63|\n");
diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
index 3475317c42e5..02ced5949287 100644
--- a/drivers/s390/cio/qdio_main.c
+++ b/drivers/s390/cio/qdio_main.c
@@ -950,19 +950,14 @@ static void qdio_int_handler_pci(struct qdio_irq *irq_ptr)
 	if (unlikely(irq_ptr->state != QDIO_IRQ_STATE_ACTIVE))
 		return;
 
-	for_each_input_queue(irq_ptr, q, i) {
-		if (q->u.in.queue_start_poll) {
-			/* skip if polling is enabled or already in work */
-			if (test_and_set_bit(QDIO_QUEUE_IRQS_DISABLED,
-					     &q->u.in.queue_irq_state)) {
-				QDIO_PERF_STAT_INC(irq_ptr, int_discarded);
-				continue;
-			}
-			q->u.in.queue_start_poll(q->irq_ptr->cdev, q->nr,
-						 q->irq_ptr->int_parm);
-		} else {
+	if (irq_ptr->irq_poll) {
+		if (!test_and_set_bit(QDIO_IRQ_DISABLED, &irq_ptr->poll_state))
+			irq_ptr->irq_poll(irq_ptr->cdev, irq_ptr->int_parm);
+		else
+			QDIO_PERF_STAT_INC(irq_ptr, int_discarded);
+	} else {
+		for_each_input_queue(irq_ptr, q, i)
 			tasklet_schedule(&q->tasklet);
-		}
 	}
 
 	if (!pci_out_supported(irq_ptr) || !irq_ptr->scan_threshold)
@@ -1610,24 +1605,26 @@ EXPORT_SYMBOL_GPL(do_QDIO);
 /**
  * qdio_start_irq - process input buffers
  * @cdev: associated ccw_device for the qdio subchannel
- * @nr: input queue number
  *
  * Return codes
  *   0 - success
  *   1 - irqs not started since new data is available
  */
-int qdio_start_irq(struct ccw_device *cdev, int nr)
+int qdio_start_irq(struct ccw_device *cdev)
 {
 	struct qdio_q *q;
 	struct qdio_irq *irq_ptr = cdev->private->qdio_data;
+	unsigned int i;
 
 	if (!irq_ptr)
 		return -ENODEV;
 
-	q = irq_ptr->input_qs[nr];
 	clear_nonshared_ind(irq_ptr);
-	qdio_stop_polling(q);
-	clear_bit(QDIO_QUEUE_IRQS_DISABLED, &q->u.in.queue_irq_state);
+
+	for_each_input_queue(irq_ptr, q, i)
+		qdio_stop_polling(q);
+
+	clear_bit(QDIO_IRQ_DISABLED, &irq_ptr->poll_state);
 
 	/*
 	 * We need to check again to not lose initiative after
@@ -1635,13 +1632,16 @@ int qdio_start_irq(struct ccw_device *cdev, int nr)
 	 */
 	if (test_nonshared_ind(irq_ptr))
 		goto rescan;
-	if (!qdio_inbound_q_done(q, q->first_to_check))
-		goto rescan;
+
+	for_each_input_queue(irq_ptr, q, i) {
+		if (!qdio_inbound_q_done(q, q->first_to_check))
+			goto rescan;
+	}
+
 	return 0;
 
 rescan:
-	if (test_and_set_bit(QDIO_QUEUE_IRQS_DISABLED,
-			     &q->u.in.queue_irq_state))
+	if (test_and_set_bit(QDIO_IRQ_DISABLED, &irq_ptr->poll_state))
 		return 0;
 	else
 		return 1;
@@ -1729,23 +1729,19 @@ EXPORT_SYMBOL(qdio_get_next_buffers);
 /**
  * qdio_stop_irq - disable interrupt processing for the device
  * @cdev: associated ccw_device for the qdio subchannel
- * @nr: input queue number
  *
  * Return codes
  *   0 - interrupts were already disabled
 *   1 - interrupts successfully disabled
 */
-int qdio_stop_irq(struct ccw_device *cdev, int nr)
+int qdio_stop_irq(struct ccw_device *cdev)
 {
-	struct qdio_q *q;
 	struct qdio_irq *irq_ptr = cdev->private->qdio_data;
 
 	if (!irq_ptr)
 		return -ENODEV;
 
-	q = irq_ptr->input_qs[nr];
-	if (test_and_set_bit(QDIO_QUEUE_IRQS_DISABLED,
-			     &q->u.in.queue_irq_state))
+	if (test_and_set_bit(QDIO_IRQ_DISABLED, &irq_ptr->poll_state))
 		return 0;
 	else
 		return 1;
diff --git a/drivers/s390/cio/qdio_setup.c b/drivers/s390/cio/qdio_setup.c
index 66e4bdca9d89..7b831bb4e229 100644
--- a/drivers/s390/cio/qdio_setup.c
+++ b/drivers/s390/cio/qdio_setup.c
@@ -224,15 +224,6 @@ static void setup_queues(struct qdio_irq *irq_ptr,
 		setup_queues_misc(q, irq_ptr, qdio_init->input_handler, i);
 
 		q->is_input_q = 1;
-		if (qdio_init->queue_start_poll_array &&
-		    qdio_init->queue_start_poll_array[i]) {
-			q->u.in.queue_start_poll =
-				qdio_init->queue_start_poll_array[i];
-			set_bit(QDIO_QUEUE_IRQS_DISABLED,
-				&q->u.in.queue_irq_state);
-		} else {
-			q->u.in.queue_start_poll = NULL;
-		}
 
 		setup_storage_lists(q, irq_ptr, input_sbal_array, i);
 		input_sbal_array += QDIO_MAX_BUFFERS_PER_Q;
@@ -483,6 +474,13 @@ int qdio_setup_irq(struct qdio_initialize *init_data)
 	ccw_device_get_schid(irq_ptr->cdev, &irq_ptr->schid);
 	setup_queues(irq_ptr, init_data);
 
+	if (init_data->irq_poll) {
+		irq_ptr->irq_poll = init_data->irq_poll;
+		set_bit(QDIO_IRQ_DISABLED, &irq_ptr->poll_state);
+	} else {
+		irq_ptr->irq_poll = NULL;
+	}
+
 	setup_qib(irq_ptr, init_data);
 	qdio_setup_thinint(irq_ptr);
 	set_impl_params(irq_ptr, init_data->qib_param_field_format,
diff --git a/drivers/s390/cio/qdio_thinint.c b/drivers/s390/cio/qdio_thinint.c
index 7c4e4ec08a12..8f315c53de23 100644
--- a/drivers/s390/cio/qdio_thinint.c
+++ b/drivers/s390/cio/qdio_thinint.c
@@ -135,28 +135,24 @@ static inline void tiqdio_call_inq_handlers(struct qdio_irq *irq)
 	    has_multiple_inq_on_dsci(irq))
 		xchg(irq->dsci, 0);
 
+	if (irq->irq_poll) {
+		if (!test_and_set_bit(QDIO_IRQ_DISABLED, &irq->poll_state))
+			irq->irq_poll(irq->cdev, irq->int_parm);
+		else
+			QDIO_PERF_STAT_INC(irq, int_discarded);
+
+		return;
+	}
+
 	for_each_input_queue(irq, q, i) {
-		if (q->u.in.queue_start_poll) {
-			/* skip if polling is enabled or already in work */
-			if (test_and_set_bit(QDIO_QUEUE_IRQS_DISABLED,
-					     &q->u.in.queue_irq_state)) {
-				QDIO_PERF_STAT_INC(irq, int_discarded);
-				continue;
-			}
-
-			/* avoid dsci clear here, done after processing */
-			q->u.in.queue_start_poll(irq->cdev, q->nr,
-						 irq->int_parm);
-		} else {
-			if (!shared_ind(irq))
-				xchg(irq->dsci, 0);
-
-			/*
-			 * Call inbound processing but not directly
-			 * since that could starve other thinint queues.
-			 */
-			tasklet_schedule(&q->tasklet);
-		}
+		if (!shared_ind(irq))
+			xchg(irq->dsci, 0);
+
+		/*
+		 * Call inbound processing but not directly
+		 * since that could starve other thinint queues.
+		 */
+		tasklet_schedule(&q->tasklet);
 	}
 }
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index dd9bee620f46..07b2231f028f 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -548,14 +548,6 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
 	qdio_release_aob(aob);
 }
 
-static inline int qeth_is_cq(struct qeth_card *card, unsigned int queue)
-{
-	return card->options.cq == QETH_CQ_ENABLED &&
-	       card->qdio.c_q != NULL &&
-	       queue != 0 &&
-	       queue == card->qdio.no_in_queues - 1;
-}
-
 static void qeth_setup_ccw(struct ccw1 *ccw, u8 cmd_code, u8 flags, u32 len,
 			   void *data)
 {
@@ -3469,8 +3461,7 @@ static void qeth_check_outbound_queue(struct qeth_qdio_out_q *queue)
 	}
 }
 
-static void qeth_qdio_start_poll(struct ccw_device *ccwdev, int queue,
-				 unsigned long card_ptr)
+static void qeth_qdio_poll(struct ccw_device *cdev, unsigned long card_ptr)
 {
 	struct qeth_card *card = (struct qeth_card *)card_ptr;
 
@@ -3508,9 +3499,6 @@ static void qeth_qdio_cq_handler(struct qeth_card *card, unsigned int qdio_err,
 	int i;
 	int rc;
 
-	if (!qeth_is_cq(card, queue))
-		return;
-
 	QETH_CARD_TEXT_(card, 5, "qcqhe%d", first_element);
 	QETH_CARD_TEXT_(card, 5, "qcqhc%d", count);
 	QETH_CARD_TEXT_(card, 5, "qcqherr%d", qdio_err);
@@ -3556,9 +3544,7 @@ static void qeth_qdio_input_handler(struct ccw_device *ccwdev,
 	QETH_CARD_TEXT_(card, 2, "qihq%d", queue);
 	QETH_CARD_TEXT_(card, 2, "qiec%d", qdio_err);
 
-	if (qeth_is_cq(card, queue))
-		qeth_qdio_cq_handler(card, qdio_err, queue, first_elem, count);
-	else if (qdio_err)
+	if (qdio_err)
 		qeth_schedule_recovery(card);
 }
 
@@ -4805,10 +4791,7 @@ static void qeth_determine_capabilities(struct qeth_card *card)
 }
 
 static void qeth_qdio_establish_cq(struct qeth_card *card,
-				   struct qdio_buffer **in_sbal_ptrs,
-				   void (**queue_start_poll)
-				   (struct ccw_device *, int,
-				    unsigned long))
+				   struct qdio_buffer **in_sbal_ptrs)
 {
 	int i;
 
@@ -4819,8 +4802,6 @@ static void qeth_qdio_establish_cq(struct qeth_card *card,
 		for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; i++)
 			in_sbal_ptrs[offset + i] =
 				card->qdio.c_q->bufs[i].buffer;
-
-		queue_start_poll[card->qdio.no_in_queues - 1] = NULL;
 	}
 }
 
@@ -4829,7 +4810,6 @@ static int qeth_qdio_establish(struct qeth_card *card)
 	struct qdio_initialize init_data;
 	char *qib_param_field;
 	struct qdio_buffer **in_sbal_ptrs;
-	void (**queue_start_poll) (struct ccw_device *, int, unsigned long);
 	struct qdio_buffer **out_sbal_ptrs;
 	int i, j, k;
 	int rc = 0;
@@ -4856,16 +4836,7 @@ static int qeth_qdio_establish(struct qeth_card *card)
 	for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; i++)
 		in_sbal_ptrs[i] = card->qdio.in_q->bufs[i].buffer;
 
-	queue_start_poll = kcalloc(card->qdio.no_in_queues, sizeof(void *),
-				   GFP_KERNEL);
-	if (!queue_start_poll) {
-		rc = -ENOMEM;
-		goto out_free_in_sbals;
-	}
-	for (i = 0; i < card->qdio.no_in_queues; ++i)
-		queue_start_poll[i] = qeth_qdio_start_poll;
-
-	qeth_qdio_establish_cq(card, in_sbal_ptrs, queue_start_poll);
+	qeth_qdio_establish_cq(card, in_sbal_ptrs);
 
 	out_sbal_ptrs = kcalloc(card->qdio.no_out_queues *
 				QDIO_MAX_BUFFERS_PER_Q,
 				sizeof(void *),
 				GFP_KERNEL);
 	if (!out_sbal_ptrs) {
 		rc = -ENOMEM;
-		goto out_free_queue_start_poll;
+		goto out_free_in_sbals;
 	}
 
 	for (i = 0, k = 0; i < card->qdio.no_out_queues; ++i)
@@ -4891,7 +4862,7 @@ static int qeth_qdio_establish(struct qeth_card *card)
 	init_data.no_output_qs = card->qdio.no_out_queues;
 	init_data.input_handler = qeth_qdio_input_handler;
 	init_data.output_handler = qeth_qdio_output_handler;
-	init_data.queue_start_poll_array = queue_start_poll;
+	init_data.irq_poll = qeth_qdio_poll;
 	init_data.int_parm = (unsigned long) card;
 	init_data.input_sbal_addr_array = in_sbal_ptrs;
 	init_data.output_sbal_addr_array = out_sbal_ptrs;
@@ -4924,8 +4895,6 @@ static int qeth_qdio_establish(struct qeth_card *card)
 	}
 out:
 	kfree(out_sbal_ptrs);
-out_free_queue_start_poll:
-	kfree(queue_start_poll);
out_free_in_sbals:
 	kfree(in_sbal_ptrs);
 out_free_qib_param:
@@ -5581,6 +5550,24 @@ static unsigned int qeth_rx_poll(struct qeth_card *card, int budget)
 	return work_done;
 }
 
+static void qeth_cq_poll(struct qeth_card *card)
+{
+	unsigned int work_done = 0;
+
+	while (work_done < QDIO_MAX_BUFFERS_PER_Q) {
+		unsigned int start, error;
+		int completed;
+
+		completed = qdio_inspect_queue(CARD_DDEV(card), 1, true, &start,
+					       &error);
+		if (completed <= 0)
+			return;
+
+		qeth_qdio_cq_handler(card, error, 1, start, completed);
+		work_done += completed;
+	}
+}
+
 int qeth_poll(struct napi_struct *napi, int budget)
 {
 	struct qeth_card *card = container_of(napi, struct qeth_card, napi);
@@ -5588,12 +5575,15 @@ int qeth_poll(struct napi_struct *napi, int budget)
 
 	work_done = qeth_rx_poll(card, budget);
 
+	if (card->options.cq == QETH_CQ_ENABLED)
+		qeth_cq_poll(card);
+
 	/* Exhausted the RX budget. Keep IRQ disabled, we get called again. */
 	if (budget && work_done >= budget)
 		return work_done;
 
 	if (napi_complete_done(napi, work_done) &&
-	    qdio_start_irq(CARD_DDEV(card), 0))
+	    qdio_start_irq(CARD_DDEV(card)))
 		napi_schedule(napi);
 
 	return work_done;
@@ -6756,7 +6746,7 @@ int qeth_stop(struct net_device *dev)
 	}
 
 	napi_disable(&card->napi);
-	qdio_stop_irq(CARD_DDEV(card), 0);
+	qdio_stop_irq(CARD_DDEV(card));
 
 	return 0;
 }
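[Editor's note] To make the bounding behaviour of the new
qeth_cq_poll() loop concrete, here is a small standalone model: it
drains completions until one full queue's worth (QDIO_MAX_BUFFERS_PER_Q)
is done, then yields. The inspect function and its counters are
stand-ins for qdio_inspect_queue(), not the real API.

#include <stdio.h>

#define QDIO_MAX_BUFFERS_PER_Q 128

/* Stand-in for qdio_inspect_queue(): returns the number of completed
 * buffers found, 0 when the queue is currently empty. */
static int inspect_queue(unsigned int *start)
{
	static int pending = 300;	/* pretend 300 completions arrive */
	int batch = pending > 32 ? 32 : pending;

	*start = 0;
	pending -= batch;
	return batch;
}

/* Poll the CQ without consuming NAPI budget, but yield after one full
 * queue's worth of buffers so we don't hog the CPU; the follow-up
 * rescan in qdio_start_irq() reschedules NAPI if work remains. */
static void cq_poll(void)
{
	unsigned int work_done = 0;

	while (work_done < QDIO_MAX_BUFFERS_PER_Q) {
		unsigned int start;
		int completed = inspect_queue(&start);

		if (completed <= 0)
			return;

		printf("handled %d CQ buffers from index %u\n",
		       completed, start);
		work_done += completed;
	}
	/* 128 buffers done: yield even though completions remain */
}

int main(void)
{
	cq_poll();
	return 0;
}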
From patchwork Tue Mar 24 18:24:42 2020
X-Patchwork-Submitter: Julian Wiedmann
X-Patchwork-Id: 221946
From: Julian Wiedmann
To: David Miller
Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Julian Wiedmann
Subject: [PATCH net-next 05/11] s390/qeth: simplify L3 dev_id logic
Date: Tue, 24 Mar 2020 19:24:42 +0100
Message-Id: <20200324182448.95362-6-jwi@linux.ibm.com>
In-Reply-To: <20200324182448.95362-1-jwi@linux.ibm.com>
References: <20200324182448.95362-1-jwi@linux.ibm.com>

The logic that deals with errors from qeth_l3_get_unique_id() is quite
complex: it sets card->unique_id to 0xfffe, additionally flags it as
UNIQUE_ID_NOT_BY_CARD, and later takes this flag as a cue to not
propagate card->unique_id to dev->dev_id. With dev->dev_id thus holding
0, addrconf_ifid_eui48() applies its default behaviour.

Get rid of all the special bit masks, and just return the old uid in
case of an error. For the vast majority of cases this will be 0 (and so
we still get the desired default behaviour) - with the rare exception
where qeth_l3_get_unique_id() might have been called earlier but the
initialization then failed at a later point.
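[Editor's note] A minimal sketch of the simplified error handling: on
failure the helper returns the uid it was given, so dev->dev_id keeps
its previous value (usually 0, which preserves the default ifid
behaviour). The query function below is a made-up stand-in for the IPA
command exchange.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool query_hw_uid(uint16_t *uid)
{
	(void)uid;
	return false;	/* pretend the adapter query fails */
}

static uint16_t get_unique_id(uint16_t uid)
{
	uint16_t hw_uid;

	if (query_hw_uid(&hw_uid))
		uid = hw_uid;

	return uid;	/* old value on error, no sentinel bit masks */
}

int main(void)
{
	uint16_t dev_id = 0;

	dev_id = get_unique_id(dev_id);
	printf("dev_id = %u\n", dev_id);	/* still 0 -> default ifid */
	return 0;
}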
Signed-off-by: Julian Wiedmann
---
 drivers/s390/net/qeth_core.h    |  5 -----
 drivers/s390/net/qeth_l3_main.c | 30 +++++++++++++-----------------
 2 files changed, 13 insertions(+), 22 deletions(-)

diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
index 9840d4fab010..257b7f3c5558 100644
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -178,10 +178,6 @@ struct qeth_vnicc_info {
 #define QETH_RECLAIM_WORK_TIME	HZ
 #define QETH_MAX_PORTNO		15
 
-/*IPv6 address autoconfiguration stuff*/
-#define UNIQUE_ID_IF_CREATE_ADDR_FAILED 0xfffe
-#define UNIQUE_ID_NOT_BY_CARD 0x10000
-
 /*****************************************************************************/
 /* QDIO queue and buffer handling                                            */
 /*****************************************************************************/
@@ -687,7 +683,6 @@ struct qeth_card_info {
 	enum qeth_card_types type;
 	enum qeth_link_types link_type;
 	int broadcast_capable;
-	int unique_id;
 	bool layer_enforced;
 	struct qeth_card_blkt blkt;
 	__u32 diagass_support;
diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
index 83ae75cf1389..b48cd0df3e31 100644
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -949,39 +949,36 @@ static int qeth_l3_get_unique_id_cb(struct qeth_card *card,
 		struct qeth_reply *reply, unsigned long data)
 {
 	struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *) data;
+	u16 *uid = reply->param;
 
 	if (cmd->hdr.return_code == 0) {
-		card->info.unique_id = cmd->data.create_destroy_addr.uid;
+		*uid = cmd->data.create_destroy_addr.uid;
 		return 0;
 	}
 
-	card->info.unique_id = UNIQUE_ID_IF_CREATE_ADDR_FAILED |
-			       UNIQUE_ID_NOT_BY_CARD;
 	dev_warn(&card->gdev->dev,
 		 "The network adapter failed to generate a unique ID\n");
 	return -EIO;
 }
 
-static int qeth_l3_get_unique_id(struct qeth_card *card)
+static u16 qeth_l3_get_unique_id(struct qeth_card *card, u16 uid)
 {
-	int rc = 0;
 	struct qeth_cmd_buffer *iob;
 
 	QETH_CARD_TEXT(card, 2, "guniqeid");
 
-	if (!qeth_is_supported(card, IPA_IPV6)) {
-		card->info.unique_id = UNIQUE_ID_IF_CREATE_ADDR_FAILED |
-				       UNIQUE_ID_NOT_BY_CARD;
-		return 0;
-	}
+	if (!qeth_is_supported(card, IPA_IPV6))
+		goto out;
 
 	iob = qeth_ipa_alloc_cmd(card, IPA_CMD_CREATE_ADDR, QETH_PROT_IPV6,
 				 IPA_DATA_SIZEOF(create_destroy_addr));
 	if (!iob)
-		return -ENOMEM;
+		goto out;
 
-	__ipa_cmd(iob)->data.create_destroy_addr.uid = card->info.unique_id;
-	rc = qeth_send_ipa_cmd(card, iob, qeth_l3_get_unique_id_cb, NULL);
-	return rc;
+	__ipa_cmd(iob)->data.create_destroy_addr.uid = uid;
+	qeth_send_ipa_cmd(card, iob, qeth_l3_get_unique_id_cb, &uid);
+
+out:
+	return uid;
 }
 
 static int
@@ -1920,6 +1917,7 @@ static const struct net_device_ops qeth_l3_osa_netdev_ops = {
 
 static int qeth_l3_setup_netdev(struct qeth_card *card)
 {
+	struct net_device *dev = card->dev;
 	unsigned int headroom;
 	int rc;
 
@@ -1937,9 +1935,7 @@ static int qeth_l3_setup_netdev(struct qeth_card *card)
 		card->dev->netdev_ops = &qeth_l3_osa_netdev_ops;
 
 		/*IPv6 address autoconfiguration stuff*/
-		qeth_l3_get_unique_id(card);
-		if (!(card->info.unique_id & UNIQUE_ID_NOT_BY_CARD))
-			card->dev->dev_id = card->info.unique_id & 0xffff;
+		dev->dev_id = qeth_l3_get_unique_id(card, dev->dev_id);
 
 		if (!IS_VM_NIC(card)) {
 			card->dev->features |= NETIF_F_SG;
From patchwork Tue Mar 24 18:24:48 2020
X-Patchwork-Submitter: Julian Wiedmann
X-Patchwork-Id: 221947
From: Julian Wiedmann
To: David Miller
Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Julian Wiedmann
Subject: [PATCH net-next 11/11] s390/qeth: modernize two list helpers
Date: Tue, 24 Mar 2020 19:24:48 +0100
Message-Id: <20200324182448.95362-12-jwi@linux.ibm.com>
In-Reply-To: <20200324182448.95362-1-jwi@linux.ibm.com>
References: <20200324182448.95362-1-jwi@linux.ibm.com>

Replace list_for_each() with list_for_each_entry(), and
list_entry(head.next) with list_first_entry().
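[Editor's note] For readers less familiar with the two idioms, a
self-contained comparison: list_for_each() hands you a bare list_head
that must be converted manually with list_entry(), while
list_for_each_entry() iterates over the containing structs directly.
The list machinery below is a reduced re-implementation for the demo
(it relies on the GNU typeof extension, as the kernel's list.h does),
not the kernel header itself.

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)
#define list_first_entry(head, type, member) \
	list_entry((head)->next, type, member)
#define list_for_each(pos, head) \
	for (pos = (head)->next; pos != (head); pos = pos->next)
#define list_for_each_entry(pos, head, member) \
	for (pos = list_entry((head)->next, typeof(*pos), member); \
	     &pos->member != (head); \
	     pos = list_entry(pos->member.next, typeof(*pos), member))

struct pool_entry {
	int id;
	struct list_head list;
};

int main(void)
{
	struct list_head head = { &head, &head };
	struct pool_entry a = { .id = 1 }, b = { .id = 2 };
	struct list_head *plh;
	struct pool_entry *entry;

	/* link head -> a -> b -> head */
	head.next = &a.list;   a.list.prev = &head;
	a.list.next = &b.list; b.list.prev = &a.list;
	b.list.next = &head;   head.prev = &b.list;

	/* Old style: iterate raw nodes, convert manually. */
	list_for_each(plh, &head) {
		entry = list_entry(plh, struct pool_entry, list);
		printf("old: %d\n", entry->id);
	}

	/* New style: iterate the entries directly. */
	list_for_each_entry(entry, &head, list)
		printf("new: %d\n", entry->id);

	printf("first: %d\n",
	       list_first_entry(&head, struct pool_entry, list)->id);
	return 0;
}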
Signed-off-by: Julian Wiedmann
---
 drivers/s390/net/qeth_core_main.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index 4940e4fb556e..79a92761e50a 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -2629,15 +2629,13 @@ static void qeth_initialize_working_pool_list(struct qeth_card *card)
 static struct qeth_buffer_pool_entry *qeth_find_free_buffer_pool_entry(
 					struct qeth_card *card)
 {
-	struct list_head *plh;
 	struct qeth_buffer_pool_entry *entry;
 	int i, free;
 
 	if (list_empty(&card->qdio.in_buf_pool.entry_list))
 		return NULL;
 
-	list_for_each(plh, &card->qdio.in_buf_pool.entry_list) {
-		entry = list_entry(plh, struct qeth_buffer_pool_entry, list);
+	list_for_each_entry(entry, &card->qdio.in_buf_pool.entry_list, list) {
 		free = 1;
 		for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) {
 			if (page_count(entry->elements[i]) > 1) {
@@ -2652,8 +2650,8 @@ static struct qeth_buffer_pool_entry *qeth_find_free_buffer_pool_entry(
 	}
 
 	/* no free buffer in pool so take first one and swap pages */
-	entry = list_entry(card->qdio.in_buf_pool.entry_list.next,
-			   struct qeth_buffer_pool_entry, list);
+	entry = list_first_entry(&card->qdio.in_buf_pool.entry_list,
+				 struct qeth_buffer_pool_entry, list);
 	for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) {
 		if (page_count(entry->elements[i]) > 1) {
 			struct page *page = dev_alloc_page();