From patchwork Wed Dec 6 18:46:03 2023
X-Patchwork-Submitter: "Karan Tilak Kumar (kartilak)"
X-Patchwork-Id: 750977
From: Karan Tilak Kumar
To: sebaddel@cisco.com
Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar, Hannes Reinecke
Subject: [PATCH v5 01/13] scsi: fnic: Modify definitions to sync with VIC firmware
Date: Wed, 6 Dec 2023 10:46:03 -0800
Message-Id: <20231206184615.878755-2-kartilak@cisco.com>
In-Reply-To: <20231206184615.878755-1-kartilak@cisco.com>
References: <20231206184615.878755-1-kartilak@cisco.com>

VIC firmware has updated definitions. Modify structure and definitions to sync with the latest VIC firmware.
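For reference, a minimal standalone sketch (not part of the patch) of how the new FC/FC-NVMe mode bits added in the diff below could be tested once read from the firmware config. The flag values mirror the diff; the decode helper and the example flags word are illustrative only.

/*
 * Standalone sketch: decode the new capability bits from a flags word.
 * Flag values are taken from the diff below; print_fc_modes() is a
 * hypothetical helper, not driver code.
 */
#include <stdio.h>
#include <stdint.h>

#define VFCF_FC_INITIATOR       0x20  /* FC Initiator Mode */
#define VFCF_FC_TARGET          0x40  /* FC Target Mode */
#define VFCF_FC_NVME_INITIATOR  0x80  /* FC-NVMe Initiator Mode */
#define VFCF_FC_NVME_TARGET     0x100 /* FC-NVMe Target Mode */

static void print_fc_modes(uint32_t flags)
{
	if (flags & VFCF_FC_INITIATOR)
		printf("FC initiator mode enabled\n");
	if (flags & VFCF_FC_TARGET)
		printf("FC target mode enabled\n");
	if (flags & VFCF_FC_NVME_INITIATOR)
		printf("FC-NVMe initiator mode enabled\n");
	if (flags & VFCF_FC_NVME_TARGET)
		printf("FC-NVMe target mode enabled\n");
}

int main(void)
{
	/* Example: firmware reports FC initiator plus FC-NVMe initiator. */
	print_fc_modes(VFCF_FC_INITIATOR | VFCF_FC_NVME_INITIATOR);
	return 0;
}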
Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Reviewed-by: Hannes Reinecke Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/vnic_scsi.h | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/fnic/vnic_scsi.h b/drivers/scsi/fnic/vnic_scsi.h index 4e12f7b32d9d..f715f7942bfe 100644 --- a/drivers/scsi/fnic/vnic_scsi.h +++ b/drivers/scsi/fnic/vnic_scsi.h @@ -26,7 +26,7 @@ #define VNIC_FNIC_RATOV_MAX 255000 #define VNIC_FNIC_MAXDATAFIELDSIZE_MIN 256 -#define VNIC_FNIC_MAXDATAFIELDSIZE_MAX 2112 +#define VNIC_FNIC_MAXDATAFIELDSIZE_MAX 2048 #define VNIC_FNIC_FLOGI_RETRIES_MIN 0 #define VNIC_FNIC_FLOGI_RETRIES_MAX 0xffffffff @@ -55,7 +55,7 @@ #define VNIC_FNIC_PORT_DOWN_IO_RETRIES_MAX 255 #define VNIC_FNIC_LUNS_PER_TARGET_MIN 1 -#define VNIC_FNIC_LUNS_PER_TARGET_MAX 1024 +#define VNIC_FNIC_LUNS_PER_TARGET_MAX 4096 /* Device-specific region: scsi configuration */ struct vnic_fc_config { @@ -79,10 +79,19 @@ struct vnic_fc_config { u16 ra_tov; u16 intr_timer; u8 intr_timer_type; + u8 intr_mode; + u8 lun_queue_depth; + u8 io_timeout_retry; + u16 wq_copy_count; }; #define VFCF_FCP_SEQ_LVL_ERR 0x1 /* Enable FCP-2 Error Recovery */ #define VFCF_PERBI 0x2 /* persistent binding info available */ #define VFCF_FIP_CAPABLE 0x4 /* firmware can handle FIP */ +#define VFCF_FC_INITIATOR 0x20 /* FC Initiator Mode */ +#define VFCF_FC_TARGET 0x40 /* FC Target Mode */ +#define VFCF_FC_NVME_INITIATOR 0x80 /* FC-NVMe Initiator Mode */ +#define VFCF_FC_NVME_TARGET 0x100 /* FC-NVMe Target Mode */ + #endif /* _VNIC_SCSI_H_ */ From patchwork Wed Dec 6 18:46:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 750976 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b="S/8DgS09" Received: from rcdn-iport-1.cisco.com (rcdn-iport-1.cisco.com [173.37.86.72]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DA609D4B; Wed, 6 Dec 2023 10:46:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=13057; q=dns/txt; s=iport; t=1701888397; x=1703097997; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=TC+hmWoJpps+Jw8WdGxvA0qi6hZRj4rlNomIB1X9sew=; b=S/8DgS09lYzoeZAmjUD2r67MWk7TDBEC+Y4IvFy3LfQ9uRtXP3mIH9w5 6HniWl8FUw8NlqFGHLqFwxIBfqD6LqzyEWp7+cjK9I0qaE15a670ZAbSZ iF6Oh4rHEraugRZdI0kAeRJYht0Sv/8JBjHmXLXusCO3rSU5MIMWisZSQ 8=; X-CSE-ConnectionGUID: eDouuw9VQl+o+1dmkhQSdg== X-CSE-MsgGUID: vDlBMve/SPqrQww8MHpj7w== X-IronPort-AV: E=Sophos;i="6.04,256,1695686400"; d="scan'208";a="155367504" Received: from alln-core-4.cisco.com ([173.36.13.137]) by rcdn-iport-1.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2023 18:46:36 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by alln-core-4.cisco.com (8.15.2/8.15.2) with ESMTPSA id 3B6IkHCx010013 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 6 Dec 2023 18:46:35 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH v5 03/13] scsi: fnic: Add and improve log messages Date: Wed, 6 Dec 2023 10:46:05 -0800 Message-Id: 
<20231206184615.878755-4-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231206184615.878755-1-kartilak@cisco.com> References: <20231206184615.878755-1-kartilak@cisco.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: alln-core-4.cisco.com Add link related log messages in fnic_fcs.c, Improve log message in fnic_fcs.c, Add log message in vnic_dev.c. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- Changes between v4 and v5: Incorporate review comments from Martin: Modify patch commits to include a "---" separator. Changes between v2 and v3: Incorporate review comment from Hannes: Modify FNIC_MAIN_DBG to prepend fnic number. Modify FNIC_MAIN_DBG definition to prepend function name and line number. Modify FNIC_FCS_DBG definition to prepend function name and line number. Replace FNIC_MAIN_DBG with FNIC_FCS_DBG in fnic_fcs.c Use fnic_num as an argument to FNIC_MAIN_DBG and FNIC_FCS_DBG. Host number is still used as an argument to FNIC_MAIN_DBG and FNIC_FCS_DBG since it in turn uses shost_printk. --- drivers/scsi/fnic/fnic.h | 12 ++++--- drivers/scsi/fnic/fnic_fcs.c | 63 +++++++++++++++++++---------------- drivers/scsi/fnic/fnic_main.c | 4 +-- drivers/scsi/fnic/vnic_dev.c | 4 +++ 4 files changed, 49 insertions(+), 34 deletions(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index c6c549c633b1..faac0f93b983 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -144,13 +144,17 @@ do { \ } while (0); \ } while (0) -#define FNIC_MAIN_DBG(kern_level, host, fmt, args...) \ +#define FNIC_MAIN_DBG(kern_level, host, fnic_num, fmt, args...) \ FNIC_CHECK_LOGGING(FNIC_MAIN_LOGGING, \ - shost_printk(kern_level, host, fmt, ##args);) + shost_printk(kern_level, host, \ + "fnic<%d>: %s: %d: " fmt, fnic_num,\ + __func__, __LINE__, ##args);) -#define FNIC_FCS_DBG(kern_level, host, fmt, args...) \ +#define FNIC_FCS_DBG(kern_level, host, fnic_num, fmt, args...) \ FNIC_CHECK_LOGGING(FNIC_FCS_LOGGING, \ - shost_printk(kern_level, host, fmt, ##args);) + shost_printk(kern_level, host, \ + "fnic<%d>: %s: %d: " fmt, fnic_num,\ + __func__, __LINE__, ##args);) #define FNIC_SCSI_DBG(kern_level, host, fmt, args...) 
\ FNIC_CHECK_LOGGING(FNIC_SCSI_LOGGING, \ diff --git a/drivers/scsi/fnic/fnic_fcs.c b/drivers/scsi/fnic/fnic_fcs.c index 55632c67a8f2..5e312a55cc7d 100644 --- a/drivers/scsi/fnic/fnic_fcs.c +++ b/drivers/scsi/fnic/fnic_fcs.c @@ -63,8 +63,8 @@ void fnic_handle_link(struct work_struct *work) atomic64_set(&fnic->fnic_stats.misc_stats.current_port_speed, new_port_speed); if (old_port_speed != new_port_speed) - FNIC_MAIN_DBG(KERN_INFO, fnic->lport->host, - "Current vnic speed set to : %llu\n", + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "Current vnic speed set to: %llu\n", new_port_speed); switch (vnic_dev_port_speed(fnic->vdev)) { @@ -102,6 +102,8 @@ void fnic_handle_link(struct work_struct *work) fnic_fc_trace_set_data(fnic->lport->host->host_no, FNIC_FC_LE, "Link Status: DOWN->DOWN", strlen("Link Status: DOWN->DOWN")); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "down->down\n"); } else { if (old_link_down_cnt != fnic->link_down_cnt) { /* UP -> DOWN -> UP */ @@ -113,7 +115,7 @@ void fnic_handle_link(struct work_struct *work) "Link Status:UP_DOWN_UP", strlen("Link_Status:UP_DOWN_UP") ); - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "link down\n"); fcoe_ctlr_link_down(&fnic->ctlr); if (fnic->config.flags & VFCF_FIP_CAPABLE) { @@ -128,8 +130,8 @@ void fnic_handle_link(struct work_struct *work) fnic_fcoe_send_vlan_req(fnic); return; } - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, - "link up\n"); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "up->down->up: Link up\n"); fcoe_ctlr_link_up(&fnic->ctlr); } else { /* UP -> UP */ @@ -138,6 +140,8 @@ void fnic_handle_link(struct work_struct *work) fnic->lport->host->host_no, FNIC_FC_LE, "Link Status: UP_UP", strlen("Link Status: UP_UP")); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "up->up\n"); } } } else if (fnic->link_status) { @@ -153,7 +157,8 @@ void fnic_handle_link(struct work_struct *work) return; } - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, "link up\n"); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "down->up: Link up\n"); fnic_fc_trace_set_data(fnic->lport->host->host_no, FNIC_FC_LE, "Link Status: DOWN_UP", strlen("Link Status: DOWN_UP")); fcoe_ctlr_link_up(&fnic->ctlr); @@ -161,13 +166,14 @@ void fnic_handle_link(struct work_struct *work) /* UP -> DOWN */ fnic->lport->host_stats.link_failure_count++; spin_unlock_irqrestore(&fnic->fnic_lock, flags); - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, "link down\n"); + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "up->down: Link down\n"); fnic_fc_trace_set_data( fnic->lport->host->host_no, FNIC_FC_LE, "Link Status: UP_DOWN", strlen("Link Status: UP_DOWN")); if (fnic->config.flags & VFCF_FIP_CAPABLE) { - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "deleting fip-timer during link-down\n"); del_timer_sync(&fnic->fip_timer); } @@ -270,12 +276,12 @@ void fnic_handle_event(struct work_struct *work) spin_lock_irqsave(&fnic->fnic_lock, flags); break; case FNIC_EVT_START_FCF_DISC: - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "Start FCF Discovery\n"); fnic_fcoe_start_fcf_disc(fnic); break; default: - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "Unknown event 0x%x\n", fevt->event); break; } @@ -370,7 +376,7 @@ static void fnic_fcoe_send_vlan_req(struct fnic 
*fnic) fnic->set_vlan(fnic, 0); if (printk_ratelimit()) - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "Sending VLAN request...\n"); skb = dev_alloc_skb(sizeof(struct fip_vlan)); @@ -423,12 +429,12 @@ static void fnic_fcoe_process_vlan_resp(struct fnic *fnic, struct sk_buff *skb) u64 sol_time; unsigned long flags; - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "Received VLAN response...\n"); fiph = (struct fip_header *) skb->data; - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "Received VLAN response... OP 0x%x SUB_OP 0x%x\n", ntohs(fiph->fip_op), fiph->fip_subcode); @@ -463,7 +469,7 @@ static void fnic_fcoe_process_vlan_resp(struct fnic *fnic, struct sk_buff *skb) if (list_empty(&fnic->vlans)) { /* retry from timer */ atomic64_inc(&fnic_stats->vlan_stats.resp_withno_vlanID); - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "No VLAN descriptors in FIP VLAN response\n"); spin_unlock_irqrestore(&fnic->vlans_lock, flags); goto out; @@ -721,7 +727,8 @@ void fnic_update_mac_locked(struct fnic *fnic, u8 *new) new = ctl; if (ether_addr_equal(data, new)) return; - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, "update_mac %pM\n", new); + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, + "update_mac %pM\n", new); if (!is_zero_ether_addr(data) && !ether_addr_equal(data, ctl)) vnic_dev_del_addr(fnic->vdev, data); memcpy(data, new, ETH_ALEN); @@ -763,8 +770,9 @@ void fnic_set_port_id(struct fc_lport *lport, u32 port_id, struct fc_frame *fp) u8 *mac; int ret; - FNIC_FCS_DBG(KERN_DEBUG, lport->host, "set port_id %x fp %p\n", - port_id, fp); + FNIC_FCS_DBG(KERN_DEBUG, lport->host, fnic->fnic_num, + "set port_id 0x%x fp 0x%p\n", + port_id, fp); /* * If we're clearing the FC_ID, change to use the ctl_src_addr. @@ -790,10 +798,9 @@ void fnic_set_port_id(struct fc_lport *lport, u32 port_id, struct fc_frame *fp) if (fnic->state == FNIC_IN_ETH_MODE || fnic->state == FNIC_IN_FC_MODE) fnic->state = FNIC_IN_ETH_TRANS_FC_MODE; else { - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, - "Unexpected fnic state %s while" - " processing flogi resp\n", - fnic_state_to_str(fnic->state)); + FNIC_FCS_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "Unexpected fnic state: %s processing FLOGI response", + fnic_state_to_str(fnic->state)); spin_unlock_irq(&fnic->fnic_lock); return; } @@ -870,7 +877,7 @@ static void fnic_rq_cmpl_frame_recv(struct vnic_rq *rq, struct cq_desc skb_trim(skb, bytes_written); if (!fcs_ok) { atomic64_inc(&fnic_stats->misc_stats.frame_errors); - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "fcs error. 
dropping packet.\n"); goto drop; } @@ -886,7 +893,7 @@ static void fnic_rq_cmpl_frame_recv(struct vnic_rq *rq, struct cq_desc if (!fcs_ok || packet_error || !fcoe_fc_crc_ok || fcoe_enc_error) { atomic64_inc(&fnic_stats->misc_stats.frame_errors); - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "fnic rq_cmpl fcoe x%x fcsok x%x" " pkterr x%x fcoe_fc_crc_ok x%x, fcoe_enc_err" " x%x\n", @@ -967,7 +974,7 @@ int fnic_alloc_rq_frame(struct vnic_rq *rq) len = FC_FRAME_HEADROOM + FC_MAX_FRAME + FC_FRAME_TAILROOM; skb = dev_alloc_skb(len); if (!skb) { - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "Unable to allocate RQ sk_buff\n"); return -ENOMEM; } @@ -1341,12 +1348,12 @@ void fnic_handle_fip_timer(struct fnic *fnic) } vlan = list_first_entry(&fnic->vlans, struct fcoe_vlan, list); - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "fip_timer: vlan %d state %d sol_count %d\n", vlan->vid, vlan->state, vlan->sol_count); switch (vlan->state) { case FIP_VLAN_USED: - FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "FIP VLAN is selected for FC transaction\n"); spin_unlock_irqrestore(&fnic->vlans_lock, flags); break; @@ -1365,7 +1372,7 @@ void fnic_handle_fip_timer(struct fnic *fnic) * no response on this vlan, remove from the list. * Try the next vlan */ - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "Dequeue this VLAN ID %d from list\n", vlan->vid); list_del(&vlan->list); @@ -1375,7 +1382,7 @@ void fnic_handle_fip_timer(struct fnic *fnic) /* we exhausted all vlans, restart vlan disc */ spin_unlock_irqrestore(&fnic->vlans_lock, flags); - FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, + FNIC_FCS_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, "fip_timer: vlan list empty, " "trigger vlan disc\n"); fnic_event_enq(fnic, FNIC_EVT_START_VLAN_DISC); diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index f989a5d7a229..e8c567a46994 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -210,7 +210,7 @@ static struct fc_host_statistics *fnic_get_stats(struct Scsi_Host *host) spin_unlock_irqrestore(&fnic->fnic_lock, flags); if (ret) { - FNIC_MAIN_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_MAIN_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "fnic: Get vnic stats failed" " 0x%x", ret); return stats; @@ -322,7 +322,7 @@ static void fnic_reset_host_stats(struct Scsi_Host *host) spin_unlock_irqrestore(&fnic->fnic_lock, flags); if (ret) { - FNIC_MAIN_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_MAIN_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "fnic: Reset vnic stats failed" " 0x%x", ret); return; diff --git a/drivers/scsi/fnic/vnic_dev.c b/drivers/scsi/fnic/vnic_dev.c index 3e5b437c0492..e0b173cc9d5f 100644 --- a/drivers/scsi/fnic/vnic_dev.c +++ b/drivers/scsi/fnic/vnic_dev.c @@ -143,6 +143,10 @@ static int vnic_dev_discover_res(struct vnic_dev *vdev, vdev->res[type].vaddr = (char __iomem *)bar->vaddr + bar_offset; } + pr_info("res_type_wq: %d res_type_rq: %d res_type_cq: %d res_type_intr_ctrl: %d\n", + vdev->res[RES_TYPE_WQ].count, vdev->res[RES_TYPE_RQ].count, + vdev->res[RES_TYPE_CQ].count, vdev->res[RES_TYPE_INTR_CTRL].count); + return 0; } From patchwork Wed Dec 6 18:46:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: "Karan Tilak Kumar (kartilak)"
X-Patchwork-Id: 750975
From: Karan Tilak Kumar
To: sebaddel@cisco.com
Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar, Hannes Reinecke
Subject: [PATCH v5 05/13] scsi: fnic: Get copy workqueue count and interrupt mode from config
Date: Wed, 6 Dec 2023 10:46:07 -0800
Message-Id: <20231206184615.878755-6-kartilak@cisco.com>
In-Reply-To: <20231206184615.878755-1-kartilak@cisco.com>
References: <20231206184615.878755-1-kartilak@cisco.com>

Get the copy workqueue count and interrupt mode from the configuration. The config can be changed via UCSM. Add logs to print the interrupt mode and copy workqueue count. Add logs to print the vNIC resources.
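As a rough illustration of the fallback in fnic_get_vnic_config() in the diff below (older firmware leaves wq_copy_count at zero, so the driver defaults it to 1 and clamps it to its compile-time maximum), here is a standalone sketch. The value of FNIC_WQ_COPY_MAX used here is an assumption for the sketch, not taken from the driver.

/*
 * Standalone sketch (userspace, not driver code) of the wq_copy_count
 * sanitization: default to 1 for older firmware, never exceed the
 * driver's ceiling.  FNIC_WQ_COPY_MAX is assumed for this sketch.
 */
#include <stdio.h>
#include <stdint.h>

#define FNIC_WQ_COPY_MAX 64	/* assumed ceiling for this sketch */

static uint16_t clamp_wq_copy_count(uint16_t fw_value)
{
	/* Older firmware does not fill in wq_copy_count: fall back to 1. */
	if (fw_value == 0)
		fw_value = 1;

	/* Never exceed what the driver was built to handle. */
	if (fw_value > FNIC_WQ_COPY_MAX)
		fw_value = FNIC_WQ_COPY_MAX;

	return fw_value;
}

int main(void)
{
	printf("fw 0   -> %u\n", (unsigned int)clamp_wq_copy_count(0));   /* 1  */
	printf("fw 8   -> %u\n", (unsigned int)clamp_wq_copy_count(8));   /* 8  */
	printf("fw 255 -> %u\n", (unsigned int)clamp_wq_copy_count(255)); /* 64 */
	return 0;
}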
Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Reviewed-by: Hannes Reinecke Signed-off-by: Karan Tilak Kumar --- drivers/scsi/fnic/fnic_res.c | 42 ++++++++++++++++++++++++++++++------ 1 file changed, 36 insertions(+), 6 deletions(-) diff --git a/drivers/scsi/fnic/fnic_res.c b/drivers/scsi/fnic/fnic_res.c index 109316cc4ad9..33dd27f6f24e 100644 --- a/drivers/scsi/fnic/fnic_res.c +++ b/drivers/scsi/fnic/fnic_res.c @@ -57,6 +57,8 @@ int fnic_get_vnic_config(struct fnic *fnic) GET_CONFIG(port_down_timeout); GET_CONFIG(port_down_io_retries); GET_CONFIG(luns_per_tgt); + GET_CONFIG(intr_mode); + GET_CONFIG(wq_copy_count); c->wq_enet_desc_count = min_t(u32, VNIC_FNIC_WQ_DESCS_MAX, @@ -131,6 +133,12 @@ int fnic_get_vnic_config(struct fnic *fnic) c->intr_timer = min_t(u16, VNIC_INTR_TIMER_MAX, c->intr_timer); c->intr_timer_type = c->intr_timer_type; + /* for older firmware, GET_CONFIG will not return anything */ + if (c->wq_copy_count == 0) + c->wq_copy_count = 1; + + c->wq_copy_count = min_t(u16, FNIC_WQ_COPY_MAX, c->wq_copy_count); + shost_printk(KERN_INFO, fnic->lport->host, "vNIC MAC addr %pM " "wq/wq_copy/rq %d/%d/%d\n", @@ -161,6 +169,10 @@ int fnic_get_vnic_config(struct fnic *fnic) shost_printk(KERN_INFO, fnic->lport->host, "vNIC port dn io retries %d port dn timeout %d\n", c->port_down_io_retries, c->port_down_timeout); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC wq_copy_count: %d\n", c->wq_copy_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC intr mode: %d\n", c->intr_mode); return 0; } @@ -187,12 +199,25 @@ int fnic_set_nic_config(struct fnic *fnic, u8 rss_default_cpu, void fnic_get_res_counts(struct fnic *fnic) { fnic->wq_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_WQ); - fnic->raw_wq_count = fnic->wq_count - 1; - fnic->wq_copy_count = fnic->wq_count - fnic->raw_wq_count; + fnic->raw_wq_count = 1; + fnic->wq_copy_count = fnic->config.wq_copy_count; fnic->rq_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_RQ); fnic->cq_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_CQ); fnic->intr_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_INTR_CTRL); + + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources wq_count: %d\n", fnic->wq_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources raw_wq_count: %d\n", fnic->raw_wq_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources wq_copy_count: %d\n", fnic->wq_copy_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources rq_count: %d\n", fnic->rq_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources cq_count: %d\n", fnic->cq_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC fw resources intr_count: %d\n", fnic->intr_count); } void fnic_free_vnic_resources(struct fnic *fnic) @@ -234,10 +259,15 @@ int fnic_alloc_vnic_resources(struct fnic *fnic) intr_mode == VNIC_DEV_INTR_MODE_MSIX ? 
"MSI-X" : "unknown"); - shost_printk(KERN_INFO, fnic->lport->host, "vNIC resources avail: " - "wq %d cp_wq %d raw_wq %d rq %d cq %d intr %d\n", - fnic->wq_count, fnic->wq_copy_count, fnic->raw_wq_count, - fnic->rq_count, fnic->cq_count, fnic->intr_count); + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC resources avail: wq %d cp_wq %d raw_wq %d rq %d", + fnic->wq_count, fnic->wq_copy_count, + fnic->raw_wq_count, fnic->rq_count); + + shost_printk(KERN_INFO, fnic->lport->host, + "vNIC resources avail: cq %d intr %d cpy-wq desc count %d\n", + fnic->cq_count, fnic->intr_count, + fnic->config.wq_copy_desc_count); /* Allocate Raw WQ used for FCS frames */ for (i = 0; i < fnic->raw_wq_count; i++) { From patchwork Wed Dec 6 18:46:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 750974 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b="BCrasn8h" Received: from rcdn-iport-2.cisco.com (rcdn-iport-2.cisco.com [173.37.86.73]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B6EE3D5F; Wed, 6 Dec 2023 10:46:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=15762; q=dns/txt; s=iport; t=1701888415; x=1703098015; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ZomJntL9Q2JYbuazVPu85JC6N9ylwp3bDcMA5WHGqZY=; b=BCrasn8hmZsGqL0xZqyUtzlN0zeuq0jXbDlK2fLUzyemVcWsYvS0SG0z uxl5h91NKaubQC3X3mMA2LyXZP8a11HcepmzDEwByhKwDDzTjG6FVKg0m GI67Xys+WtC0Axze3vRehD5xTsh7Ywqz5efC6LzwBU2+CR6w6/HFgnA9h 0=; X-CSE-ConnectionGUID: l56VDD1yQwSOk7ZrP7XLHQ== X-CSE-MsgGUID: 3MWiq/n1RyaNNXIZAGwD3Q== X-IronPort-AV: E=Sophos;i="6.04,256,1695686400"; d="scan'208";a="155948814" Received: from alln-core-4.cisco.com ([173.36.13.137]) by rcdn-iport-2.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2023 18:46:53 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by alln-core-4.cisco.com (8.15.2/8.15.2) with ESMTPSA id 3B6IkHD3010013 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 6 Dec 2023 18:46:52 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar , kernel test robot Subject: [PATCH v5 07/13] scsi: fnic: Modify ISRs to support multiqueue(MQ) Date: Wed, 6 Dec 2023 10:46:09 -0800 Message-Id: <20231206184615.878755-8-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231206184615.878755-1-kartilak@cisco.com> References: <20231206184615.878755-1-kartilak@cisco.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: alln-core-4.cisco.com Modify interrupt service routines for INTx, MSI, and MSI-x to support multiqueue. Modify parameter list of fnic_wq_copy_cmpl_handler to take cq_index. Modify fnic_cleanup function to use the new function call of fnic_wq_copy_cmpl_handler. Refactor code to set interrupt mode to MSI-x to a new function. Add a new stat for intx_dummy. 
Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202310251847.4T8BVZAZ-lkp@intel.com/ Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- Changes between v4 and v5: Incorporate review comments from Martin: Modify patch commits to include a "---" separator. Changes between v2 and v3: Incorporate the following review comments from Hannes: Replace cpy_wq_base with copy_wq_base. Remove C99 style comment. Extend review comments of FNIC_MAIN_DBG and FNIC_SCSI_DBG to FNIC_ISR_DBG: Use fnic_num as an argument to FNIC_ISR_DBG. Modify definition of FNIC_ISR_DBG. Host number is still used as an argument to FNIC_ISR_DBG since it in turn uses shost_printk. Removed reviewed by tag from Hannes due to additional modifications. Changes between v1 and v2: Suppress warning from kernel test bot. --- drivers/scsi/fnic/fnic.h | 9 +- drivers/scsi/fnic/fnic_isr.c | 166 ++++++++++++++++++++++++--------- drivers/scsi/fnic/fnic_main.c | 4 +- drivers/scsi/fnic/fnic_scsi.c | 38 +++----- drivers/scsi/fnic/fnic_stats.h | 1 + 5 files changed, 144 insertions(+), 74 deletions(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index 07d67fe903f2..97a2e547584a 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -160,9 +160,11 @@ do { \ FNIC_CHECK_LOGGING(FNIC_SCSI_LOGGING, \ shost_printk(kern_level, host, fmt, ##args);) -#define FNIC_ISR_DBG(kern_level, host, fmt, args...) \ +#define FNIC_ISR_DBG(kern_level, host, fnic_num, fmt, args...) \ FNIC_CHECK_LOGGING(FNIC_ISR_LOGGING, \ - shost_printk(kern_level, host, fmt, ##args);) + shost_printk(kern_level, host, \ + "fnic<%d>: %s: %d: " fmt, fnic_num,\ + __func__, __LINE__, ##args);) #define FNIC_MAIN_NOTE(kern_level, host, fmt, args...) 
\ shost_printk(kern_level, host, fmt, ##args) @@ -347,6 +349,7 @@ extern const struct attribute_group *fnic_host_groups[]; void fnic_clear_intr_mode(struct fnic *fnic); int fnic_set_intr_mode(struct fnic *fnic); +int fnic_set_intr_mode_msix(struct fnic *fnic); void fnic_free_intr(struct fnic *fnic); int fnic_request_intr(struct fnic *fnic); @@ -373,7 +376,7 @@ void fnic_scsi_cleanup(struct fc_lport *); void fnic_scsi_abort_io(struct fc_lport *); void fnic_empty_scsi_cleanup(struct fc_lport *); void fnic_exch_mgr_reset(struct fc_lport *, u32, u32); -int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int); +int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do, unsigned int cq_index); int fnic_wq_cmpl_handler(struct fnic *fnic, int); int fnic_flogi_reg_handler(struct fnic *fnic, u32); void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, diff --git a/drivers/scsi/fnic/fnic_isr.c b/drivers/scsi/fnic/fnic_isr.c index dff9689023e4..ff85441c6cea 100644 --- a/drivers/scsi/fnic/fnic_isr.c +++ b/drivers/scsi/fnic/fnic_isr.c @@ -38,8 +38,13 @@ static irqreturn_t fnic_isr_legacy(int irq, void *data) fnic_log_q_error(fnic); } + if (pba & (1 << FNIC_INTX_DUMMY)) { + atomic64_inc(&fnic->fnic_stats.misc_stats.intx_dummy); + vnic_intr_return_all_credits(&fnic->intr[FNIC_INTX_DUMMY]); + } + if (pba & (1 << FNIC_INTX_WQ_RQ_COPYWQ)) { - work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions); + work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions, FNIC_MQ_CQ_INDEX); work_done += fnic_wq_cmpl_handler(fnic, -1); work_done += fnic_rq_cmpl_handler(fnic, -1); @@ -60,7 +65,7 @@ static irqreturn_t fnic_isr_msi(int irq, void *data) fnic->fnic_stats.misc_stats.last_isr_time = jiffies; atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count); - work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions); + work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions, FNIC_MQ_CQ_INDEX); work_done += fnic_wq_cmpl_handler(fnic, -1); work_done += fnic_rq_cmpl_handler(fnic, -1); @@ -109,12 +114,22 @@ static irqreturn_t fnic_isr_msix_wq_copy(int irq, void *data) { struct fnic *fnic = data; unsigned long wq_copy_work_done = 0; + int i; fnic->fnic_stats.misc_stats.last_isr_time = jiffies; atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count); - wq_copy_work_done = fnic_wq_copy_cmpl_handler(fnic, io_completions); - vnic_intr_return_credits(&fnic->intr[FNIC_MSIX_WQ_COPY], + i = irq - fnic->msix[0].irq_num; + if (i >= fnic->wq_copy_count + fnic->copy_wq_base || + i < 0 || fnic->msix[i].irq_num != irq) { + for (i = fnic->copy_wq_base; i < fnic->wq_copy_count + fnic->copy_wq_base ; i++) { + if (fnic->msix[i].irq_num == irq) + break; + } + } + + wq_copy_work_done = fnic_wq_copy_cmpl_handler(fnic, io_completions, i); + vnic_intr_return_credits(&fnic->intr[i], wq_copy_work_done, 1 /* unmask intr */, 1 /* reset intr timer */); @@ -128,7 +143,7 @@ static irqreturn_t fnic_isr_msix_err_notify(int irq, void *data) fnic->fnic_stats.misc_stats.last_isr_time = jiffies; atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count); - vnic_intr_return_all_credits(&fnic->intr[FNIC_MSIX_ERR_NOTIFY]); + vnic_intr_return_all_credits(&fnic->intr[fnic->err_intr_offset]); fnic_log_q_error(fnic); fnic_handle_link_event(fnic); @@ -186,26 +201,30 @@ int fnic_request_intr(struct fnic *fnic) fnic->msix[FNIC_MSIX_WQ].isr = fnic_isr_msix_wq; fnic->msix[FNIC_MSIX_WQ].devid = fnic; - sprintf(fnic->msix[FNIC_MSIX_WQ_COPY].devname, - "%.11s-scsi-wq", fnic->name); - fnic->msix[FNIC_MSIX_WQ_COPY].isr = fnic_isr_msix_wq_copy; - 
fnic->msix[FNIC_MSIX_WQ_COPY].devid = fnic; + for (i = fnic->copy_wq_base; i < fnic->wq_copy_count + fnic->copy_wq_base; i++) { + sprintf(fnic->msix[i].devname, + "%.11s-scsi-wq-%d", fnic->name, i-FNIC_MSIX_WQ_COPY); + fnic->msix[i].isr = fnic_isr_msix_wq_copy; + fnic->msix[i].devid = fnic; + } - sprintf(fnic->msix[FNIC_MSIX_ERR_NOTIFY].devname, + sprintf(fnic->msix[fnic->err_intr_offset].devname, "%.11s-err-notify", fnic->name); - fnic->msix[FNIC_MSIX_ERR_NOTIFY].isr = + fnic->msix[fnic->err_intr_offset].isr = fnic_isr_msix_err_notify; - fnic->msix[FNIC_MSIX_ERR_NOTIFY].devid = fnic; + fnic->msix[fnic->err_intr_offset].devid = fnic; - for (i = 0; i < ARRAY_SIZE(fnic->msix); i++) { - err = request_irq(pci_irq_vector(fnic->pdev, i), - fnic->msix[i].isr, 0, - fnic->msix[i].devname, - fnic->msix[i].devid); + for (i = 0; i < fnic->intr_count; i++) { + fnic->msix[i].irq_num = pci_irq_vector(fnic->pdev, i); + + err = request_irq(fnic->msix[i].irq_num, + fnic->msix[i].isr, 0, + fnic->msix[i].devname, + fnic->msix[i].devid); if (err) { - shost_printk(KERN_ERR, fnic->lport->host, - "MSIX: request_irq" - " failed %d\n", err); + FNIC_ISR_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "request_irq failed with error: %d\n", + err); fnic_free_intr(fnic); break; } @@ -220,44 +239,99 @@ int fnic_request_intr(struct fnic *fnic) return err; } -int fnic_set_intr_mode(struct fnic *fnic) +int fnic_set_intr_mode_msix(struct fnic *fnic) { unsigned int n = ARRAY_SIZE(fnic->rq); unsigned int m = ARRAY_SIZE(fnic->wq); unsigned int o = ARRAY_SIZE(fnic->hw_copy_wq); + unsigned int min_irqs = n + m + 1 + 1; /*rq, raw wq, wq, err*/ /* - * Set interrupt mode (INTx, MSI, MSI-X) depending - * system capabilities. - * - * Try MSI-X first - * * We need n RQs, m WQs, o Copy WQs, n+m+o CQs, and n+m+o+1 INTRs * (last INTR is used for WQ/RQ errors and notification area) */ - if (fnic->rq_count >= n && - fnic->raw_wq_count >= m && - fnic->wq_copy_count >= o && - fnic->cq_count >= n + m + o) { - int vecs = n + m + o + 1; - - if (pci_alloc_irq_vectors(fnic->pdev, vecs, vecs, - PCI_IRQ_MSIX) == vecs) { + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "rq-array size: %d wq-array size: %d copy-wq array size: %d\n", + n, m, o); + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "rq_count: %d raw_wq_count: %d wq_copy_count: %d cq_count: %d\n", + fnic->rq_count, fnic->raw_wq_count, + fnic->wq_copy_count, fnic->cq_count); + + if (fnic->rq_count <= n && fnic->raw_wq_count <= m && + fnic->wq_copy_count <= o) { + int vec_count = 0; + int vecs = fnic->rq_count + fnic->raw_wq_count + fnic->wq_copy_count + 1; + + vec_count = pci_alloc_irq_vectors(fnic->pdev, min_irqs, vecs, + PCI_IRQ_MSIX | PCI_IRQ_AFFINITY); + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "allocated %d MSI-X vectors\n", + vec_count); + + if (vec_count > 0) { + if (vec_count < vecs) { + FNIC_ISR_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "interrupts number mismatch: vec_count: %d vecs: %d\n", + vec_count, vecs); + if (vec_count < min_irqs) { + FNIC_ISR_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "no interrupts for copy wq\n"); + return 1; + } + } + fnic->rq_count = n; fnic->raw_wq_count = m; - fnic->wq_copy_count = o; - fnic->wq_count = m + o; - fnic->cq_count = n + m + o; - fnic->intr_count = vecs; - fnic->err_intr_offset = FNIC_MSIX_ERR_NOTIFY; - - FNIC_ISR_DBG(KERN_DEBUG, fnic->lport->host, - "Using MSI-X Interrupts\n"); - vnic_dev_set_intr_mode(fnic->vdev, - VNIC_DEV_INTR_MODE_MSIX); + fnic->copy_wq_base = 
fnic->rq_count + fnic->raw_wq_count; + fnic->wq_copy_count = vec_count - n - m - 1; + fnic->wq_count = fnic->raw_wq_count + fnic->wq_copy_count; + if (fnic->cq_count != vec_count - 1) { + FNIC_ISR_DBG(KERN_ERR, fnic->lport->host, fnic->fnic_num, + "CQ count: %d does not match MSI-X vector count: %d\n", + fnic->cq_count, vec_count); + fnic->cq_count = vec_count - 1; + } + fnic->intr_count = vec_count; + fnic->err_intr_offset = fnic->rq_count + fnic->wq_count; + + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "rq_count: %d raw_wq_count: %d copy_wq_base: %d\n", + fnic->rq_count, + fnic->raw_wq_count, fnic->copy_wq_base); + + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "wq_copy_count: %d wq_count: %d cq_count: %d\n", + fnic->wq_copy_count, + fnic->wq_count, fnic->cq_count); + + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "intr_count: %d err_intr_offset: %u", + fnic->intr_count, + fnic->err_intr_offset); + + vnic_dev_set_intr_mode(fnic->vdev, VNIC_DEV_INTR_MODE_MSIX); + FNIC_ISR_DBG(KERN_INFO, fnic->lport->host, fnic->fnic_num, + "fnic using MSI-X\n"); return 0; } } + return 1; +} + +int fnic_set_intr_mode(struct fnic *fnic) +{ + int ret_status = 0; + + /* + * Set interrupt mode (INTx, MSI, MSI-X) depending + * system capabilities. + * + * Try MSI-X first + */ + ret_status = fnic_set_intr_mode_msix(fnic); + if (ret_status == 0) + return ret_status; /* * Next try MSI @@ -277,7 +351,7 @@ int fnic_set_intr_mode(struct fnic *fnic) fnic->intr_count = 1; fnic->err_intr_offset = 0; - FNIC_ISR_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_ISR_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "Using MSI Interrupts\n"); vnic_dev_set_intr_mode(fnic->vdev, VNIC_DEV_INTR_MODE_MSI); @@ -303,7 +377,7 @@ int fnic_set_intr_mode(struct fnic *fnic) fnic->cq_count = 3; fnic->intr_count = 3; - FNIC_ISR_DBG(KERN_DEBUG, fnic->lport->host, + FNIC_ISR_DBG(KERN_DEBUG, fnic->lport->host, fnic->fnic_num, "Using Legacy Interrupts\n"); vnic_dev_set_intr_mode(fnic->vdev, VNIC_DEV_INTR_MODE_INTX); diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index 021ddf33c4cc..32ae4b32e5ba 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -476,6 +476,7 @@ static int fnic_cleanup(struct fnic *fnic) { unsigned int i; int err; + int raw_wq_rq_counts; vnic_dev_disable(fnic->vdev); for (i = 0; i < fnic->intr_count; i++) @@ -495,10 +496,11 @@ static int fnic_cleanup(struct fnic *fnic) err = vnic_wq_copy_disable(&fnic->hw_copy_wq[i]); if (err) return err; + raw_wq_rq_counts = fnic->raw_wq_count + fnic->rq_count; + fnic_wq_copy_cmpl_handler(fnic, -1, i + raw_wq_rq_counts); } /* Clean up completed IOs and FCS frames */ - fnic_wq_copy_cmpl_handler(fnic, io_completions); fnic_wq_cmpl_handler(fnic, -1); fnic_rq_cmpl_handler(fnic, -1); diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index 3498a8d670b1..f32781f8fdd0 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -1303,10 +1303,8 @@ static int fnic_fcpio_cmpl_handler(struct vnic_dev *vdev, * fnic_wq_copy_cmpl_handler * Routine to process wq copy */ -int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do) +int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do, unsigned int cq_index) { - unsigned int wq_work_done = 0; - unsigned int i, cq_index; unsigned int cur_work_done; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; u64 start_jiffies = 0; @@ -1314,28 +1312,20 @@ int 
fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do) u64 delta_jiffies = 0; u64 delta_ms = 0; - for (i = 0; i < fnic->wq_copy_count; i++) { - cq_index = i + fnic->raw_wq_count + fnic->rq_count; - - start_jiffies = jiffies; - cur_work_done = vnic_cq_copy_service(&fnic->cq[cq_index], - fnic_fcpio_cmpl_handler, - copy_work_to_do); - end_jiffies = jiffies; - - wq_work_done += cur_work_done; - delta_jiffies = end_jiffies - start_jiffies; - if (delta_jiffies > - (u64) atomic64_read(&misc_stats->max_isr_jiffies)) { - atomic64_set(&misc_stats->max_isr_jiffies, - delta_jiffies); - delta_ms = jiffies_to_msecs(delta_jiffies); - atomic64_set(&misc_stats->max_isr_time_ms, delta_ms); - atomic64_set(&misc_stats->corr_work_done, - cur_work_done); - } + start_jiffies = jiffies; + cur_work_done = vnic_cq_copy_service(&fnic->cq[cq_index], + fnic_fcpio_cmpl_handler, + copy_work_to_do); + end_jiffies = jiffies; + delta_jiffies = end_jiffies - start_jiffies; + if (delta_jiffies > (u64) atomic64_read(&misc_stats->max_isr_jiffies)) { + atomic64_set(&misc_stats->max_isr_jiffies, delta_jiffies); + delta_ms = jiffies_to_msecs(delta_jiffies); + atomic64_set(&misc_stats->max_isr_time_ms, delta_ms); + atomic64_set(&misc_stats->corr_work_done, cur_work_done); } - return wq_work_done; + + return cur_work_done; } static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) diff --git a/drivers/scsi/fnic/fnic_stats.h b/drivers/scsi/fnic/fnic_stats.h index bdf639eef8cf..07d1556e3c32 100644 --- a/drivers/scsi/fnic/fnic_stats.h +++ b/drivers/scsi/fnic/fnic_stats.h @@ -103,6 +103,7 @@ struct misc_stats { atomic64_t rport_not_ready; atomic64_t frame_errors; atomic64_t current_port_speed; + atomic64_t intx_dummy; }; struct fnic_stats { From patchwork Wed Dec 6 18:46:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 750973 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b="jSpF1ogX" Received: from rcdn-iport-1.cisco.com (rcdn-iport-1.cisco.com [173.37.86.72]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 234111991; Wed, 6 Dec 2023 10:47:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=7162; q=dns/txt; s=iport; t=1701888423; x=1703098023; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=xqNh/RJVPfnAzy/1JnCYGT8UAMYR0iqPSmd6acxcNNo=; b=jSpF1ogXxocoQLnoYSgUajsNBqPayh0cdJYnhjL8o0S/owZ3IFYCntwh WLfAiylWUIfPB7DankKbvPKM8VHQ+PO9z4LbFxhgess8GqIZ4korViHAL NPuLNvaJxcKRlUywI4y794ncTiDszPa9rY6v32ZA4Q0As1aYPczXca0K8 A=; X-CSE-ConnectionGUID: fAIMvJ3eT3qm7WFV0rnkuQ== X-CSE-MsgGUID: glW4ZSuvRruYZVejIvUutA== X-IronPort-AV: E=Sophos;i="6.04,256,1695686400"; d="scan'208";a="155368007" Received: from alln-core-4.cisco.com ([173.36.13.137]) by rcdn-iport-1.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2023 18:47:02 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by alln-core-4.cisco.com (8.15.2/8.15.2) with ESMTPSA id 3B6IkHD5010013 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 6 Dec 2023 18:47:01 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, 
Karan Tilak Kumar Subject: [PATCH v5 09/13] scsi: fnic: Remove usage of host_lock Date: Wed, 6 Dec 2023 10:46:11 -0800 Message-Id: <20231206184615.878755-10-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231206184615.878755-1-kartilak@cisco.com> References: <20231206184615.878755-1-kartilak@cisco.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: alln-core-4.cisco.com Remove usage of host_lock. Replace with fnic_lock, where necessary. fnic does not use host_lock. fnic uses fnic_lock. Use fnic lock to protect fnic members in fnic_queuecommand. Add log messages in error cases. Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- Changes between v4 and v5: Incorporate review comments from Martin: Modify patch commits to include a "---" separator. Changes between v2 and v3: Squash the following commits into one: scsi: fnic: Remove usage of host_lock scsi: fnic: Use fnic_lock to protect fnic structures in queuecommand --- drivers/scsi/fnic/fnic_scsi.c | 55 ++++++++++++++++++++--------------- 1 file changed, 31 insertions(+), 24 deletions(-) diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index f32781f8fdd0..ffdbdbfcdf57 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -170,17 +170,14 @@ __fnic_set_state_flags(struct fnic *fnic, unsigned long st_flags, unsigned long clearbits) { unsigned long flags = 0; - unsigned long host_lock_flags = 0; spin_lock_irqsave(&fnic->fnic_lock, flags); - spin_lock_irqsave(fnic->lport->host->host_lock, host_lock_flags); if (clearbits) fnic->state_flags &= ~st_flags; else fnic->state_flags |= st_flags; - spin_unlock_irqrestore(fnic->lport->host->host_lock, host_lock_flags); spin_unlock_irqrestore(&fnic->fnic_lock, flags); return; @@ -427,14 +424,27 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) int io_lock_acquired = 0; struct fc_rport_libfc_priv *rp; - if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) + spin_lock_irqsave(&fnic->fnic_lock, flags); + + if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: fnic IO blocked flags: 0x%lx. Returning SCSI_MLQUEUE_HOST_BUSY\n", + fnic->fnic_num, __func__, __LINE__, fnic->state_flags); return SCSI_MLQUEUE_HOST_BUSY; + } - if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_FWRESET))) + if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_FWRESET))) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: fnic flags: 0x%lx. 
Returning SCSI_MLQUEUE_HOST_BUSY\n", + fnic->fnic_num, __func__, __LINE__, fnic->state_flags); return SCSI_MLQUEUE_HOST_BUSY; + } rport = starget_to_rport(scsi_target(sc->device)); if (!rport) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "returning DID_NO_CONNECT for IO as rport is NULL\n"); sc->result = DID_NO_CONNECT << 16; @@ -444,6 +454,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) ret = fc_remote_port_chkready(rport); if (ret) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "rport is not ready\n"); atomic64_inc(&fnic_stats->misc_stats.rport_not_ready); @@ -454,6 +465,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) rp = rport->dd_data; if (!rp || rp->rp_state == RPORT_ST_DELETE) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "rport 0x%x removed, returning DID_NO_CONNECT\n", rport->port_id); @@ -465,6 +477,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) } if (rp->rp_state != RPORT_ST_READY) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "rport 0x%x in state 0x%x, returning DID_IMM_RETRY\n", rport->port_id, rp->rp_state); @@ -474,17 +487,17 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) return 0; } - if (lp->state != LPORT_ST_READY || !(lp->link_up)) + if (lp->state != LPORT_ST_READY || !(lp->link_up)) { + spin_unlock_irqrestore(&fnic->fnic_lock, flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: state not ready: %d/link not up: %d Returning HOST_BUSY\n", + fnic->fnic_num, __func__, __LINE__, lp->state, lp->link_up); return SCSI_MLQUEUE_HOST_BUSY; + } atomic_inc(&fnic->in_flight); - /* - * Release host lock, use driver resource specific locks from here. - * Don't re-enable interrupts in case they were disabled prior to the - * caller disabling them. 
- */ - spin_unlock(lp->host->host_lock); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); fnic_priv(sc)->state = FNIC_IOREQ_NOT_INITED; fnic_priv(sc)->flags = FNIC_NO_FLAGS; @@ -569,8 +582,6 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) mempool_free(io_req, fnic->io_req_pool); } atomic_dec(&fnic->in_flight); - /* acquire host lock before returning to SCSI */ - spin_lock(lp->host->host_lock); return ret; } else { atomic64_inc(&fnic_stats->io_stats.active_ios); @@ -598,8 +609,6 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) spin_unlock_irqrestore(io_lock, flags); atomic_dec(&fnic->in_flight); - /* acquire host lock before returning to SCSI */ - spin_lock(lp->host->host_lock); return ret; } @@ -1477,18 +1486,17 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, struct fnic_io_req *io_req) { struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; - struct Scsi_Host *host = fnic->lport->host; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; unsigned long flags; - spin_lock_irqsave(host->host_lock, flags); + spin_lock_irqsave(&fnic->fnic_lock, flags); if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { - spin_unlock_irqrestore(host->host_lock, flags); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); return 1; } else atomic_inc(&fnic->in_flight); - spin_unlock_irqrestore(host->host_lock, flags); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); @@ -1923,20 +1931,19 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic, struct fnic_io_req *io_req) { struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; - struct Scsi_Host *host = fnic->lport->host; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; struct scsi_lun fc_lun; int ret = 0; unsigned long intr_flags; - spin_lock_irqsave(host->host_lock, intr_flags); + spin_lock_irqsave(&fnic->fnic_lock, intr_flags); if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { - spin_unlock_irqrestore(host->host_lock, intr_flags); + spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags); return FAILED; } else atomic_inc(&fnic->in_flight); - spin_unlock_irqrestore(host->host_lock, intr_flags); + spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags); spin_lock_irqsave(&fnic->wq_copy_lock[0], intr_flags); From patchwork Wed Dec 6 18:46:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 750972 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b="frm3XDb5" Received: from rcdn-iport-9.cisco.com (rcdn-iport-9.cisco.com [173.37.86.80]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D9B82D6D; Wed, 6 Dec 2023 10:47:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=57507; q=dns/txt; s=iport; t=1701888429; x=1703098029; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9QOnGHdkx3g/ByKajcXBeYDmMM4/dALltuNEDcn6M5E=; b=frm3XDb5xmuO2TwCd5wCzzIXcTZIVpLWPzLKTTXu/XyamgidZcpwkiIz S8NXPu6HFUEDd+E/VI7XbaVeS/l85H5OCpxVlHXYWHMA5XZ93/tE73oC9 gDlnyNhu7+aWbDmq+mscaYvU6K9EjPqRk1dmxczdz1aRZPm/heeS6cVTU 0=; X-CSE-ConnectionGUID: hMaQfV/DQM+LtEmO8DfbSA== X-CSE-MsgGUID: DG84a5UqTXWWqcmHw2DOiQ== X-IronPort-AV: E=Sophos;i="6.04,256,1695686400"; d="scan'208";a="154982435" Received: from alln-core-4.cisco.com ([173.36.13.137]) by rcdn-iport-9.cisco.com 
with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2023 18:47:07 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by alln-core-4.cisco.com (8.15.2/8.15.2) with ESMTPSA id 3B6IkHD7010013 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 6 Dec 2023 18:47:06 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar , kernel test robot Subject: [PATCH v5 11/13] scsi: fnic: Add support for multiqueue (MQ) in fnic driver Date: Wed, 6 Dec 2023 10:46:13 -0800 Message-Id: <20231206184615.878755-12-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231206184615.878755-1-kartilak@cisco.com> References: <20231206184615.878755-1-kartilak@cisco.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: alln-core-4.cisco.com Implement support for MQ in fnic driver: The block multiqueue layer issues IO to the fnic driver with an MQ tag. Use the mqtag and derive a tag from it. Derive the hardware queue from the mqtag and use it in all paths. Modify queuecommand to handle mqtag. Replace wq and cq indices to support MQ. Replace the zeroth queue with a hardware queue. Implement spin locks on a per hardware queue basis. Replace io_lock with per hardware queue spinlock. Implement out of range tag checks. Allocate an io_req_table to track status of the io_req. Test the driver by building it, loading it, and configuring 64 queues in UCSM. Issue IOs using Medusa on multiple fnics. Enable/disable links to exercise the abort and clean up path. Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202310300032.2awCqkfn-lkp@intel.com/ Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Tested-by: Karan Tilak Kumar Signed-off-by: Karan Tilak Kumar --- Changes between v4 and v5: Incorporate review comments from Martin: Modify patch commits to include a "---" separator. Changes between v2 and v3: Incorporate the following review comments from Hannes: Replace cpy_wq_base with copy_wq_base. Remove hwq as an argument to fnic_queuecommand_int. Suppress warning from kernel test robot. Replace new shost_printk comments with FNIC_SCSI_DBG. Replace fnic_queuecommand_int with fnic_queuecommand. Changes between v1 and v2: Incorporate the following review comments from Bart: Remove outdated comment, Remove unnecessary out of range tag checks, Remove unnecessary local variable, Modify function name. 
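The MQ path below leans on the block layer's unique-tag helpers: blk_mq_unique_tag() packs the hardware queue index into the upper 16 bits and the per-queue tag into the lower 16 bits, and blk_mq_unique_tag_to_hwq()/blk_mq_unique_tag_to_tag() recover the two halves. A standalone sketch of that arithmetic follows; it only mirrors the packing for illustration and is not the kernel implementation.

/*
 * Standalone sketch of the mqtag arithmetic used by the MQ path:
 * hwq in the upper 16 bits, per-queue tag in the lower 16 bits.
 */
#include <stdio.h>
#include <stdint.h>

#define UNIQUE_TAG_BITS 16
#define UNIQUE_TAG_MASK ((1U << UNIQUE_TAG_BITS) - 1)

static uint32_t make_unique_tag(uint16_t hwq, uint16_t tag)
{
	return ((uint32_t)hwq << UNIQUE_TAG_BITS) | tag;
}

static uint16_t unique_tag_to_hwq(uint32_t mqtag)
{
	return mqtag >> UNIQUE_TAG_BITS;
}

static uint16_t unique_tag_to_tag(uint32_t mqtag)
{
	return mqtag & UNIQUE_TAG_MASK;
}

int main(void)
{
	uint32_t mqtag = make_unique_tag(3, 42);	/* hwq 3, tag 42 */

	printf("mqtag 0x%x -> hwq %u tag %u\n", mqtag,
	       (unsigned int)unique_tag_to_hwq(mqtag),
	       (unsigned int)unique_tag_to_tag(mqtag));
	return 0;
}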
--- drivers/scsi/fnic/fnic.h | 2 - drivers/scsi/fnic/fnic_io.h | 2 + drivers/scsi/fnic/fnic_main.c | 3 - drivers/scsi/fnic/fnic_scsi.c | 567 ++++++++++++++++++++-------------- 4 files changed, 340 insertions(+), 234 deletions(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index 5777a54c99c3..d5b9590894d2 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -36,7 +36,6 @@ #define FNIC_MIN_IO_REQ 256 /* Min IO throttle count */ #define FNIC_MAX_IO_REQ 1024 /* scsi_cmnd tag map entries */ #define FNIC_DFLT_IO_REQ 256 /* Default scsi_cmnd tag map entries */ -#define FNIC_IO_LOCKS 64 /* IO locks: power of 2 */ #define FNIC_DFLT_QUEUE_DEPTH 256 #define FNIC_STATS_RATE_LIMIT 4 /* limit rate at which stats are pulled up */ @@ -298,7 +297,6 @@ struct fnic { struct fnic_host_tag *tags; mempool_t *io_req_pool; mempool_t *io_sgl_pool[FNIC_SGL_NUM_CACHES]; - spinlock_t io_req_lock[FNIC_IO_LOCKS]; /* locks for scsi cmnds */ unsigned int copy_wq_base; struct work_struct link_work; diff --git a/drivers/scsi/fnic/fnic_io.h b/drivers/scsi/fnic/fnic_io.h index f4c8769df312..5895ead20e14 100644 --- a/drivers/scsi/fnic/fnic_io.h +++ b/drivers/scsi/fnic/fnic_io.h @@ -52,6 +52,8 @@ struct fnic_io_req { unsigned long start_time; /* in jiffies */ struct completion *abts_done; /* completion for abts */ struct completion *dr_done; /* completion for device reset */ + unsigned int tag; + struct scsi_cmnd *sc; /* midlayer's cmd pointer */ }; enum fnic_port_speeds { diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index 4ce933d5c15f..5ed1d897311a 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -794,9 +794,6 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) fnic->fw_ack_index[i] = -1; } - for (i = 0; i < FNIC_IO_LOCKS; i++) - spin_lock_init(&fnic->io_req_lock[i]); - err = -ENOMEM; fnic->io_req_pool = mempool_create_slab_pool(2, fnic_io_req_cache); if (!fnic->io_req_pool) diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index ffdbdbfcdf57..648cb8aff92c 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -92,20 +92,6 @@ static const char *fnic_fcpio_status_to_str(unsigned int status) static void fnic_cleanup_io(struct fnic *fnic); -static inline spinlock_t *fnic_io_lock_hash(struct fnic *fnic, - struct scsi_cmnd *sc) -{ - u32 hash = scsi_cmd_to_rq(sc)->tag & (FNIC_IO_LOCKS - 1); - - return &fnic->io_req_lock[hash]; -} - -static inline spinlock_t *fnic_io_lock_tag(struct fnic *fnic, - int tag) -{ - return &fnic->io_req_lock[tag & (FNIC_IO_LOCKS - 1)]; -} - /* * Unmap the data buffer and sense buffer for an io_req, * also unmap and free the device-private scatter/gather list. @@ -129,23 +115,23 @@ static void fnic_release_ioreq_buf(struct fnic *fnic, } /* Free up Copy Wq descriptors. 
Called with copy_wq lock held */ -static int free_wq_copy_descs(struct fnic *fnic, struct vnic_wq_copy *wq) +static int free_wq_copy_descs(struct fnic *fnic, struct vnic_wq_copy *wq, unsigned int hwq) { /* if no Ack received from firmware, then nothing to clean */ - if (!fnic->fw_ack_recd[0]) + if (!fnic->fw_ack_recd[hwq]) return 1; /* * Update desc_available count based on number of freed descriptors * Account for wraparound */ - if (wq->to_clean_index <= fnic->fw_ack_index[0]) - wq->ring.desc_avail += (fnic->fw_ack_index[0] + if (wq->to_clean_index <= fnic->fw_ack_index[hwq]) + wq->ring.desc_avail += (fnic->fw_ack_index[hwq] - wq->to_clean_index + 1); else wq->ring.desc_avail += (wq->ring.desc_count - wq->to_clean_index - + fnic->fw_ack_index[0] + 1); + + fnic->fw_ack_index[hwq] + 1); /* * just bump clean index to ack_index+1 accounting for wraparound @@ -153,10 +139,10 @@ static int free_wq_copy_descs(struct fnic *fnic, struct vnic_wq_copy *wq) * to_clean_index and fw_ack_index, both inclusive */ wq->to_clean_index = - (fnic->fw_ack_index[0] + 1) % wq->ring.desc_count; + (fnic->fw_ack_index[hwq] + 1) % wq->ring.desc_count; /* we have processed the acks received so far */ - fnic->fw_ack_recd[0] = 0; + fnic->fw_ack_recd[hwq] = 0; return 0; } @@ -207,7 +193,7 @@ int fnic_fw_reset_handler(struct fnic *fnic) spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0]) - free_wq_copy_descs(fnic, wq); + free_wq_copy_descs(fnic, wq, 0); if (!vnic_wq_copy_desc_avail(wq)) ret = -EAGAIN; @@ -253,7 +239,7 @@ int fnic_flogi_reg_handler(struct fnic *fnic, u32 fc_id) spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0]) - free_wq_copy_descs(fnic, wq); + free_wq_copy_descs(fnic, wq, 0); if (!vnic_wq_copy_desc_avail(wq)) { ret = -EAGAIN; @@ -303,7 +289,9 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, struct vnic_wq_copy *wq, struct fnic_io_req *io_req, struct scsi_cmnd *sc, - int sg_count) + int sg_count, + uint32_t mqtag, + uint16_t hwq) { struct scatterlist *sg; struct fc_rport *rport = starget_to_rport(scsi_target(sc->device)); @@ -311,7 +299,6 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, struct host_sg_desc *desc; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; unsigned int i; - unsigned long intr_flags; int flags; u8 exch_flags; struct scsi_lun fc_lun; @@ -351,13 +338,10 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, int_to_scsilun(sc->device->lun, &fc_lun); /* Enqueue the descriptor in the Copy WQ */ - spin_lock_irqsave(&fnic->wq_copy_lock[0], intr_flags); - - if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0]) - free_wq_copy_descs(fnic, wq); + if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[hwq]) + free_wq_copy_descs(fnic, wq, hwq); if (unlikely(!vnic_wq_copy_desc_avail(wq))) { - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], intr_flags); FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, "fnic_queue_wq_copy_desc failure - no descriptors\n"); atomic64_inc(&misc_stats->io_cpwq_alloc_failures); @@ -375,7 +359,7 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, (rp->flags & FC_RP_FLAGS_RETRY)) exch_flags |= FCPIO_ICMND_SRFLAG_RETRY; - fnic_queue_wq_copy_desc_icmnd_16(wq, scsi_cmd_to_rq(sc)->tag, + fnic_queue_wq_copy_desc_icmnd_16(wq, mqtag, 0, exch_flags, io_req->sgl_cnt, SCSI_SENSE_BUFFERSIZE, io_req->sgl_list_pa, @@ -396,34 +380,30 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic, 
atomic64_set(&fnic->fnic_stats.fw_stats.max_fw_reqs, atomic64_read(&fnic->fnic_stats.fw_stats.active_fw_reqs)); - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], intr_flags); return 0; } -/* - * fnic_queuecommand - * Routine to send a scsi cdb - * Called with host_lock held and interrupts disabled. - */ -static int fnic_queuecommand_lck(struct scsi_cmnd *sc) +int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc) { + struct request *const rq = scsi_cmd_to_rq(sc); + uint32_t mqtag = 0; void (*done)(struct scsi_cmnd *) = scsi_done; - const int tag = scsi_cmd_to_rq(sc)->tag; struct fc_lport *lp = shost_priv(sc->device->host); struct fc_rport *rport; struct fnic_io_req *io_req = NULL; struct fnic *fnic = lport_priv(lp); struct fnic_stats *fnic_stats = &fnic->fnic_stats; struct vnic_wq_copy *wq; - int ret; + int ret = 1; u64 cmd_trace; int sg_count = 0; unsigned long flags = 0; unsigned long ptr; - spinlock_t *io_lock = NULL; int io_lock_acquired = 0; struct fc_rport_libfc_priv *rp; + uint16_t hwq = 0; + mqtag = blk_mq_unique_tag(rq); spin_lock_irqsave(&fnic->fnic_lock, flags); if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { @@ -514,7 +494,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) sg_count = scsi_dma_map(sc); if (sg_count < 0) { FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no, - tag, sc, 0, sc->cmnd[0], sg_count, fnic_priv(sc)->state); + mqtag, sc, 0, sc->cmnd[0], sg_count, fnic_priv(sc)->state); mempool_free(io_req, fnic->io_req_pool); goto out; } @@ -549,11 +529,10 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) } /* - * Will acquire lock defore setting to IO initialized. + * Will acquire lock before setting to IO initialized. */ - - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + hwq = blk_mq_unique_tag_to_hwq(mqtag); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); /* initialize rest of io_req */ io_lock_acquired = 1; @@ -562,21 +541,34 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) fnic_priv(sc)->state = FNIC_IOREQ_CMD_PENDING; fnic_priv(sc)->io_req = io_req; fnic_priv(sc)->flags |= FNIC_IO_INITIALIZED; + io_req->sc = sc; + + if (fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] != NULL) { + WARN(1, "fnic<%d>: %s: hwq: %d tag 0x%x already exists\n", + fnic->fnic_num, __func__, hwq, blk_mq_unique_tag_to_tag(mqtag)); + return SCSI_MLQUEUE_HOST_BUSY; + } + + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = io_req; + io_req->tag = mqtag; /* create copy wq desc and enqueue it */ - wq = &fnic->hw_copy_wq[0]; - ret = fnic_queue_wq_copy_desc(fnic, wq, io_req, sc, sg_count); + wq = &fnic->hw_copy_wq[hwq]; + atomic64_inc(&fnic_stats->io_stats.ios[hwq]); + ret = fnic_queue_wq_copy_desc(fnic, wq, io_req, sc, sg_count, mqtag, hwq); if (ret) { /* * In case another thread cancelled the request, * refetch the pointer under the lock. 
*/ FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no, - tag, sc, 0, 0, 0, fnic_flags_and_state(sc)); + mqtag, sc, 0, 0, 0, fnic_flags_and_state(sc)); io_req = fnic_priv(sc)->io_req; fnic_priv(sc)->io_req = NULL; + if (io_req) + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = NULL; fnic_priv(sc)->state = FNIC_IOREQ_CMD_COMPLETE; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (io_req) { fnic_release_ioreq_buf(fnic, io_req, sc); mempool_free(io_req, fnic->io_req_pool); @@ -601,18 +593,17 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc) sc->cmnd[5]); FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no, - tag, sc, io_req, sg_count, cmd_trace, + mqtag, sc, io_req, sg_count, cmd_trace, fnic_flags_and_state(sc)); /* if only we issued IO, will we have the io lock */ if (io_lock_acquired) - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); return ret; } -DEF_SCSI_QCMD(fnic_queuecommand) /* * fnic_fcpio_fw_reset_cmpl_handler @@ -789,20 +780,21 @@ static inline void fnic_fcpio_ack_handler(struct fnic *fnic, u16 request_out = desc->u.ack.request_out; unsigned long flags; u64 *ox_id_tag = (u64 *)(void *)desc; + unsigned int wq_index = cq_index; /* mark the ack state */ - wq = &fnic->hw_copy_wq[cq_index - fnic->raw_wq_count - fnic->rq_count]; - spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); + wq = &fnic->hw_copy_wq[cq_index]; + spin_lock_irqsave(&fnic->wq_copy_lock[wq_index], flags); fnic->fnic_stats.misc_stats.last_ack_time = jiffies; if (is_ack_index_in_range(wq, request_out)) { - fnic->fw_ack_index[0] = request_out; - fnic->fw_ack_recd[0] = 1; + fnic->fw_ack_index[wq_index] = request_out; + fnic->fw_ack_recd[wq_index] = 1; } else atomic64_inc( &fnic->fnic_stats.misc_stats.ack_index_out_of_range); - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[wq_index], flags); FNIC_TRACE(fnic_fcpio_ack_handler, fnic->lport->host->host_no, 0, 0, ox_id_tag[2], ox_id_tag[3], ox_id_tag[4], ox_id_tag[5]); @@ -812,12 +804,12 @@ static inline void fnic_fcpio_ack_handler(struct fnic *fnic, * fnic_fcpio_icmnd_cmpl_handler * Routine to handle icmnd completions */ -static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, +static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, unsigned int cq_index, struct fcpio_fw_req *desc) { u8 type; u8 hdr_status; - struct fcpio_tag tag; + struct fcpio_tag ftag; u32 id; u64 xfer_len = 0; struct fcpio_icmnd_cmpl *icmnd_cmpl; @@ -825,27 +817,49 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, struct scsi_cmnd *sc; struct fnic_stats *fnic_stats = &fnic->fnic_stats; unsigned long flags; - spinlock_t *io_lock; u64 cmd_trace; unsigned long start_time; unsigned long io_duration_time; + unsigned int hwq = 0; + unsigned int mqtag = 0; + unsigned int tag = 0; /* Decode the cmpl description to get the io_req id */ - fcpio_header_dec(&desc->hdr, &type, &hdr_status, &tag); - fcpio_tag_id_dec(&tag, &id); + fcpio_header_dec(&desc->hdr, &type, &hdr_status, &ftag); + fcpio_tag_id_dec(&ftag, &id); icmnd_cmpl = &desc->u.icmnd_cmpl; - if (id >= fnic->fnic_max_tag_id) { - shost_printk(KERN_ERR, fnic->lport->host, - "Tag out of range tag %x hdr status = %s\n", - id, fnic_fcpio_status_to_str(hdr_status)); + mqtag = id; + tag = blk_mq_unique_tag_to_tag(mqtag); + hwq = blk_mq_unique_tag_to_hwq(mqtag); + + if (hwq != cq_index) { + FNIC_SCSI_DBG(KERN_ERR, 
fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, cq_index); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hdr status: %s icmnd completion on the wrong queue\n", + fnic->fnic_num, __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); + } + + if (tag >= fnic->fnic_max_tag_id) { + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, cq_index); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hdr status: %s Out of range tag\n", + fnic->fnic_num, __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); return; } + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); sc = scsi_host_find_tag(fnic->lport->host, id); WARN_ON_ONCE(!sc); if (!sc) { atomic64_inc(&fnic_stats->io_stats.sc_null); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); shost_printk(KERN_ERR, fnic->lport->host, "icmnd_cmpl sc is null - " "hdr status = %s tag = 0x%x desc = 0x%p\n", @@ -861,14 +875,19 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, return; } - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); io_req = fnic_priv(sc)->io_req; + if (fnic->sw_copy_wq[hwq].io_req_table[tag] != io_req) { + WARN(1, "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x io_req tag mismatch\n", + __func__, __LINE__, hwq, mqtag, tag); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); + return; + } + WARN_ON_ONCE(!io_req); if (!io_req) { atomic64_inc(&fnic_stats->io_stats.ioreq_null); fnic_priv(sc)->flags |= FNIC_IO_REQ_NULL; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); shost_printk(KERN_ERR, fnic->lport->host, "icmnd_cmpl io_req is null - " "hdr status = %s tag = 0x%x sc 0x%p\n", @@ -892,7 +911,7 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, */ fnic_priv(sc)->flags |= FNIC_IO_DONE; fnic_priv(sc)->flags |= FNIC_IO_ABTS_PENDING; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if(FCPIO_ABORTED == hdr_status) fnic_priv(sc)->flags |= FNIC_IO_ABORTED; @@ -980,7 +999,11 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, /* Break link with the SCSI command */ fnic_priv(sc)->io_req = NULL; + io_req->sc = NULL; fnic_priv(sc)->flags |= FNIC_IO_DONE; + fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL; + + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (hdr_status != FCPIO_SUCCESS) { atomic64_inc(&fnic_stats->io_stats.io_failures); @@ -1014,7 +1037,6 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, /* Call SCSI completion function to complete the IO */ scsi_done(sc); - spin_unlock_irqrestore(io_lock, flags); mempool_free(io_req, fnic->io_req_pool); @@ -1051,12 +1073,12 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, /* fnic_fcpio_itmf_cmpl_handler * Routine to handle itmf completions */ -static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, +static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_index, struct fcpio_fw_req *desc) { u8 type; u8 hdr_status; - struct fcpio_tag tag; + struct fcpio_tag ftag; u32 id; struct scsi_cmnd *sc; struct fnic_io_req *io_req; @@ -1065,35 +1087,76 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats; struct misc_stats *misc_stats = 
&fnic->fnic_stats.misc_stats; unsigned long flags; - spinlock_t *io_lock; unsigned long start_time; + unsigned int hwq = cq_index; + unsigned int mqtag; + unsigned int tag; - fcpio_header_dec(&desc->hdr, &type, &hdr_status, &tag); - fcpio_tag_id_dec(&tag, &id); + fcpio_header_dec(&desc->hdr, &type, &hdr_status, &ftag); + fcpio_tag_id_dec(&ftag, &id); - if ((id & FNIC_TAG_MASK) >= fnic->fnic_max_tag_id) { - shost_printk(KERN_ERR, fnic->lport->host, - "Tag out of range tag %x hdr status = %s\n", - id, fnic_fcpio_status_to_str(hdr_status)); + mqtag = id & FNIC_TAG_MASK; + tag = blk_mq_unique_tag_to_tag(id & FNIC_TAG_MASK); + hwq = blk_mq_unique_tag_to_hwq(id & FNIC_TAG_MASK); + + if (hwq != cq_index) { + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ", + __func__, __LINE__, hwq, mqtag, tag, cq_index); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "%s: %d: hdr status: %s ITMF completion on the wrong queue\n", + __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); + } + + if (tag > fnic->fnic_max_tag_id) { + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ", + __func__, __LINE__, hwq, mqtag, tag, cq_index); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "%s: %d: hdr status: %s Tag out of range\n", + __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); return; + } else if ((tag == fnic->fnic_max_tag_id) && !(id & FNIC_TAG_DEV_RST)) { + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ", + __func__, __LINE__, hwq, mqtag, tag, cq_index); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "%s: %d: hdr status: %s Tag out of range\n", + __func__, __LINE__, + fnic_fcpio_status_to_str(hdr_status)); + return; + } + + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); + + /* If it is sg3utils allocated SC then tag_id + * is max_tag_id and SC is retrieved from io_req + */ + if ((mqtag == fnic->fnic_max_tag_id) && (id & FNIC_TAG_DEV_RST)) { + io_req = fnic->sw_copy_wq[hwq].io_req_table[tag]; + if (io_req) + sc = io_req->sc; + } else { + sc = scsi_host_find_tag(fnic->lport->host, id & FNIC_TAG_MASK); } - sc = scsi_host_find_tag(fnic->lport->host, id & FNIC_TAG_MASK); WARN_ON_ONCE(!sc); if (!sc) { atomic64_inc(&fnic_stats->io_stats.sc_null); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); shost_printk(KERN_ERR, fnic->lport->host, "itmf_cmpl sc is null - hdr status = %s tag = 0x%x\n", fnic_fcpio_status_to_str(hdr_status), id); return; } - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + io_req = fnic_priv(sc)->io_req; WARN_ON_ONCE(!io_req); if (!io_req) { atomic64_inc(&fnic_stats->io_stats.ioreq_null); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_REQ_NULL; shost_printk(KERN_ERR, fnic->lport->host, "itmf_cmpl io_req is null - " @@ -1114,7 +1177,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, fnic_priv(sc)->flags |= FNIC_DEV_RST_DONE; if (io_req->abts_done) complete(io_req->abts_done); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } else if (id & FNIC_TAG_ABORT) { /* Completion of abort cmd */ switch (hdr_status) { @@ -1149,7 +1212,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, } if (fnic_priv(sc)->state != FNIC_IOREQ_ABTS_PENDING) { /* This is a late completion. 
Ignore it */ - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return; } @@ -1175,14 +1238,14 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, */ if (io_req->abts_done) { complete(io_req->abts_done); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } else { FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "abts cmpl, completing IO\n"); fnic_priv(sc)->io_req = NULL; sc->result = (DID_ERROR << 16); - - spin_unlock_irqrestore(io_lock, flags); + fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL; + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_release_ioreq_buf(fnic, io_req, sc); mempool_free(io_req, fnic->io_req_pool); @@ -1208,7 +1271,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, /* Completion of device reset */ fnic_priv(sc)->lr_status = hdr_status; if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_DEV_RST_ABTS_PENDING; FNIC_TRACE(fnic_fcpio_itmf_cmpl_handler, sc->device->host->host_no, id, sc, @@ -1223,7 +1286,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, } if (fnic_priv(sc)->flags & FNIC_DEV_RST_TIMED_OUT) { /* Need to wait for terminate completion */ - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); FNIC_TRACE(fnic_fcpio_itmf_cmpl_handler, sc->device->host->host_no, id, sc, jiffies_to_msecs(jiffies - start_time), @@ -1243,13 +1306,13 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, fnic_fcpio_status_to_str(hdr_status)); if (io_req->dr_done) complete(io_req->dr_done); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } else { shost_printk(KERN_ERR, fnic->lport->host, "Unexpected itmf io state %s tag %x\n", fnic_ioreq_state_to_str(fnic_priv(sc)->state), id); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } } @@ -1276,17 +1339,19 @@ static int fnic_fcpio_cmpl_handler(struct vnic_dev *vdev, break; } + cq_index -= fnic->copy_wq_base; + switch (desc->hdr.type) { case FCPIO_ACK: /* fw copied copy wq desc to its queue */ fnic_fcpio_ack_handler(fnic, cq_index, desc); break; case FCPIO_ICMND_CMPL: /* fw completed a command */ - fnic_fcpio_icmnd_cmpl_handler(fnic, desc); + fnic_fcpio_icmnd_cmpl_handler(fnic, cq_index, desc); break; case FCPIO_ITMF_CMPL: /* fw completed itmf (abort cmd, lun reset)*/ - fnic_fcpio_itmf_cmpl_handler(fnic, desc); + fnic_fcpio_itmf_cmpl_handler(fnic, cq_index, desc); break; case FCPIO_FLOGI_REG_CMPL: /* fw completed flogi_reg */ @@ -1339,18 +1404,33 @@ int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do, unsigned i static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) { - const int tag = scsi_cmd_to_rq(sc)->tag; + struct request *const rq = scsi_cmd_to_rq(sc); struct fnic *fnic = data; struct fnic_io_req *io_req; unsigned long flags = 0; - spinlock_t *io_lock; unsigned long start_time = 0; struct fnic_stats *fnic_stats = &fnic->fnic_stats; + uint16_t hwq = 0; + int tag; + int mqtag; + + mqtag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(mqtag); + tag = blk_mq_unique_tag_to_tag(mqtag); - io_lock = fnic_io_lock_tag(fnic, tag); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); + + fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL; 
io_req = fnic_priv(sc)->io_req; + if (!io_req) { + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host, + "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x flags: 0x%x No ioreq. Returning\n", + fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, fnic_priv(sc)->flags); + return true; + } + if ((fnic_priv(sc)->flags & FNIC_DEVICE_RESET) && !(fnic_priv(sc)->flags & FNIC_DEV_RST_DONE)) { /* @@ -1362,20 +1442,16 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) complete(io_req->dr_done); else if (io_req && io_req->abts_done) complete(io_req->abts_done); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } else if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } - if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); - goto cleanup_scsi_cmd; - } fnic_priv(sc)->io_req = NULL; - - spin_unlock_irqrestore(io_lock, flags); + io_req->sc = NULL; + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* * If there is a scsi_cmnd associated with this io_req, then @@ -1385,7 +1461,6 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) fnic_release_ioreq_buf(fnic, io_req, sc); mempool_free(io_req, fnic->io_req_pool); -cleanup_scsi_cmd: sc->result = DID_TRANSPORT_DISRUPTED << 16; FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "fnic_cleanup_io: tag:0x%x : sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n", @@ -1396,12 +1471,6 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data) else atomic64_inc(&fnic_stats->io_stats.io_completions); - /* Complete the command to SCSI */ - if (!(fnic_priv(sc)->flags & FNIC_IO_ISSUED)) - shost_printk(KERN_ERR, fnic->lport->host, - "Calling done for IO not issued to fw: tag:0x%x sc:0x%p\n", - tag, sc); - FNIC_TRACE(fnic_cleanup_io, sc->device->host->host_no, tag, sc, jiffies_to_msecs(jiffies - start_time), @@ -1430,8 +1499,8 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, struct fnic_io_req *io_req; struct scsi_cmnd *sc; unsigned long flags; - spinlock_t *io_lock; unsigned long start_time = 0; + uint16_t hwq; /* get the tag reference */ fcpio_tag_id_dec(&desc->hdr.tag, &id); @@ -1444,8 +1513,8 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, if (!sc) return; - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + hwq = blk_mq_unique_tag_to_hwq(id); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); /* Get the IO context which this desc refers to */ io_req = fnic_priv(sc)->io_req; @@ -1453,13 +1522,15 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, /* fnic interrupts are turned off by now */ if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto wq_copy_cleanup_scsi_cmd; } fnic_priv(sc)->io_req = NULL; + io_req->sc = NULL; + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(id)] = NULL; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); start_time = io_req->start_time; fnic_release_ioreq_buf(fnic, io_req, sc); @@ -1483,9 +1554,10 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq, static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, u32 task_req, u8 *fc_lun, - struct fnic_io_req *io_req) + struct fnic_io_req *io_req, + unsigned int hwq) { - struct vnic_wq_copy *wq = 
&fnic->hw_copy_wq[0]; + struct vnic_wq_copy *wq = &fnic->hw_copy_wq[hwq]; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; unsigned long flags; @@ -1498,13 +1570,13 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, atomic_inc(&fnic->in_flight); spin_unlock_irqrestore(&fnic->fnic_lock, flags); - spin_lock_irqsave(&fnic->wq_copy_lock[0], flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); - if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0]) - free_wq_copy_descs(fnic, wq); + if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[hwq]) + free_wq_copy_descs(fnic, wq, hwq); if (!vnic_wq_copy_desc_avail(wq)) { - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "fnic_queue_abort_io_req: failure: no descriptors\n"); @@ -1521,7 +1593,7 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag, atomic64_set(&fnic->fnic_stats.fw_stats.max_fw_reqs, atomic64_read(&fnic->fnic_stats.fw_stats.active_fw_reqs)); - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); return 0; @@ -1535,24 +1607,27 @@ struct fnic_rport_abort_io_iter_data { static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) { + struct request *const rq = scsi_cmd_to_rq(sc); struct fnic_rport_abort_io_iter_data *iter_data = data; struct fnic *fnic = iter_data->fnic; - int abt_tag = scsi_cmd_to_rq(sc)->tag; + int abt_tag = 0; struct fnic_io_req *io_req; - spinlock_t *io_lock; unsigned long flags; struct reset_stats *reset_stats = &fnic->fnic_stats.reset_stats; struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats; struct scsi_lun fc_lun; enum fnic_ioreq_state old_ioreq_state; + uint16_t hwq = 0; + + abt_tag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(abt_tag); - io_lock = fnic_io_lock_tag(fnic, abt_tag); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req || io_req->port_id != iter_data->port_id) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -1561,7 +1636,7 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "fnic_rport_exch_reset dev rst not pending sc 0x%p\n", sc); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -1570,7 +1645,7 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) * belongs to rport that went away */ if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } if (io_req->abts_done) { @@ -1601,31 +1676,31 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data) FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "fnic_rport_reset_exch: Issuing abts\n"); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* Now queue the abort command to firmware */ int_to_scsilun(sc->device->lun, &fc_lun); if (fnic_queue_abort_io_req(fnic, abt_tag, FCPIO_ITMF_ABT_TASK_TERM, - fc_lun.scsi_lun, io_req)) { + fc_lun.scsi_lun, io_req, hwq)) { /* * Revert the cmd state back to old state, if * it hasn't changed in between. 
This cmd will get * aborted later by scsi_eh, or cleaned up during * lun reset */ - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) fnic_priv(sc)->state = old_ioreq_state; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } else { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) fnic_priv(sc)->flags |= FNIC_DEV_RST_TERM_ISSUED; else fnic_priv(sc)->flags |= FNIC_IO_INTERNAL_TERM_ISSUED; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic64_inc(&term_stats->terminates); iter_data->term_cnt++; } @@ -1703,7 +1778,6 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) struct fnic *fnic; struct fnic_io_req *io_req = NULL; struct fc_rport *rport; - spinlock_t *io_lock; unsigned long flags; unsigned long start_time = 0; int ret = SUCCESS; @@ -1713,8 +1787,10 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) struct abort_stats *abts_stats; struct terminate_stats *term_stats; enum fnic_ioreq_state old_ioreq_state; - const int tag = rq->tag; + int mqtag; unsigned long abt_issued_time; + uint16_t hwq = 0; + DECLARE_COMPLETION_ONSTACK(tm_done); /* Wait for rport to unblock */ @@ -1724,23 +1800,25 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) lp = shost_priv(sc->device->host); fnic = lport_priv(lp); + + spin_lock_irqsave(&fnic->fnic_lock, flags); fnic_stats = &fnic->fnic_stats; abts_stats = &fnic->fnic_stats.abts_stats; term_stats = &fnic->fnic_stats.term_stats; rport = starget_to_rport(scsi_target(sc->device)); - FNIC_SCSI_DBG(KERN_DEBUG, - fnic->lport->host, - "Abort Cmd called FCID 0x%x, LUN 0x%llx TAG %x flags %x\n", - rport->port_id, sc->device->lun, tag, fnic_priv(sc)->flags); + mqtag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(mqtag); fnic_priv(sc)->flags = FNIC_NO_FLAGS; if (lp->state != LPORT_ST_READY || !(lp->link_up)) { ret = FAILED; + spin_unlock_irqrestore(&fnic->fnic_lock, flags); goto fnic_abort_cmd_end; } + spin_unlock_irqrestore(&fnic->fnic_lock, flags); /* * Avoid a race between SCSI issuing the abort and the device * completing the command. @@ -1753,18 +1831,17 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) * * .io_req will not be cleared except while holding io_req_lock. */ - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto fnic_abort_cmd_end; } io_req->abts_done = &tm_done; if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto wait_pending; } @@ -1796,7 +1873,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) fnic_priv(sc)->state = FNIC_IOREQ_ABTS_PENDING; fnic_priv(sc)->abts_status = FCPIO_INVALID_CODE; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* * Check readiness of the remote port. 
If the path to remote @@ -1813,15 +1890,15 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) /* Now queue the abort command to firmware */ int_to_scsilun(sc->device->lun, &fc_lun); - if (fnic_queue_abort_io_req(fnic, tag, task_req, fc_lun.scsi_lun, - io_req)) { - spin_lock_irqsave(io_lock, flags); + if (fnic_queue_abort_io_req(fnic, mqtag, task_req, fc_lun.scsi_lun, + io_req, hwq)) { + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) fnic_priv(sc)->state = old_ioreq_state; io_req = fnic_priv(sc)->io_req; if (io_req) io_req->abts_done = NULL; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); ret = FAILED; goto fnic_abort_cmd_end; } @@ -1845,12 +1922,12 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) fnic->config.ed_tov)); /* Check the abort status */ - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { atomic64_inc(&fnic_stats->io_stats.ioreq_null); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_REQ_NULL; ret = FAILED; goto fnic_abort_cmd_end; @@ -1859,7 +1936,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) /* fw did not complete abort, timed out */ if (fnic_priv(sc)->abts_status == FCPIO_INVALID_CODE) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (task_req == FCPIO_ITMF_ABT_TASK) { atomic64_inc(&abts_stats->abort_drv_timeouts); } else { @@ -1873,7 +1950,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) /* IO out of order */ if (!(fnic_priv(sc)->flags & (FNIC_IO_ABORTED | FNIC_IO_DONE))) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "Issuing Host reset due to out of order IO\n"); @@ -1889,15 +1966,18 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) * free the io_req if successful. If abort fails, * Device reset will clean the I/O. 
*/ - if (fnic_priv(sc)->abts_status == FCPIO_SUCCESS) { + if (fnic_priv(sc)->abts_status == FCPIO_SUCCESS || + (fnic_priv(sc)->abts_status == FCPIO_ABORTED)) { fnic_priv(sc)->io_req = NULL; + io_req->sc = NULL; } else { ret = FAILED; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto fnic_abort_cmd_end; } - spin_unlock_irqrestore(io_lock, flags); + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = NULL; + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_release_ioreq_buf(fnic, io_req, sc); mempool_free(io_req, fnic->io_req_pool); @@ -1912,7 +1992,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc) atomic64_inc(&fnic_stats->io_stats.io_completions); fnic_abort_cmd_end: - FNIC_TRACE(fnic_abort_cmd, sc->device->host->host_no, tag, sc, + FNIC_TRACE(fnic_abort_cmd, sc->device->host->host_no, mqtag, sc, jiffies_to_msecs(jiffies - start_time), 0, ((u64)sc->cmnd[0] << 32 | (u64)sc->cmnd[2] << 24 | (u64)sc->cmnd[3] << 16 | @@ -1930,25 +2010,31 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic, struct scsi_cmnd *sc, struct fnic_io_req *io_req) { - struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0]; + struct vnic_wq_copy *wq; struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats; struct scsi_lun fc_lun; int ret = 0; - unsigned long intr_flags; + unsigned long flags; + uint16_t hwq = 0; + uint32_t tag = 0; + + tag = io_req->tag; + hwq = blk_mq_unique_tag_to_hwq(tag); + wq = &fnic->hw_copy_wq[hwq]; - spin_lock_irqsave(&fnic->fnic_lock, intr_flags); + spin_lock_irqsave(&fnic->fnic_lock, flags); if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) { - spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); return FAILED; } else atomic_inc(&fnic->in_flight); - spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags); + spin_unlock_irqrestore(&fnic->fnic_lock, flags); - spin_lock_irqsave(&fnic->wq_copy_lock[0], intr_flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); - if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0]) - free_wq_copy_descs(fnic, wq); + if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[hwq]) + free_wq_copy_descs(fnic, wq, hwq); if (!vnic_wq_copy_desc_avail(wq)) { FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, @@ -1973,7 +2059,7 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic, atomic64_read(&fnic->fnic_stats.fw_stats.active_fw_reqs)); lr_io_req_end: - spin_unlock_irqrestore(&fnic->wq_copy_lock[0], intr_flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); atomic_dec(&fnic->in_flight); return ret; @@ -1988,12 +2074,13 @@ struct fnic_pending_aborts_iter_data { static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) { + struct request *const rq = scsi_cmd_to_rq(sc); struct fnic_pending_aborts_iter_data *iter_data = data; struct fnic *fnic = iter_data->fnic; struct scsi_device *lun_dev = iter_data->lun_dev; - int abt_tag = scsi_cmd_to_rq(sc)->tag; + unsigned long abt_tag = 0; + uint16_t hwq = 0; struct fnic_io_req *io_req; - spinlock_t *io_lock; unsigned long flags; struct scsi_lun fc_lun; DECLARE_COMPLETION_ONSTACK(tm_done); @@ -2002,11 +2089,13 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) if (sc == iter_data->lr_sc || sc->device != lun_dev) return true; - io_lock = fnic_io_lock_tag(fnic, abt_tag); - spin_lock_irqsave(io_lock, flags); + abt_tag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(abt_tag); + + 
spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -2019,7 +2108,7 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) fnic_ioreq_state_to_str(fnic_priv(sc)->state)); if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } if ((fnic_priv(sc)->flags & FNIC_DEVICE_RESET) && @@ -2027,7 +2116,7 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, "%s dev rst not pending sc 0x%p\n", __func__, sc); - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -2048,35 +2137,34 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) BUG_ON(io_req->abts_done); if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) { - abt_tag |= FNIC_TAG_DEV_RST; FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host, "%s: dev rst sc 0x%p\n", __func__, sc); } fnic_priv(sc)->abts_status = FCPIO_INVALID_CODE; io_req->abts_done = &tm_done; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* Now queue the abort command to firmware */ int_to_scsilun(sc->device->lun, &fc_lun); if (fnic_queue_abort_io_req(fnic, abt_tag, FCPIO_ITMF_ABT_TASK_TERM, - fc_lun.scsi_lun, io_req)) { - spin_lock_irqsave(io_lock, flags); + fc_lun.scsi_lun, io_req, hwq)) { + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (io_req) io_req->abts_done = NULL; if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) fnic_priv(sc)->state = old_ioreq_state; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); iter_data->ret = FAILED; return false; } else { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) fnic_priv(sc)->flags |= FNIC_DEV_RST_TERM_ISSUED; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } fnic_priv(sc)->flags |= FNIC_IO_INTERNAL_TERM_ISSUED; @@ -2084,10 +2172,10 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) (fnic->config.ed_tov)); /* Recheck cmd state to check if it is now aborted */ - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_REQ_NULL; return true; } @@ -2096,7 +2184,7 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) /* if abort is still pending with fw, fail */ if (fnic_priv(sc)->abts_status == FCPIO_INVALID_CODE) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_DONE; iter_data->ret = FAILED; return false; @@ -2104,9 +2192,11 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data) fnic_priv(sc)->state = FNIC_IOREQ_ABTS_COMPLETE; /* original sc used for lr is handled by dev reset code */ - if (sc != iter_data->lr_sc) + if (sc != iter_data->lr_sc) { fnic_priv(sc)->io_req = NULL; - spin_unlock_irqrestore(io_lock, flags); + 
fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(abt_tag)] = NULL; + } + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* original sc used for lr is handled by dev reset code */ if (sc != iter_data->lr_sc) { @@ -2178,15 +2268,15 @@ int fnic_device_reset(struct scsi_cmnd *sc) struct fc_rport *rport; int status; int ret = FAILED; - spinlock_t *io_lock; unsigned long flags; unsigned long start_time = 0; struct scsi_lun fc_lun; struct fnic_stats *fnic_stats; struct reset_stats *reset_stats; - int tag = rq->tag; + int mqtag = rq->tag; DECLARE_COMPLETION_ONSTACK(tm_done); bool new_sc = 0; + uint16_t hwq = 0; /* Wait for rport to unblock */ fc_block_scsi_eh(sc); @@ -2216,7 +2306,7 @@ int fnic_device_reset(struct scsi_cmnd *sc) fnic_priv(sc)->flags = FNIC_DEVICE_RESET; - if (unlikely(tag < 0)) { + if (unlikely(mqtag < 0)) { /* * For device reset issued through sg3utils, we let * only one LUN_RESET to go through and use a special @@ -2225,11 +2315,14 @@ int fnic_device_reset(struct scsi_cmnd *sc) * allocated by mid layer. */ mutex_lock(&fnic->sgreset_mutex); - tag = fnic->fnic_max_tag_id; + mqtag = fnic->fnic_max_tag_id; new_sc = 1; + } else { + mqtag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(mqtag); } - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; /* @@ -2239,34 +2332,43 @@ int fnic_device_reset(struct scsi_cmnd *sc) if (!io_req) { io_req = mempool_alloc(fnic->io_req_pool, GFP_ATOMIC); if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); goto fnic_device_reset_end; } memset(io_req, 0, sizeof(*io_req)); io_req->port_id = rport->port_id; + io_req->tag = mqtag; fnic_priv(sc)->io_req = io_req; + io_req->sc = sc; + + if (fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] != NULL) + WARN(1, "fnic<%d>: %s: tag 0x%x already exists\n", + fnic->fnic_num, __func__, blk_mq_unique_tag_to_tag(mqtag)); + + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = + io_req; } io_req->dr_done = &tm_done; fnic_priv(sc)->state = FNIC_IOREQ_CMD_PENDING; fnic_priv(sc)->lr_status = FCPIO_INVALID_CODE; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); - FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "TAG %x\n", tag); + FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "TAG %x\n", mqtag); /* * issue the device reset, if enqueue failed, clean up the ioreq * and break assoc with scsi cmd */ if (fnic_queue_dr_io_req(fnic, sc, io_req)) { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (io_req) io_req->dr_done = NULL; goto fnic_device_reset_clean; } - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_DEV_RST_ISSUED; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); /* * Wait on the local completion for LUN reset. 
The io_req may be @@ -2275,12 +2377,12 @@ int fnic_device_reset(struct scsi_cmnd *sc) wait_for_completion_timeout(&tm_done, msecs_to_jiffies(FNIC_LUN_RESET_TIMEOUT)); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "io_req is null tag 0x%x sc 0x%p\n", tag, sc); + "io_req is null mqtag 0x%x sc 0x%p\n", mqtag, sc); goto fnic_device_reset_end; } io_req->dr_done = NULL; @@ -2296,41 +2398,41 @@ int fnic_device_reset(struct scsi_cmnd *sc) FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "Device reset timed out\n"); fnic_priv(sc)->flags |= FNIC_DEV_RST_TIMED_OUT; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); int_to_scsilun(sc->device->lun, &fc_lun); /* * Issue abort and terminate on device reset request. * If q'ing of terminate fails, retry it after a delay. */ while (1) { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (fnic_priv(sc)->flags & FNIC_DEV_RST_TERM_ISSUED) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); break; } - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (fnic_queue_abort_io_req(fnic, - tag | FNIC_TAG_DEV_RST, + mqtag | FNIC_TAG_DEV_RST, FCPIO_ITMF_ABT_TASK_TERM, - fc_lun.scsi_lun, io_req)) { + fc_lun.scsi_lun, io_req, hwq)) { wait_for_completion_timeout(&tm_done, msecs_to_jiffies(FNIC_ABT_TERM_DELAY_TIMEOUT)); } else { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); fnic_priv(sc)->flags |= FNIC_DEV_RST_TERM_ISSUED; fnic_priv(sc)->state = FNIC_IOREQ_ABTS_PENDING; io_req->abts_done = &tm_done; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, - "Abort and terminate issued on Device reset " - "tag 0x%x sc 0x%p\n", tag, sc); + "Abort and terminate issued on Device reset mqtag 0x%x sc 0x%p\n", + mqtag, sc); break; } } while (1) { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); if (!(fnic_priv(sc)->flags & FNIC_DEV_RST_DONE)) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); wait_for_completion_timeout(&tm_done, msecs_to_jiffies(FNIC_LUN_RESET_TIMEOUT)); break; @@ -2341,12 +2443,12 @@ int fnic_device_reset(struct scsi_cmnd *sc) } } } else { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); } /* Completed, but not successful, clean up the io_req, return fail */ if (status != FCPIO_SUCCESS) { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "Device reset completed - failed\n"); @@ -2362,7 +2464,7 @@ int fnic_device_reset(struct scsi_cmnd *sc) * succeeds */ if (fnic_clean_pending_aborts(fnic, sc, new_sc)) { - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "Device reset failed" @@ -2371,17 +2473,20 @@ int fnic_device_reset(struct scsi_cmnd *sc) } /* Clean lun reset command */ - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); 
io_req = fnic_priv(sc)->io_req; if (io_req) /* Completed, and successful */ ret = SUCCESS; fnic_device_reset_clean: - if (io_req) + if (io_req) { fnic_priv(sc)->io_req = NULL; + io_req->sc = NULL; + fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(io_req->tag)] = NULL; + } - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (io_req) { start_time = io_req->start_time; @@ -2619,12 +2724,17 @@ void fnic_exch_mgr_reset(struct fc_lport *lp, u32 sid, u32 did) static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data) { + struct request *const rq = scsi_cmd_to_rq(sc); struct fnic_pending_aborts_iter_data *iter_data = data; struct fnic *fnic = iter_data->fnic; int cmd_state; struct fnic_io_req *io_req; - spinlock_t *io_lock; unsigned long flags; + uint16_t hwq = 0; + int tag; + + tag = blk_mq_unique_tag(rq); + hwq = blk_mq_unique_tag_to_hwq(tag); /* * ignore this lun reset cmd or cmds that do not belong to @@ -2635,12 +2745,11 @@ static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data) if (iter_data->lun_dev && sc->device != iter_data->lun_dev) return true; - io_lock = fnic_io_lock_hash(fnic, sc); - spin_lock_irqsave(io_lock, flags); + spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags); io_req = fnic_priv(sc)->io_req; if (!io_req) { - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); return true; } @@ -2652,7 +2761,7 @@ static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data) "Found IO in %s on lun\n", fnic_ioreq_state_to_str(fnic_priv(sc)->state)); cmd_state = fnic_priv(sc)->state; - spin_unlock_irqrestore(io_lock, flags); + spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags); if (cmd_state == FNIC_IOREQ_ABTS_PENDING) iter_data->ret = 1; From patchwork Wed Dec 6 18:46:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Karan Tilak Kumar \(kartilak\)" X-Patchwork-Id: 750971 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=cisco.com header.i=@cisco.com header.b="Rh7pbLO8" Received: from rcdn-iport-9.cisco.com (rcdn-iport-9.cisco.com [173.37.86.80]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6598F10FF; Wed, 6 Dec 2023 10:47:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=1056; q=dns/txt; s=iport; t=1701888435; x=1703098035; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=DvpvexVDMU5638wqFKGLX6VF8dgg0xDpZ3MUoNlfWeY=; b=Rh7pbLO8lRb+VAfCFVi9uxswufie0P0oQICcwGUSjpS9vbm3FS86YwiC 72/LAaW41Wso5O0weUDmLHb3Hq1D4G41dbU01C3SYn+YqvHQkclVCkbnE mhFTDt9RHm52ZV7IFAJUVdxz+CnpzcQKLDBRfpBBJTqeyfFAxwyOCld5K s=; X-CSE-ConnectionGUID: 0VqnhO/SSNu5vJN/vZvf+Q== X-CSE-MsgGUID: /Vur/SGrSamPKHoNazhmpw== X-IronPort-AV: E=Sophos;i="6.04,256,1695686400"; d="scan'208";a="154982619" Received: from alln-core-4.cisco.com ([173.36.13.137]) by rcdn-iport-9.cisco.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2023 18:47:14 +0000 Received: from localhost.cisco.com ([10.193.101.253]) (authenticated bits=0) by alln-core-4.cisco.com (8.15.2/8.15.2) with ESMTPSA id 3B6IkHD9010013 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 6 Dec 2023 18:47:13 GMT From: Karan Tilak Kumar To: sebaddel@cisco.com Cc: arulponn@cisco.com, djhawar@cisco.com, gcboffa@cisco.com, mkai2@cisco.com, satishkh@cisco.com, jejb@linux.ibm.com, 
martin.petersen@oracle.com, linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Karan Tilak Kumar Subject: [PATCH v5 13/13] scsi: fnic: Increment driver version Date: Wed, 6 Dec 2023 10:46:15 -0800 Message-Id: <20231206184615.878755-14-kartilak@cisco.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20231206184615.878755-1-kartilak@cisco.com> References: <20231206184615.878755-1-kartilak@cisco.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authenticated-User: kartilak@cisco.com X-Outbound-SMTP-Client: 10.193.101.253, [10.193.101.253] X-Outbound-Node: alln-core-4.cisco.com Increment driver version for multiqueue (MQ) Reviewed-by: Sesidhar Baddela Reviewed-by: Arulprabhu Ponnusamy Signed-off-by: Karan Tilak Kumar --- Changes between v4 and v5: Incorporate review comments from Martin: Modify patch commits to include a "---" separator. Changes between v2 and v3: Incorporate the following review comments from Hannes: Create a separate patch to increment driver version. Increment driver version number to 1.7.0.0. --- drivers/scsi/fnic/fnic.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h index c4edbd7dfc25..7241aebf79d6 100644 --- a/drivers/scsi/fnic/fnic.h +++ b/drivers/scsi/fnic/fnic.h @@ -27,7 +27,7 @@ #define DRV_NAME "fnic" #define DRV_DESCRIPTION "Cisco FCoE HBA Driver" -#define DRV_VERSION "1.6.0.56" +#define DRV_VERSION "1.7.0.0" #define PFX DRV_NAME ": " #define DFX DRV_NAME "%d: "