From patchwork Tue Mar 22 12:21:01 2022
X-Patchwork-Submitter: Sumit Saxena
X-Patchwork-Id: 553850
From: Sumit Saxena
To: linux-scsi@vger.kernel.org
Cc: martin.petersen@oracle.com, sathya.prakash@broadcom.com,
 kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com,
 sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena
Subject: [PATCH 1/7] mpi3mr: add BSG device support for controller management
Date: Tue, 22 Mar 2022 08:21:01 -0400
Message-Id: <20220322122107.8482-2-sumit.saxena@broadcom.com>
In-Reply-To: <20220322122107.8482-1-sumit.saxena@broadcom.com>
References: <20220322122107.8482-1-sumit.saxena@broadcom.com>
X-Mailing-List:
linux-scsi@vger.kernel.org This patch creates BSG device per controller during load and destroys during unload. BSG Device nodes will be named as /dev/bsg/mpi3mrctl0, /dev/bsg/mpi3mrctl1... Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/Kconfig | 1 + drivers/scsi/mpi3mr/Makefile | 1 + drivers/scsi/mpi3mr/mpi3mr.h | 8 +++ drivers/scsi/mpi3mr/mpi3mr_app.c | 106 +++++++++++++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_app.h | 20 ++++++ drivers/scsi/mpi3mr/mpi3mr_os.c | 2 + 6 files changed, 138 insertions(+) create mode 100644 drivers/scsi/mpi3mr/mpi3mr_app.c create mode 100644 drivers/scsi/mpi3mr/mpi3mr_app.h diff --git a/drivers/scsi/mpi3mr/Kconfig b/drivers/scsi/mpi3mr/Kconfig index f7882375e74f..8997531940c2 100644 --- a/drivers/scsi/mpi3mr/Kconfig +++ b/drivers/scsi/mpi3mr/Kconfig @@ -3,5 +3,6 @@ config SCSI_MPI3MR tristate "Broadcom MPI3 Storage Controller Device Driver" depends on PCI && SCSI + select BLK_DEV_BSGLIB help MPI3 based Storage & RAID Controllers Driver. diff --git a/drivers/scsi/mpi3mr/Makefile b/drivers/scsi/mpi3mr/Makefile index 7c2063e04c81..f5cdbe48c150 100644 --- a/drivers/scsi/mpi3mr/Makefile +++ b/drivers/scsi/mpi3mr/Makefile @@ -2,3 +2,4 @@ obj-m += mpi3mr.o mpi3mr-y += mpi3mr_os.o \ mpi3mr_fw.o \ + mpi3mr_app.o \ diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index 6672d907d75d..f790b84c810a 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -148,6 +148,7 @@ extern int prot_mask; #define MPI3MR_DEFAULT_MDTS (128 * 1024) #define MPI3MR_DEFAULT_PGSZEXP (12) + /* Command retry count definitions */ #define MPI3MR_DEV_RMHS_RETRY_COUNT 3 @@ -714,6 +715,8 @@ struct scmd_priv { * @default_qcount: Total Default queues * @active_poll_qcount: Currently active poll queue count * @requested_poll_qcount: User requested poll queue count + * @bsg_dev: BSG device structure + * @bsg_queue: Request queue for BSG device */ struct mpi3mr_ioc { struct list_head list; @@ -854,6 +857,9 @@ struct mpi3mr_ioc { u16 default_qcount; u16 active_poll_qcount; u16 requested_poll_qcount; + + struct device *bsg_dev; + struct request_queue *bsg_queue; }; /** @@ -962,5 +968,7 @@ void mpi3mr_check_rh_fault_ioc(struct mpi3mr_ioc *mrioc, u32 reason_code); int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc, struct op_reply_qinfo *op_reply_q); int mpi3mr_blk_mq_poll(struct Scsi_Host *shost, unsigned int queue_num); +void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc); +void mpi3mr_bsg_exit(struct mpi3mr_ioc *mrioc); #endif /*MPI3MR_H_INCLUDED*/ diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c new file mode 100644 index 000000000000..ac420730c8c4 --- /dev/null +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -0,0 +1,106 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Driver for Broadcom MPI3 Storage Controllers + * + * Copyright (C) 2017-2022 Broadcom Inc. + * (mailto: mpi3mr-linuxdrv.pdl@broadcom.com) + * + */ + +#include "mpi3mr.h" +#include "mpi3mr_app.h" +#include + +/** + * mpi3mr_bsg_request - bsg request entry point + * @job: BSG job reference + * + * This is driver's entry point for bsg requests + * + * Return: 0 on success and proper error codes on failure + */ +int mpi3mr_bsg_request(struct bsg_job *job) +{ + return 0; +} + +/** + * mpi3mr_bsg_exit - de-registration from bsg layer + * + * This will be called during driver unload and all + * bsg resources allocated during load will be freed. 
+ * + * Return:Nothing + */ +void mpi3mr_bsg_exit(struct mpi3mr_ioc *mrioc) +{ + if (!mrioc->bsg_queue) + return; + + bsg_remove_queue(mrioc->bsg_queue); + mrioc->bsg_queue = NULL; + + device_del(mrioc->bsg_dev); + put_device(mrioc->bsg_dev); + kfree(mrioc->bsg_dev); +} + +/** + * mpi3mr_bsg_node_release -release bsg device node + * @dev: bsg device node + * + * decrements bsg dev reference count + * + * Return:Nothing + */ +void mpi3mr_bsg_node_release(struct device *dev) +{ + put_device(dev); +} + +/** + * mpi3mr_bsg_init - registration with bsg layer + * + * This will be called during driver load and it will + * register driver with bsg layer + * + * Return:Nothing + */ +void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc) +{ + mrioc->bsg_dev = kzalloc(sizeof(struct device), GFP_KERNEL); + if (!mrioc->bsg_dev) { + ioc_err(mrioc, "bsg device mem allocation failed\n"); + return; + } + + device_initialize(mrioc->bsg_dev); + dev_set_name(mrioc->bsg_dev, "mpi3mrctl%u", mrioc->id); + + if (device_add(mrioc->bsg_dev)) { + ioc_err(mrioc, "%s: bsg device add failed\n", + dev_name(mrioc->bsg_dev)); + goto err_device_add; + } + + mrioc->bsg_dev->release = mpi3mr_bsg_node_release; + + mrioc->bsg_queue = bsg_setup_queue(mrioc->bsg_dev, dev_name(mrioc->bsg_dev), + mpi3mr_bsg_request, NULL, 0); + if (!mrioc->bsg_queue) { + ioc_err(mrioc, "%s: bsg registration failed\n", + dev_name(mrioc->bsg_dev)); + goto err_setup_queue; + } + + blk_queue_max_segment_size(mrioc->bsg_queue, MPI3MR_MAX_APP_XFER_SIZE); + blk_queue_max_hw_sectors(mrioc->bsg_queue, MPI3MR_MAX_APP_XFER_SECTORS); + + return; + +err_setup_queue: + device_del(mrioc->bsg_dev); + put_device(mrioc->bsg_dev); +err_device_add: + kfree(mrioc->bsg_dev); +} diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.h b/drivers/scsi/mpi3mr/mpi3mr_app.h new file mode 100644 index 000000000000..4bc31d45c610 --- /dev/null +++ b/drivers/scsi/mpi3mr/mpi3mr_app.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Driver for Broadcom MPI3 Storage Controllers + * + * Copyright (C) 2017-2022 Broadcom Inc. 
+ * (mailto: mpi3mr-linuxdrv.pdl@broadcom.com) + * + */ + +#ifndef MPI3MR_APP_H_INCLUDED +#define MPI3MR_APP_H_INCLUDED + +/* + * Maximum data transfer size definitions for management + * application commands + */ +#define MPI3MR_MAX_APP_XFER_SIZE (1 * 1024 * 1024) +#define MPI3MR_MAX_APP_XFER_SECTORS 2048 + +#endif diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c index f7cd70a15ea6..faf14a5f9123 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_os.c +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c @@ -4345,6 +4345,7 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id) } scsi_scan_host(shost); + mpi3mr_bsg_init(mrioc); return retval; addhost_failed: @@ -4389,6 +4390,7 @@ static void mpi3mr_remove(struct pci_dev *pdev) while (mrioc->reset_in_progress || mrioc->is_driver_loading) ssleep(1); + mpi3mr_bsg_exit(mrioc); mrioc->stop_drv_processing = 1; mpi3mr_cleanup_fwevt_list(mrioc); spin_lock_irqsave(&mrioc->fwevt_lock, flags);
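
The nodes created above are plain bsg character devices, so a management
application reaches the driver through the standard sg_io_v4 / SG_IO
interface rather than a private ioctl. The following is only an
illustrative sketch (not part of the patch): it assumes controller index 0
and submits an empty request frame, since the actual request layout is
defined by the later patches in this series.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>		/* SG_IO */
#include <linux/bsg.h>		/* struct sg_io_v4 */

int main(void)
{
	unsigned char request[64] = { 0 };	/* driver-specific request frame */
	struct sg_io_v4 io;
	int fd;

	fd = open("/dev/bsg/mpi3mrctl0", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&io, 0, sizeof(io));
	io.guard = 'Q';
	io.protocol = BSG_PROTOCOL_SCSI;
	io.subprotocol = BSG_SUB_PROTOCOL_SCSI_TRANSPORT;
	io.request_len = sizeof(request);
	io.request = (unsigned long)request;
	io.timeout = 60 * 1000;			/* milliseconds */

	if (ioctl(fd, SG_IO, &io) < 0)
		perror("SG_IO");

	close(fd);
	return 0;
}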
From patchwork Tue Mar 22 12:21:02 2022
X-Patchwork-Submitter: Sumit Saxena
X-Patchwork-Id: 553698
From: Sumit Saxena
To: linux-scsi@vger.kernel.org
Cc: martin.petersen@oracle.com, sathya.prakash@broadcom.com,
 kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com,
 sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena
Subject: [PATCH 2/7] mpi3mr: add support for driver commands
Date: Tue, 22 Mar 2022 08:21:02 -0400
Message-Id: <20220322122107.8482-3-sumit.saxena@broadcom.com>
In-Reply-To: <20220322122107.8482-1-sumit.saxena@broadcom.com>
References: <20220322122107.8482-1-sumit.saxena@broadcom.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Certain BSG commands are completed by the driver itself, without involving
the firmware. These requests are termed driver commands. This patch adds
support for them.

Signed-off-by: Sumit Saxena
---
drivers/scsi/mpi3mr/mpi3mr.h | 16 +- drivers/scsi/mpi3mr/mpi3mr_app.c | 394 ++++++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_app.h | 470 +++++++++++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_debug.h | 12 +- drivers/scsi/mpi3mr/mpi3mr_fw.c | 21 +- drivers/scsi/mpi3mr/mpi3mr_os.c | 3 + 6 files changed, 904 insertions(+), 12 deletions(-) diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index f790b84c810a..958675d917ee 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -89,7 +89,7 @@ extern int prot_mask; /* Reserved Host Tag definitions */ #define MPI3MR_HOSTTAG_INVALID 0xFFFF #define MPI3MR_HOSTTAG_INITCMDS 1 -#define MPI3MR_HOSTTAG_IOCTLCMDS 2 +#define MPI3MR_HOSTTAG_BSG_CMDS 2 #define MPI3MR_HOSTTAG_BLK_TMS 5 #define MPI3MR_NUM_DEVRMCMD 16 @@ -190,10 +190,10 @@ enum mpi3mr_iocstate { enum mpi3mr_reset_reason { MPI3MR_RESET_FROM_BRINGUP = 1, MPI3MR_RESET_FROM_FAULT_WATCH = 2, - MPI3MR_RESET_FROM_IOCTL = 3, + MPI3MR_RESET_FROM_APP = 3, MPI3MR_RESET_FROM_EH_HOS = 4, MPI3MR_RESET_FROM_TM_TIMEOUT = 5, - MPI3MR_RESET_FROM_IOCTL_TIMEOUT = 6, + MPI3MR_RESET_FROM_APP_TIMEOUT = 6, MPI3MR_RESET_FROM_MUR_FAILURE = 7, MPI3MR_RESET_FROM_CTLR_CLEANUP = 8, MPI3MR_RESET_FROM_CIACTIV_FAULT = 9, @@ -686,6 +686,7 @@ struct scmd_priv { * @chain_bitmap_sz: Chain buffer allocator bitmap size * @chain_bitmap: Chain buffer allocator bitmap * @chain_buf_lock: Chain buffer list lock + * @bsg_cmds: Command tracker for BSG command * @host_tm_cmds: Command tracker for task management commands * @dev_rmhs_cmds: Command tracker for device removal commands * @evtack_cmds: Command tracker for event ack commands @@ -717,6 +718,10 @@ struct scmd_priv { * @requested_poll_qcount: User requested poll queue count * @bsg_dev: BSG device structure * @bsg_queue: Request queue for BSG device + * @stop_bsgs: Stop BSG request flag + * @logdata_buf: Circular buffer to store log data entries + * @logdata_buf_idx: Index of entry in buffer to store + * @logdata_entry_sz: log data entry size */ struct mpi3mr_ioc { struct list_head list; @@ -823,6 +828,7 @@ struct mpi3mr_ioc { void *chain_bitmap; spinlock_t chain_buf_lock; + struct mpi3mr_drv_cmd bsg_cmds; struct mpi3mr_drv_cmd host_tm_cmds; struct mpi3mr_drv_cmd dev_rmhs_cmds[MPI3MR_NUM_DEVRMCMD]; struct mpi3mr_drv_cmd evtack_cmds[MPI3MR_NUM_EVTACKCMD]; @@ -860,6 +866,10 @@ struct mpi3mr_ioc
{ struct device *bsg_dev; struct request_queue *bsg_queue; + u8 stop_bsgs; + u8 *logdata_buf; + u16 logdata_buf_idx; + u16 logdata_entry_sz; }; /** diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c index ac420730c8c4..8efb33040626 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.c +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -11,6 +11,385 @@ #include "mpi3mr_app.h" #include +/** + * mpi3mr_bsg_verify_adapter - verify adapter number is valid + * @ioc_number: Adapter number + * @mriocpp: Pointer to hold per adapter instance + * + * This function checks whether given adapter number matches + * with an adapter id in the driver's list and if so fills + * pointer to the per adapter instance in mriocpp else set that + * to NULL. + * + * Return: Nothing. + */ +static void mpi3mr_bsg_verify_adapter(int ioc_number, + struct mpi3mr_ioc **mriocpp) +{ + struct mpi3mr_ioc *mrioc; + + spin_lock(&mrioc_list_lock); + list_for_each_entry(mrioc, &mrioc_list, list) { + if (mrioc->id != ioc_number) + continue; + spin_unlock(&mrioc_list_lock); + *mriocpp = mrioc; + return; + } + spin_unlock(&mrioc_list_lock); + *mriocpp = NULL; +} + +/** + * mpi3mr_enable_logdata - Handler for log data enable + * @mrioc: Adapter instance reference + * @job: BSG job reference + * + * This function enables log data caching in the driver if not + * already enabled and return the maximum number of log data + * entries that can be cached in the driver. + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_enable_logdata(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + long rval = -EINVAL; + struct mpi3mr_logdata_enable logdata_enable; + + if (mrioc->logdata_buf) + goto copy_user_data; + + mrioc->logdata_entry_sz = + (mrioc->reply_sz - (sizeof(struct mpi3_event_notification_reply) - 4)) + + MPI3MR_BSG_LOGDATA_ENTRY_HEADER_SZ; + mrioc->logdata_buf_idx = 0; + + mrioc->logdata_buf = kcalloc(MPI3MR_BSG_LOGDATA_MAX_ENTRIES, + mrioc->logdata_entry_sz, GFP_KERNEL); + if (!mrioc->logdata_buf) + return -ENOMEM; + +copy_user_data: + memset(&logdata_enable, 0, sizeof(logdata_enable)); + logdata_enable.max_entries = + MPI3MR_BSG_LOGDATA_MAX_ENTRIES; + if (job->request_payload.payload_len >= sizeof(logdata_enable)) { + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &logdata_enable, sizeof(logdata_enable)); + rval = 0; + } + return rval; +} +/** + * mpi3mr_get_logdata - Handler for get log data + * @mrioc: Adapter instance reference + * @job: BSG job pointer + * This function copies the log data entries to the user buffer + * when log caching is enabled in the driver. 
+ * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_get_logdata(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + u16 num_entries, sz, entry_sz = mrioc->logdata_entry_sz; + + if ((!mrioc->logdata_buf) || (job->request_payload.payload_len < entry_sz)) + return -EINVAL; + + num_entries = job->request_payload.payload_len / entry_sz; + if (num_entries > MPI3MR_BSG_LOGDATA_MAX_ENTRIES) + num_entries = MPI3MR_BSG_LOGDATA_MAX_ENTRIES; + sz = num_entries * entry_sz; + + if (job->request_payload.payload_len >= sz) { + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + mrioc->logdata_buf, sz); + return 0; + } + return -EINVAL; +} + +/** + * mpi3mr_get_all_tgt_info - Get all target information + * @mrioc: Adapter instance reference + * @job: BSG job reference + * + * This function copies the driver managed target devices device + * handle, persistent ID, bus ID and taret ID to the user + * provided buffer for the specific controller. This function + * also provides the number of devices managed by the driver for + * the specific controller. + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_get_all_tgt_info(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + long rval = -EINVAL; + u16 num_devices = 0, i = 0, size; + unsigned long flags; + struct mpi3mr_tgt_dev *tgtdev; + struct mpi3mr_device_map_info *devmap_info = NULL; + struct mpi3mr_all_tgt_info *alltgt_info = NULL; + uint32_t min_entrylen = 0, kern_entrylen = 0, usr_entrylen = 0; + + if (job->request_payload.payload_len < sizeof(u32)) { + dprint_bsg_err(mrioc, "%s: invalid size argument\n", + __func__); + return rval; + } + + spin_lock_irqsave(&mrioc->tgtdev_lock, flags); + list_for_each_entry(tgtdev, &mrioc->tgtdev_list, list) + num_devices++; + spin_unlock_irqrestore(&mrioc->tgtdev_lock, flags); + + if ((job->request_payload.payload_len == sizeof(u32)) || + list_empty(&mrioc->tgtdev_list)) { + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &num_devices, sizeof(num_devices)); + return 0; + } + + kern_entrylen = (num_devices - 1) * sizeof(*devmap_info); + size = sizeof(*alltgt_info) + kern_entrylen; + alltgt_info = kzalloc(size, GFP_KERNEL); + if (!alltgt_info) + return -ENOMEM; + + devmap_info = alltgt_info->dmi; + memset((u8 *)devmap_info, 0xFF, (kern_entrylen + sizeof(*devmap_info))); + spin_lock_irqsave(&mrioc->tgtdev_lock, flags); + list_for_each_entry(tgtdev, &mrioc->tgtdev_list, list) { + if (i < num_devices) { + devmap_info[i].handle = tgtdev->dev_handle; + devmap_info[i].perst_id = tgtdev->perst_id; + if (tgtdev->host_exposed && tgtdev->starget) { + devmap_info[i].target_id = tgtdev->starget->id; + devmap_info[i].bus_id = + tgtdev->starget->channel; + } + i++; + } + } + num_devices = i; + spin_unlock_irqrestore(&mrioc->tgtdev_lock, flags); + + memcpy(&alltgt_info->num_devices, &num_devices, sizeof(num_devices)); + + usr_entrylen = (job->request_payload.payload_len - sizeof(u32)) / sizeof(*devmap_info); + usr_entrylen *= sizeof(*devmap_info); + min_entrylen = min(usr_entrylen, kern_entrylen); + if (min_entrylen && (!memcpy(&alltgt_info->dmi, devmap_info, min_entrylen))) { + dprint_bsg_err(mrioc, "%s:%d: device map info copy failed\n", + __func__, __LINE__); + rval = -EFAULT; + goto out; + } + + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + alltgt_info, job->request_payload.payload_len); + rval = 0; +out: + kfree(alltgt_info); + return rval; +} + +/** + * 
mpi3mr_get_change_count - Get topology change count + * @mrioc: Adapter instance reference + * @job: BSG job reference + * + * This function copies the toplogy change count provided by the + * driver in events and cached in the driver to the user + * provided buffer for the specific controller. + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_get_change_count(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + long rval = -EINVAL; + struct mpi3mr_change_count chgcnt; + + memset(&chgcnt, 0, sizeof(chgcnt)); + chgcnt.change_count = mrioc->change_count; + if (job->request_payload.payload_len >= sizeof(chgcnt)) { + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &chgcnt, sizeof(chgcnt)); + rval = 0; + } + return rval; +} + +/** + * mpi3mr_bsg_adp_reset - Issue controller reset + * @mrioc: Adapter instance reference + * @job: BSG job reference + * + * This function identifies the user provided reset type and + * issues approporiate reset to the controller and wait for that + * to complete and reinitialize the controller and then returns + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_bsg_adp_reset(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + long rval = -EINVAL; + u8 save_snapdump; + struct mpi3mr_bsg_adp_reset adpreset; + + if (job->request_payload.payload_len != + sizeof(adpreset)) { + dprint_bsg_err(mrioc, "%s: invalid size argument\n", + __func__); + goto out; + } + + sg_copy_to_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &adpreset, sizeof(adpreset)); + + switch (adpreset.reset_type) { + case MPI3MR_BSG_ADPRESET_SOFT: + save_snapdump = 0; + break; + case MPI3MR_BSG_ADPRESET_DIAG_FAULT: + save_snapdump = 1; + break; + default: + dprint_bsg_err(mrioc, "%s: unknown reset_type(%d)\n", + __func__, adpreset.reset_type); + goto out; + } + + rval = mpi3mr_soft_reset_handler(mrioc, MPI3MR_RESET_FROM_APP, + save_snapdump); + + if (rval) + dprint_bsg_err(mrioc, + "%s: reset handler returned error(%ld) for reset type %d\n", + __func__, rval, adpreset.reset_type); +out: + return rval; +} + +/** + * mpi3mr_bsg_populate_adpinfo - Get adapter info command handler + * @mrioc: Adapter instance reference + * @job: BSG job reference + * + * This function provides adapter information for the given + * controller + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_bsg_populate_adpinfo(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + enum mpi3mr_iocstate ioc_state; + struct mpi3mr_bsg_in_adpinfo adpinfo; + + memset(&adpinfo, 0, sizeof(adpinfo)); + adpinfo.adp_type = MPI3MR_BSG_ADPTYPE_AVGFAMILY; + adpinfo.pci_dev_id = mrioc->pdev->device; + adpinfo.pci_dev_hw_rev = mrioc->pdev->revision; + adpinfo.pci_subsys_dev_id = mrioc->pdev->subsystem_device; + adpinfo.pci_subsys_ven_id = mrioc->pdev->subsystem_vendor; + adpinfo.pci_bus = mrioc->pdev->bus->number; + adpinfo.pci_dev = PCI_SLOT(mrioc->pdev->devfn); + adpinfo.pci_func = PCI_FUNC(mrioc->pdev->devfn); + adpinfo.pci_seg_id = pci_domain_nr(mrioc->pdev->bus); + adpinfo.app_intfc_ver = MPI3MR_IOCTL_VERSION; + + ioc_state = mpi3mr_get_iocstate(mrioc); + if (ioc_state == MRIOC_STATE_UNRECOVERABLE) + adpinfo.adp_state = MPI3MR_BSG_ADPSTATE_UNRECOVERABLE; + else if ((mrioc->reset_in_progress) || (mrioc->stop_bsgs)) + adpinfo.adp_state = MPI3MR_BSG_ADPSTATE_IN_RESET; + else if (ioc_state == MRIOC_STATE_FAULT) + adpinfo.adp_state = MPI3MR_BSG_ADPSTATE_FAULT; + else + adpinfo.adp_state 
= MPI3MR_BSG_ADPSTATE_OPERATIONAL; + + memcpy((u8 *)&adpinfo.driver_info, (u8 *)&mrioc->driver_info, + sizeof(adpinfo.driver_info)); + + if (job->request_payload.payload_len >= sizeof(adpinfo)) { + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &adpinfo, sizeof(adpinfo)); + return 0; + } + return -EINVAL; +} + +/** + * mpi3mr_bsg_process_drv_cmds - Driver Command handler + * @job: BSG job reference + * + * This function is the top level handler for driver commands, + * this does basic validation of the buffer and identifies the + * opcode and switches to correct sub handler. + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_bsg_process_drv_cmds(struct bsg_job *job) +{ + long rval = -EINVAL; + struct mpi3mr_ioc *mrioc = NULL; + struct mpi3mr_bsg_packet *bsg_req = NULL; + struct mpi3mr_bsg_drv_cmd *drvrcmd = NULL; + + bsg_req = job->request; + drvrcmd = &bsg_req->cmd.drvrcmd; + + mpi3mr_bsg_verify_adapter(drvrcmd->mrioc_id, &mrioc); + if (!mrioc) + return -ENODEV; + + if (drvrcmd->opcode == MPI3MR_DRVBSG_OPCODE_ADPINFO) { + rval = mpi3mr_bsg_populate_adpinfo(mrioc, job); + return rval; + } + + if (mutex_lock_interruptible(&mrioc->bsg_cmds.mutex)) + return -ERESTARTSYS; + + switch (drvrcmd->opcode) { + case MPI3MR_DRVBSG_OPCODE_ADPRESET: + rval = mpi3mr_bsg_adp_reset(mrioc, job); + break; + case MPI3MR_DRVBSG_OPCODE_ALLTGTDEVINFO: + rval = mpi3mr_get_all_tgt_info(mrioc, job); + break; + case MPI3MR_DRVBSG_OPCODE_GETCHGCNT: + rval = mpi3mr_get_change_count(mrioc, job); + break; + case MPI3MR_DRVBSG_OPCODE_LOGDATAENABLE: + rval = mpi3mr_enable_logdata(mrioc, job); + break; + case MPI3MR_DRVBSG_OPCODE_GETLOGDATA: + rval = mpi3mr_get_logdata(mrioc, job); + break; + case MPI3MR_DRVBSG_OPCODE_UNKNOWN: + default: + pr_err("%s: unsupported driver command opcode %d\n", + MPI3MR_DRIVER_NAME, drvrcmd->opcode); + break; + } + mutex_unlock(&mrioc->bsg_cmds.mutex); + return rval; +} + /** * mpi3mr_bsg_request - bsg request entry point * @job: BSG job reference @@ -21,6 +400,21 @@ */ int mpi3mr_bsg_request(struct bsg_job *job) { + long rval = -EINVAL; + struct mpi3mr_bsg_packet *bsg_req = job->request; + + switch (bsg_req->cmd_type) { + case MPI3MR_DRV_CMD: + rval = mpi3mr_bsg_process_drv_cmds(job); + break; + default: + pr_err("%s: unsupported BSG command(0x%08x)\n", + MPI3MR_DRIVER_NAME, bsg_req->cmd_type); + break; + } + + bsg_job_done(job, rval, job->reply_payload_rcv_len); + return 0; } diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.h b/drivers/scsi/mpi3mr/mpi3mr_app.h index 4bc31d45c610..d324a13c672c 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.h +++ b/drivers/scsi/mpi3mr/mpi3mr_app.h @@ -10,6 +10,476 @@ #ifndef MPI3MR_APP_H_INCLUDED #define MPI3MR_APP_H_INCLUDED + +/* Definitions for BSG commands */ +#define MPI3MR_IOCTL_VERSION 0x06 + +#define MPI3MR_APP_DEFAULT_TIMEOUT (60) /*seconds*/ + +#define MPI3MR_BSG_ADPTYPE_UNKNOWN 0 +#define MPI3MR_BSG_ADPTYPE_AVGFAMILY 1 + +#define MPI3MR_BSG_ADPSTATE_UNKNOWN 0 +#define MPI3MR_BSG_ADPSTATE_OPERATIONAL 1 +#define MPI3MR_BSG_ADPSTATE_FAULT 2 +#define MPI3MR_BSG_ADPSTATE_IN_RESET 3 +#define MPI3MR_BSG_ADPSTATE_UNRECOVERABLE 4 + +#define MPI3MR_BSG_ADPRESET_UNKNOWN 0 +#define MPI3MR_BSG_ADPRESET_SOFT 1 +#define MPI3MR_BSG_ADPRESET_DIAG_FAULT 2 + +#define MPI3MR_BSG_LOGDATA_MAX_ENTRIES 400 +#define MPI3MR_BSG_LOGDATA_ENTRY_HEADER_SZ 4 + +#define MPI3MR_DRVBSG_OPCODE_UNKNOWN 0 +#define MPI3MR_DRVBSG_OPCODE_ADPINFO 1 +#define MPI3MR_DRVBSG_OPCODE_ADPRESET 2 +#define 
MPI3MR_DRVBSG_OPCODE_ALLTGTDEVINFO 4 +#define MPI3MR_DRVBSG_OPCODE_GETCHGCNT 5 +#define MPI3MR_DRVBSG_OPCODE_LOGDATAENABLE 6 +#define MPI3MR_DRVBSG_OPCODE_PELENABLE 7 +#define MPI3MR_DRVBSG_OPCODE_GETLOGDATA 8 +#define MPI3MR_DRVBSG_OPCODE_QUERY_HDB 9 +#define MPI3MR_DRVBSG_OPCODE_REPOST_HDB 10 +#define MPI3MR_DRVBSG_OPCODE_UPLOAD_HDB 11 +#define MPI3MR_DRVBSG_OPCODE_REFRESH_HDB_TRIGGERS 12 + + +#define MPI3MR_BSG_BUFTYPE_UNKNOWN 0 +#define MPI3MR_BSG_BUFTYPE_RAIDMGMT_CMD 1 +#define MPI3MR_BSG_BUFTYPE_RAIDMGMT_RESP 2 +#define MPI3MR_BSG_BUFTYPE_DATA_IN 3 +#define MPI3MR_BSG_BUFTYPE_DATA_OUT 4 +#define MPI3MR_BSG_BUFTYPE_MPI_REPLY 5 +#define MPI3MR_BSG_BUFTYPE_ERR_RESPONSE 6 +#define MPI3MR_BSG_BUFTYPE_MPI_REQUEST 0xFE + +#define MPI3MR_BSG_MPI_REPLY_BUFTYPE_UNKNOWN 0 +#define MPI3MR_BSG_MPI_REPLY_BUFTYPE_STATUS 1 +#define MPI3MR_BSG_MPI_REPLY_BUFTYPE_ADDRESS 2 + +#define MPI3MR_HDB_BUFTYPE_UNKNOWN 0 +#define MPI3MR_HDB_BUFTYPE_TRACE 1 +#define MPI3MR_HDB_BUFTYPE_FIRMWARE 2 +#define MPI3MR_HDB_BUFTYPE_RESERVED 3 + +#define MPI3MR_HDB_BUFSTATUS_UNKNOWN 0 +#define MPI3MR_HDB_BUFSTATUS_NOT_ALLOCATED 1 +#define MPI3MR_HDB_BUFSTATUS_POSTED_UNPAUSED 2 +#define MPI3MR_HDB_BUFSTATUS_POSTED_PAUSED 3 +#define MPI3MR_HDB_BUFSTATUS_RELEASED 4 + +#define MPI3MR_HDB_TRIGGER_TYPE_UNKNOWN 0 +#define MPI3MR_HDB_TRIGGER_TYPE_DIAGFAULT 1 +#define MPI3MR_HDB_TRIGGER_TYPE_ELEMENT 2 +#define MPI3MR_HDB_TRIGGER_TYPE_MASTER 3 + + +/* Supported BSG commands */ +enum command { + MPI3MR_DRV_CMD = 1, + MPI3MR_MPT_CMD = 2, +}; + +/** + * struct mpi3mr_bsg_in_adpinfo - Adapter information request + * data returned by the driver. + * + * @adp_type: Adapter type + * @rsvd1: Reserved + * @pci_dev_id: PCI device ID of the adapter + * @pci_dev_hw_rev: PCI revision of the adapter + * @pci_subsys_dev_id: PCI subsystem device ID of the adapter + * @pci_subsys_ven_id: PCI subsystem vendor ID of the adapter + * @pci_dev: PCI device + * @pci_func: PCI function + * @pci_bus: PCI bus + * @rsvd2: Reserved + * @pci_seg_id: PCI segment ID + * @app_intfc_ver: version of the application interface definition + * @rsvd3: Reserved + * @rsvd4: Reserved + * @rsvd5: Reserved + * @driver_info: Driver Information (Version/Name) + */ +struct mpi3mr_bsg_in_adpinfo { + uint32_t adp_type; + uint32_t rsvd1; + uint32_t pci_dev_id; + uint32_t pci_dev_hw_rev; + uint32_t pci_subsys_dev_id; + uint32_t pci_subsys_ven_id; + uint32_t pci_dev:5; + uint32_t pci_func:3; + uint32_t pci_bus:8; + uint16_t rsvd2; + uint32_t pci_seg_id; + uint32_t app_intfc_ver; + uint8_t adp_state; + uint8_t rsvd3; + uint16_t rsvd4; + uint32_t rsvd5[2]; + struct mpi3_driver_info_layout driver_info; +}; + +/** + * struct mpi3mr_bsg_adp_reset - Adapter reset request + * payload data to the driver. + * + * @reset_type: Reset type + * @rsvd1: Reserved + * @rsvd2: Reserved + */ +struct mpi3mr_bsg_adp_reset { + uint8_t reset_type; + uint8_t rsvd1; + uint16_t rsvd2; +}; + +/** + * struct mpi3mr_change_count - Topology change count + * returned by the driver. 
+ * + * @change_count: Topology change count + * @rsvd: Reserved + */ +struct mpi3mr_change_count { + uint16_t change_count; + uint16_t rsvd; +}; + +/** + * struct mpi3mr_device_map_info - Target device mapping + * information + * + * @handle: Firmware device handle + * @perst_id: Persistent ID assigned by the firmware + * @target_id: Target ID assigned by the driver + * @bus_id: Bus ID assigned by the driver + * @rsvd1: Reserved + * @rsvd2: Reserved + */ +struct mpi3mr_device_map_info { + uint16_t handle; + uint16_t perst_id; + uint32_t target_id; + uint8_t bus_id; + uint8_t rsvd1; + uint16_t rsvd2; +}; + +/** + * struct mpi3mr_all_tgt_info - Target device mapping + * information returned by the driver + * + * @num_devices: The number of devices in driver's inventory + * @rsvd1: Reserved + * @rsvd2: Reserved + * @dmi: Variable length array of mapping information of targets + */ +struct mpi3mr_all_tgt_info { + uint16_t num_devices; + uint16_t rsvd1; + uint32_t rsvd2; + struct mpi3mr_device_map_info dmi[1]; +}; + +/** + * struct mpi3mr_logdata_enable - Number of log data + * entries saved by the driver returned as payload data for + * enable logdata BSG request by the driver. + * + * @max_entries: Number of log data entries cached by the driver + * @rsvd: Reserved + */ +struct mpi3mr_logdata_enable { + uint16_t max_entries; + uint16_t rsvd; +}; + +/** + * struct mpi3mr_bsg_out_pel_enable - PEL enable request payload + * data to the driver. + * + * @pel_locale: PEL locale to the firmware + * @pel_class: PEL class to the firmware + * @rsvd: Reserved + */ +struct mpi3mr_bsg_out_pel_enable { + uint16_t pel_locale; + uint8_t pel_class; + uint8_t rsvd; +}; + +/** + * struct mpi3mr_logdata_entry - Log data entry cached by the + * driver. + * + * @valid_entry: Is the entry valid + * @rsvd1: Reserved + * @rsvd2: Reserved + * @data: Variable length Log entry data + */ +struct mpi3mr_logdata_entry { + uint8_t valid_entry; + uint8_t rsvd1; + uint16_t rsvd2; + uint8_t data[1]; /* Variable length Array */ +}; + +/** + * struct mpi3mr_bsg_in_log_data - Log data entries saved by + * the driver returned as payload data for Get logdata request + * by the driver. + * + * @entry: Variable length Log data entry array + */ +struct mpi3mr_bsg_in_log_data { + struct mpi3mr_logdata_entry entry[1]; +}; + +/** + * struct mpi3mr_hdb_entry - host diag buffer entry. + * + * @buf_type: Buffer type + * @status: Buffer status + * @trigger_type: Trigger type + * @rsvd1: Reserved + * @size: Buffer size + * @rsvd2: Reserved + * @trigger_data: Trigger specific data + * @rsvd3: Reserved + * @rsvd4: Reserved + */ +struct mpi3mr_hdb_entry { + uint8_t buf_type; + uint8_t status; + uint8_t trigger_type; + uint8_t rsvd1; + uint16_t size; + uint16_t rsvd2; + uint64_t trigger_data; + uint32_t rsvd3; + uint32_t rsvd4; +}; + + +/** + * struct mpi3mr_bsg_in_hdb_status - This structure contains + * return data for the BSG request to retrieve the number of host + * diagnostic buffers supported by the driver and their current + * status and additional status specific data if any in forms of + * multiple hdb entries. 
+ * + * @num_hdb_types: Number of host diag buffer types supported + * @rsvd1: Reserved + * @rsvd2: Reserved + * @rsvd3: Reserved + * @entry: Variable length Diag buffer status entry array + */ +struct mpi3mr_bsg_in_hdb_status { + uint8_t num_hdb_types; + uint8_t rsvd1; + uint16_t rsvd2; + uint32_t rsvd3; + struct mpi3mr_hdb_entry entry[1]; +}; + +/** + * struct mpi3mr_bsg_out_repost_hdb - Repost host diagnostic + * buffer request payload data to the driver. + * + * @buf_type: Buffer type + * @rsvd1: Reserved + * @rsvd2: Reserved + */ +struct mpi3mr_bsg_out_repost_hdb { + uint8_t buf_type; + uint8_t rsvd1; + uint16_t rsvd2; +}; + +/** + * struct mpi3mr_bsg_out_upload_hdb - Upload host diagnostic + * buffer request payload data to the driver. + * + * @buf_type: Buffer type + * @rsvd1: Reserved + * @rsvd2: Reserved + * @start_offset: Start offset of the buffer from where to copy + * @length: Length of the buffer to copy + */ +struct mpi3mr_bsg_out_upload_hdb { + uint8_t buf_type; + uint8_t rsvd1; + uint16_t rsvd2; + uint32_t start_offset; + uint32_t length; +}; + +/** + * struct mpi3mr_bsg_out_refresh_hdb_triggers - Refresh host + * diagnostic buffer triggers request payload data to the driver. + * + * @page_type: Page type + * @rsvd1: Reserved + * @rsvd2: Reserved + */ +struct mpi3mr_bsg_out_refresh_hdb_triggers { + uint8_t page_type; + uint8_t rsvd1; + uint16_t rsvd2; +}; +/** + * struct mpi3mr_bsg_drv_cmd - Generic bsg data + * structure for all driver specific requests. + * + * @mrioc_id: Controller ID + * @opcode: Driver specific opcode + * @rsvd1: Reserved + * @rsvd2: Reserved + */ +struct mpi3mr_bsg_drv_cmd { + uint8_t mrioc_id; + uint8_t opcode; + uint16_t rsvd1; + uint32_t rsvd2[4]; +}; +/** + * struct mpi3mr_bsg_in_reply_buf - MPI reply buffer returned + * for MPI Passthrough request . + * + * @mpi_reply_type: Type of MPI reply + * @rsvd1: Reserved + * @rsvd2: Reserved + * @reply_buf: Variable Length buffer based on mpirep type + */ +struct mpi3mr_bsg_in_reply_buf { + uint8_t mpi_reply_type; + uint8_t rsvd1; + uint16_t rsvd2; + uint8_t reply_buf[1]; +}; + +/** + * struct mpi3mr_buf_entry - User buffer descriptor for MPI + * Passthrough requests. + * + * @buf_type: Buffer type + * @rsvd1: Reserved + * @rsvd2: Reserved + * @buf_len: Buffer length + */ +struct mpi3mr_buf_entry { + uint8_t buf_type; + uint8_t rsvd1; + uint16_t rsvd2; + uint32_t buf_len; +}; +/** + * struct mpi3mr_bsg_buf_entry_list - list of user buffer + * descriptor for MPI Passthrough requests. + * + * @num_of_entries: Number of buffer descriptors + * @rsvd1: Reserved + * @rsvd2: Reserved + * @rsvd3: Reserved + * @buf_entry: Variable length array of buffer descriptors + */ +struct mpi3mr_buf_entry_list { + uint8_t num_of_entries; + uint8_t rsvd1; + uint16_t rsvd2; + uint32_t rsvd3; + struct mpi3mr_buf_entry buf_entry[1]; +}; +/** + * struct mpi3mr_bsg_mptcmd - Generic bsg data + * structure for all MPI Passthrough requests . + * + * @mrioc_id: Controller ID + * @rsvd1: Reserved + * @timeout: MPI request timeout + * @buf_entry_list: Buffer descriptor list + */ +struct mpi3mr_bsg_mptcmd { + uint8_t mrioc_id; + uint8_t rsvd1; + uint16_t timeout; + uint32_t rsvd2; + struct mpi3mr_buf_entry_list buf_entry_list; +}; + +/** + * struct mpi3mr_bsg_packet - Generic bsg data + * structure for all supported requests . 
+ * + * @cmd_type: represents drvrcmd or mptcmd + * @rsvd1: Reserved + * @rsvd2: Reserved + * @drvrcmd: driver request structure + * @mptcmd: mpt request structure + */ +struct mpi3mr_bsg_packet { + uint8_t cmd_type; + uint8_t rsvd1; + uint16_t rsvd2; + uint32_t rsvd3; + union { + struct mpi3mr_bsg_drv_cmd drvrcmd; + struct mpi3mr_bsg_mptcmd mptcmd; + } cmd; +}; + +/** + * struct mpi3mr_buf_map - local structure to + * track kernel and user buffers associated with an BSG + * structure. + * + * @bsg_buf: BSG buffer virtual address + * @bsg_buf_len: BSG buffer length + * @kern_buf: Kernel buffer virtual address + * @kern_buf_len: Kernel buffer length + * @kern_buf_dma: Kernel buffer DMA address + * @data_dir: Data direction. + */ +struct mpi3mr_buf_map { + void *bsg_buf; + u32 bsg_buf_len; + void *kern_buf; + u32 kern_buf_len; + dma_addr_t kern_buf_dma; + u8 data_dir; +}; + +/* Encapsulated NVMe command definitions */ +#define MPI3MR_NVME_PRP_SIZE 8 /* PRP size */ +#define MPI3MR_NVME_CMD_PRP1_OFFSET 24 /* PRP1 offset in NVMe cmd */ +#define MPI3MR_NVME_CMD_PRP2_OFFSET 32 /* PRP2 offset in NVMe cmd */ +#define MPI3MR_NVME_CMD_SGL_OFFSET 24 /* SGL offset in NVMe cmd */ +#define MPI3MR_NVME_DATA_FORMAT_PRP 0 +#define MPI3MR_NVME_DATA_FORMAT_SGL1 1 +#define MPI3MR_NVME_DATA_FORMAT_SGL2 2 + +/** + * struct mpi3mr_nvme_pt_sge - Structure to store SGEs for NVMe + * Encapsulated commands. + * + * @base_addr: Physical address + * @length: SGE length + * @rsvd: Reserved + * @rsvd1: Reserved + * @sgl_type: sgl type + */ +struct mpi3mr_nvme_pt_sge { + u64 base_addr; + u32 length; + u16 rsvd; + u8 rsvd1; + u8 sgl_type; +}; + /* * Maximum data transfer size definitions for management * application commands diff --git a/drivers/scsi/mpi3mr/mpi3mr_debug.h b/drivers/scsi/mpi3mr/mpi3mr_debug.h index c7982443f45a..65bfac72948c 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_debug.h +++ b/drivers/scsi/mpi3mr/mpi3mr_debug.h @@ -23,8 +23,8 @@ #define MPI3_DEBUG_RESET 0x00000020 #define MPI3_DEBUG_SCSI_ERROR 0x00000040 #define MPI3_DEBUG_REPLY 0x00000080 -#define MPI3_DEBUG_IOCTL_ERROR 0x00008000 -#define MPI3_DEBUG_IOCTL_INFO 0x00010000 +#define MPI3_DEBUG_BSG_ERROR 0x00008000 +#define MPI3_DEBUG_BSG_INFO 0x00010000 #define MPI3_DEBUG_SCSI_INFO 0x00020000 #define MPI3_DEBUG 0x01000000 #define MPI3_DEBUG_SG 0x02000000 @@ -110,15 +110,15 @@ } while (0) -#define dprint_ioctl_info(ioc, fmt, ...) \ +#define dprint_bsg_info(ioc, fmt, ...) \ do { \ - if (ioc->logging_level & MPI3_DEBUG_IOCTL_INFO) \ + if (ioc->logging_level & MPI3_DEBUG_BSG_INFO) \ pr_info("%s: " fmt, (ioc)->name, ##__VA_ARGS__); \ } while (0) -#define dprint_ioctl_err(ioc, fmt, ...) \ +#define dprint_bsg_err(ioc, fmt, ...) 
\ do { \ - if (ioc->logging_level & MPI3_DEBUG_IOCTL_ERROR) \ + if (ioc->logging_level & MPI3_DEBUG_BSG_ERROR) \ pr_info("%s: " fmt, (ioc)->name, ##__VA_ARGS__); \ } while (0) diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c index e25c02466043..480730721f50 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_fw.c +++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c @@ -297,6 +297,8 @@ mpi3mr_get_drv_cmd(struct mpi3mr_ioc *mrioc, u16 host_tag, switch (host_tag) { case MPI3MR_HOSTTAG_INITCMDS: return &mrioc->init_cmds; + case MPI3MR_HOSTTAG_BSG_CMDS: + return &mrioc->bsg_cmds; case MPI3MR_HOSTTAG_BLK_TMS: return &mrioc->host_tm_cmds; case MPI3MR_HOSTTAG_INVALID: @@ -865,10 +867,10 @@ static const struct { } mpi3mr_reset_reason_codes[] = { { MPI3MR_RESET_FROM_BRINGUP, "timeout in bringup" }, { MPI3MR_RESET_FROM_FAULT_WATCH, "fault" }, - { MPI3MR_RESET_FROM_IOCTL, "application invocation" }, + { MPI3MR_RESET_FROM_APP, "application invocation" }, { MPI3MR_RESET_FROM_EH_HOS, "error handling" }, { MPI3MR_RESET_FROM_TM_TIMEOUT, "TM timeout" }, - { MPI3MR_RESET_FROM_IOCTL_TIMEOUT, "IOCTL timeout" }, + { MPI3MR_RESET_FROM_APP_TIMEOUT, "application command timeout" }, { MPI3MR_RESET_FROM_MUR_FAILURE, "MUR failure" }, { MPI3MR_RESET_FROM_CTLR_CLEANUP, "timeout in controller cleanup" }, { MPI3MR_RESET_FROM_CIACTIV_FAULT, "component image activation fault" }, @@ -2813,6 +2815,10 @@ static int mpi3mr_alloc_reply_sense_bufs(struct mpi3mr_ioc *mrioc) if (!mrioc->init_cmds.reply) goto out_failed; + mrioc->bsg_cmds.reply = kzalloc(mrioc->reply_sz, GFP_KERNEL); + if (!mrioc->bsg_cmds.reply) + goto out_failed; + for (i = 0; i < MPI3MR_NUM_DEVRMCMD; i++) { mrioc->dev_rmhs_cmds[i].reply = kzalloc(mrioc->reply_sz, GFP_KERNEL); @@ -3948,6 +3954,8 @@ void mpi3mr_memset_buffers(struct mpi3mr_ioc *mrioc) if (mrioc->init_cmds.reply) { memset(mrioc->init_cmds.reply, 0, sizeof(*mrioc->init_cmds.reply)); + memset(mrioc->bsg_cmds.reply, 0, + sizeof(*mrioc->bsg_cmds.reply)); memset(mrioc->host_tm_cmds.reply, 0, sizeof(*mrioc->host_tm_cmds.reply)); for (i = 0; i < MPI3MR_NUM_DEVRMCMD; i++) @@ -4050,6 +4058,9 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc) kfree(mrioc->init_cmds.reply); mrioc->init_cmds.reply = NULL; + kfree(mrioc->bsg_cmds.reply); + mrioc->bsg_cmds.reply = NULL; + kfree(mrioc->host_tm_cmds.reply); mrioc->host_tm_cmds.reply = NULL; @@ -4235,6 +4246,8 @@ static void mpi3mr_flush_drv_cmds(struct mpi3mr_ioc *mrioc) cmdptr = &mrioc->init_cmds; mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); + cmdptr = &mrioc->bsg_cmds; + mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); cmdptr = &mrioc->host_tm_cmds; mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); @@ -4258,7 +4271,7 @@ static void mpi3mr_flush_drv_cmds(struct mpi3mr_ioc *mrioc) * This is an handler for recovering controller by issuing soft * reset are diag fault reset. This is a blocking function and * when one reset is executed if any other resets they will be - * blocked. All IOCTLs/IO will be blocked during the reset. If + * blocked. All BSG requests will be blocked during the reset. 
If * controller reset is successful then the controller will be reinitalized, otherwise the controller will be marked as not recoverable @@ -4305,6 +4318,7 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, mpi3mr_reset_rc_name(reset_reason)); mrioc->reset_in_progress = 1; + mrioc->stop_bsgs = 1; mrioc->prev_reset_result = -1; if ((!snapdump) && (reset_reason != MPI3MR_RESET_FROM_FAULT_WATCH) && @@ -4377,6 +4391,7 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, &mrioc->watchdog_work, msecs_to_jiffies(MPI3MR_WATCHDOG_INTERVAL)); spin_unlock_irqrestore(&mrioc->watchdog_lock, flags); + mrioc->stop_bsgs = 0; } else { mpi3mr_issue_reset(mrioc, MPI3_SYSIF_HOST_DIAG_RESET_ACTION_DIAG_FAULT, reset_reason); diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c index faf14a5f9123..a03e39083a42 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_os.c +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c @@ -3589,6 +3589,7 @@ static int mpi3mr_scan_finished(struct Scsi_Host *shost, mpi3mr_start_watchdog(mrioc); mrioc->is_driver_loading = 0; + mrioc->stop_bsgs = 0; return 1; } @@ -4259,6 +4260,7 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id) mutex_init(&mrioc->reset_mutex); mpi3mr_init_drv_cmd(&mrioc->init_cmds, MPI3MR_HOSTTAG_INITCMDS); mpi3mr_init_drv_cmd(&mrioc->host_tm_cmds, MPI3MR_HOSTTAG_BLK_TMS); + mpi3mr_init_drv_cmd(&mrioc->bsg_cmds, MPI3MR_HOSTTAG_BSG_CMDS); for (i = 0; i < MPI3MR_NUM_DEVRMCMD; i++) mpi3mr_init_drv_cmd(&mrioc->dev_rmhs_cmds[i], @@ -4271,6 +4273,7 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id) mrioc->logging_level = logging_level; mrioc->shost = shost; mrioc->pdev = pdev; + mrioc->stop_bsgs = 1; /* init shost parameters */ shost->max_cmd_len = MPI3MR_MAX_CDB_LENGTH;
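
For illustration, a minimal userspace sketch of the driver-command path
added above (not part of the patch; it assumes the request structures from
mpi3mr_app.h are visible to user space and that controller index 0 is
targeted). The packet travels in sg_io_v4.request, and the sub-handlers
return their data through the request (data-out) payload, which is where
mpi3mr_bsg_populate_adpinfo() copies struct mpi3mr_bsg_in_adpinfo:

#include <string.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>		/* SG_IO */
#include <linux/bsg.h>		/* struct sg_io_v4 */
/* mpi3mr_app.h request structures assumed to be available to user space */

static int mpi3mrctl_get_adpinfo(int fd, struct mpi3mr_bsg_in_adpinfo *adpinfo)
{
	struct mpi3mr_bsg_packet pkt;
	struct sg_io_v4 io;

	memset(&pkt, 0, sizeof(pkt));
	pkt.cmd_type = MPI3MR_DRV_CMD;
	pkt.cmd.drvrcmd.mrioc_id = 0;		/* controller index */
	pkt.cmd.drvrcmd.opcode = MPI3MR_DRVBSG_OPCODE_ADPINFO;

	memset(adpinfo, 0, sizeof(*adpinfo));
	memset(&io, 0, sizeof(io));
	io.guard = 'Q';
	io.protocol = BSG_PROTOCOL_SCSI;
	io.subprotocol = BSG_SUB_PROTOCOL_SCSI_TRANSPORT;
	io.request_len = sizeof(pkt);
	io.request = (unsigned long)&pkt;
	io.dout_xfer_len = sizeof(*adpinfo);	/* payload the driver writes into */
	io.dout_xferp = (unsigned long)adpinfo;
	io.timeout = MPI3MR_APP_DEFAULT_TIMEOUT * 1000;

	return ioctl(fd, SG_IO, &io);
}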
From patchwork Tue Mar 22 12:21:03 2022
X-Patchwork-Submitter: Sumit Saxena
X-Patchwork-Id: 553849
From: Sumit Saxena
To: linux-scsi@vger.kernel.org
Cc: martin.petersen@oracle.com, sathya.prakash@broadcom.com,
 kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com,
 sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena
Subject: [PATCH 3/7] mpi3mr: add support for MPT commands
Date: Tue, 22 Mar 2022 08:21:03 -0400
Message-Id: <20220322122107.8482-4-sumit.saxena@broadcom.com>
In-Reply-To: <20220322122107.8482-1-sumit.saxena@broadcom.com>
References: <20220322122107.8482-1-sumit.saxena@broadcom.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Certain management commands require firmware intervention; these commands
are termed MPT commands. This patch adds support for them.
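
For illustration only (not part of the patch; the buffer sizes are
placeholder parameters and the structures are assumed to be visible to
user space), an MPT command is described to the driver as an
mpi3mr_bsg_packet of type MPI3MR_MPT_CMD whose buffer entry list names
every user buffer, including the raw MPI request frame itself. Buffers of
"out" types are carved, in list order, from the bsg data-out payload and
"in" types from the data-in payload:

#include <stdint.h>
#include <stdlib.h>
/* mpi3mr_app.h request structures assumed to be available to user space */

static struct mpi3mr_bsg_packet *build_mpt_request(uint32_t mpi_req_len,
						   uint32_t data_in_len,
						   uint32_t reply_len)
{
	/* two extra entries extend the one-element buf_entry[] array,
	 * the same way the driver itself walks the list */
	size_t sz = sizeof(struct mpi3mr_bsg_packet) +
		    2 * sizeof(struct mpi3mr_buf_entry);
	struct mpi3mr_bsg_packet *pkt = calloc(1, sz);
	struct mpi3mr_buf_entry *be;

	if (!pkt)
		return NULL;

	pkt->cmd_type = MPI3MR_MPT_CMD;
	pkt->cmd.mptcmd.mrioc_id = 0;			/* controller index */
	pkt->cmd.mptcmd.timeout = MPI3MR_APP_DEFAULT_TIMEOUT;
	pkt->cmd.mptcmd.buf_entry_list.num_of_entries = 3;

	be = pkt->cmd.mptcmd.buf_entry_list.buf_entry;
	be[0].buf_type = MPI3MR_BSG_BUFTYPE_MPI_REQUEST; /* taken from data-out */
	be[0].buf_len = mpi_req_len;	/* multiple of 4, at most one admin frame */
	be[1].buf_type = MPI3MR_BSG_BUFTYPE_DATA_IN;	 /* filled by the firmware */
	be[1].buf_len = data_in_len;
	be[2].buf_type = MPI3MR_BSG_BUFTYPE_MPI_REPLY;	 /* MPI reply copied back */
	be[2].buf_len = reply_len;

	/*
	 * The packet travels in sg_io_v4.request.  The MPI request frame is
	 * placed at the start of the data-out payload; the data-in and reply
	 * buffers are taken, in list order, from the data-in payload.
	 */
	return pkt;
}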
Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/mpi3mr.h | 8 + drivers/scsi/mpi3mr/mpi3mr_app.c | 519 ++++++++++++++++++++++++++++- drivers/scsi/mpi3mr/mpi3mr_debug.h | 25 ++ drivers/scsi/mpi3mr/mpi3mr_os.c | 4 +- 4 files changed, 553 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index 958675d917ee..9dd87013767d 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -544,6 +544,7 @@ struct mpi3mr_sdev_priv_data { * @ioc_status: IOC status from the firmware * @ioc_loginfo:IOC log info from the firmware * @is_waiting: Is the command issued in block mode + * @is_sense: Is Sense data present * @retry_count: Retry count for retriable commands * @host_tag: Host tag used by the command * @callback: Callback for non blocking commands @@ -559,6 +560,7 @@ struct mpi3mr_drv_cmd { u16 ioc_status; u32 ioc_loginfo; u8 is_waiting; + u8 is_sense; u8 retry_count; u16 host_tag; @@ -980,5 +982,11 @@ int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc, int mpi3mr_blk_mq_poll(struct Scsi_Host *shost, unsigned int queue_num); void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc); void mpi3mr_bsg_exit(struct mpi3mr_ioc *mrioc); +int mpi3mr_issue_tm(struct mpi3mr_ioc *mrioc, u8 tm_type, + u16 handle, uint lun, u16 htag, ulong timeout, + struct mpi3mr_drv_cmd *drv_cmd, + u8 *resp_code, struct scsi_cmnd *scmd); +struct mpi3mr_tgt_dev *mpi3mr_get_tgtdev_by_handle( + struct mpi3mr_ioc *mrioc, u16 handle); #endif /*MPI3MR_H_INCLUDED*/ diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c index 8efb33040626..cd354cacf3e0 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.c +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -200,7 +200,6 @@ static long mpi3mr_get_all_tgt_info(struct mpi3mr_ioc *mrioc, kfree(alltgt_info); return rval; } - /** * mpi3mr_get_change_count - Get topology change count * @mrioc: Adapter instance reference @@ -390,6 +389,521 @@ static long mpi3mr_bsg_process_drv_cmds(struct bsg_job *job) return rval; } +/** + * mpi3mr_bsg_build_sgl - SGL construction for MPI commands + * @mpi_req: MPI request + * @sgl_offset: offset to start sgl in the MPI request + * @drv_bufs: DMA address of the buffers to be placed in sgl + * @bufcnt: Number of DMA buffers + * @is_rmc: Does the buffer list has management command buffer + * @is_rmr: Does the buffer list has management response buffer + * @num_datasges: Number of data buffers in the list + * + * This function places the DMA address of the given buffers in + * proper format as SGEs in the given MPI request. 
+ * + * Return: Nothing + */ +static void mpi3mr_bsg_build_sgl(u8 *mpi_req, uint32_t sgl_offset, + struct mpi3mr_buf_map *drv_bufs, u8 bufcnt, u8 is_rmc, + u8 is_rmr, u8 num_datasges) +{ + u8 *sgl = (mpi_req + sgl_offset), count = 0; + struct mpi3_mgmt_passthrough_request *rmgmt_req = + (struct mpi3_mgmt_passthrough_request *)mpi_req; + struct mpi3mr_buf_map *drv_buf_iter = drv_bufs; + u8 sgl_flags, sgl_flags_last; + + sgl_flags = MPI3_SGE_FLAGS_ELEMENT_TYPE_SIMPLE | + MPI3_SGE_FLAGS_DLAS_SYSTEM | MPI3_SGE_FLAGS_END_OF_BUFFER; + sgl_flags_last = sgl_flags | MPI3_SGE_FLAGS_END_OF_LIST; + + if (is_rmc) { + mpi3mr_add_sg_single(&rmgmt_req->command_sgl, + sgl_flags_last, drv_buf_iter->kern_buf_len, + drv_buf_iter->kern_buf_dma); + sgl = (u8 *)drv_buf_iter->kern_buf + drv_buf_iter->bsg_buf_len; + drv_buf_iter++; + count++; + if (is_rmr) { + mpi3mr_add_sg_single(&rmgmt_req->response_sgl, + sgl_flags_last, drv_buf_iter->kern_buf_len, + drv_buf_iter->kern_buf_dma); + drv_buf_iter++; + count++; + } else + mpi3mr_build_zero_len_sge( + &rmgmt_req->response_sgl); + } + if (!num_datasges) { + mpi3mr_build_zero_len_sge(sgl); + return; + } + for (; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->data_dir == DMA_NONE) + continue; + if (num_datasges == 1 || !is_rmc) + mpi3mr_add_sg_single(sgl, sgl_flags_last, + drv_buf_iter->kern_buf_len, drv_buf_iter->kern_buf_dma); + else + mpi3mr_add_sg_single(sgl, sgl_flags, + drv_buf_iter->kern_buf_len, drv_buf_iter->kern_buf_dma); + sgl += sizeof(struct mpi3_sge_common); + num_datasges--; + } +} + +/** + * mpi3mr_bsg_process_mpt_cmds - MPI Pass through BSG handler + * @job: BSG job reference + * + * This function is the top level handler for MPI Pass through + * command, this does basic validation of the input data buffers, + * identifies the given buffer types and MPI command, allocates + * DMAable memory for user given buffers, construstcs SGL + * properly and passes the command to the firmware. + * + * Once the MPI command is completed the driver copies the data + * if any and reply, sense information to user provided buffers. + * If the command is timed out then issues controller reset + * prior to returning. 
+ * + * Return: 0 on success and proper error codes on failure + */ + +static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job) +{ + long rval = -EINVAL; + + struct mpi3mr_ioc *mrioc = NULL; + u8 *mpi_req = NULL, *sense_buff_k = NULL; + u8 mpi_msg_size = 0; + struct mpi3mr_bsg_packet *bsg_req = NULL; + struct mpi3mr_bsg_mptcmd *karg; + struct mpi3mr_buf_entry *buf_entries = NULL; + struct mpi3mr_buf_map *drv_bufs = NULL, *drv_buf_iter = NULL; + u8 count, bufcnt = 0, is_rmcb = 0, is_rmrb = 0, din_cnt = 0, dout_cnt = 0; + u8 invalid_be = 0, erb_offset = 0xFF, mpirep_offset = 0xFF, sg_entries = 0; + u8 block_io = 0, resp_code = 0; + struct mpi3_request_header *mpi_header = NULL; + struct mpi3_status_reply_descriptor *status_desc; + struct mpi3_scsi_task_mgmt_request *tm_req; + u32 erbsz = MPI3MR_SENSE_BUF_SZ, tmplen; + u16 dev_handle; + struct mpi3mr_tgt_dev *tgtdev; + struct mpi3mr_stgt_priv_data *stgt_priv = NULL; + struct mpi3mr_bsg_in_reply_buf *bsg_reply_buf = NULL; + u32 din_size = 0, dout_size = 0; + u8 *din_buf = NULL, *dout_buf = NULL; + u8 *sgl_iter = NULL, *sgl_din_iter = NULL, *sgl_dout_iter = NULL; + + bsg_req = job->request; + karg = (struct mpi3mr_bsg_mptcmd *)&bsg_req->cmd.mptcmd; + + mpi3mr_bsg_verify_adapter(karg->mrioc_id, &mrioc); + if (!mrioc) + return -ENODEV; + + if (karg->timeout < MPI3MR_APP_DEFAULT_TIMEOUT) + karg->timeout = MPI3MR_APP_DEFAULT_TIMEOUT; + + mpi_req = kzalloc(MPI3MR_ADMIN_REQ_FRAME_SZ, GFP_KERNEL); + if (!mpi_req) + return -ENOMEM; + mpi_header = (struct mpi3_request_header *)mpi_req; + + bufcnt = karg->buf_entry_list.num_of_entries; + drv_bufs = kzalloc((sizeof(*drv_bufs) * bufcnt), GFP_KERNEL); + if (!drv_bufs) { + rval = -ENOMEM; + goto out; + } + + dout_buf = kzalloc(job->request_payload.payload_len, + GFP_KERNEL); + if (!dout_buf) { + rval = -ENOMEM; + goto out; + } + + din_buf = kzalloc(job->reply_payload.payload_len, + GFP_KERNEL); + if (!din_buf) { + rval = -ENOMEM; + goto out; + } + + sg_copy_to_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + dout_buf, job->request_payload.payload_len); + + buf_entries = karg->buf_entry_list.buf_entry; + sgl_din_iter = din_buf; + sgl_dout_iter = dout_buf; + drv_buf_iter = drv_bufs; + + for (count = 0; count < bufcnt; count++, buf_entries++, drv_buf_iter++) { + + if (sgl_dout_iter > (dout_buf + job->request_payload.payload_len)) { + dprint_bsg_err(mrioc, "%s: data_out buffer length mismatch\n", + __func__); + rval = -EINVAL; + goto out; + } + if (sgl_din_iter > (din_buf + job->reply_payload.payload_len)) { + dprint_bsg_err(mrioc, "%s: data_in buffer length mismatch\n", + __func__); + rval = -EINVAL; + goto out; + } + + switch (buf_entries->buf_type) { + case MPI3MR_BSG_BUFTYPE_RAIDMGMT_CMD: + sgl_iter = sgl_dout_iter; + sgl_dout_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_TO_DEVICE; + is_rmcb = 1; + if (count != 0) + invalid_be = 1; + break; + case MPI3MR_BSG_BUFTYPE_RAIDMGMT_RESP: + sgl_iter = sgl_din_iter; + sgl_din_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_FROM_DEVICE; + is_rmrb = 1; + if (count != 1 || !is_rmcb) + invalid_be = 1; + break; + case MPI3MR_BSG_BUFTYPE_DATA_IN: + sgl_iter = sgl_din_iter; + sgl_din_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_FROM_DEVICE; + din_cnt++; + din_size += drv_buf_iter->bsg_buf_len; + if ((din_cnt > 1) && !is_rmcb) + invalid_be = 1; + break; + case MPI3MR_BSG_BUFTYPE_DATA_OUT: + sgl_iter = sgl_dout_iter; + sgl_dout_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_TO_DEVICE; + 
dout_cnt++; + dout_size += drv_buf_iter->bsg_buf_len; + if ((dout_cnt > 1) && !is_rmcb) + invalid_be = 1; + break; + case MPI3MR_BSG_BUFTYPE_MPI_REPLY: + sgl_iter = sgl_din_iter; + sgl_din_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_NONE; + mpirep_offset = count; + break; + case MPI3MR_BSG_BUFTYPE_ERR_RESPONSE: + sgl_iter = sgl_din_iter; + sgl_din_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_NONE; + erb_offset = count; + break; + case MPI3MR_BSG_BUFTYPE_MPI_REQUEST: + sgl_iter = sgl_dout_iter; + sgl_dout_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_NONE; + mpi_msg_size = buf_entries->buf_len; + if ((!mpi_msg_size || (mpi_msg_size % 4)) || + (mpi_msg_size > MPI3MR_ADMIN_REQ_FRAME_SZ)) { + dprint_bsg_err(mrioc, "%s: invalid MPI message size\n", + __func__); + rval = -EINVAL; + goto out; + } + memcpy(mpi_req, sgl_iter, buf_entries->buf_len); + break; + default: + invalid_be = 1; + break; + } + if (invalid_be) { + dprint_bsg_err(mrioc, "%s: invalid buffer entries passed\n", + __func__); + rval = -EINVAL; + goto out; + } + + drv_buf_iter->bsg_buf = sgl_iter; + drv_buf_iter->bsg_buf_len = buf_entries->buf_len; + + } + if (!is_rmcb && (dout_cnt || din_cnt)) { + sg_entries = dout_cnt + din_cnt; + if (((mpi_msg_size) + (sg_entries * + sizeof(struct mpi3_sge_common))) > MPI3MR_ADMIN_REQ_FRAME_SZ) { + dprint_bsg_err(mrioc, + "%s:%d: invalid message size passed\n", + __func__, __LINE__); + rval = -EINVAL; + goto out; + } + } + if (din_size > MPI3MR_MAX_APP_XFER_SIZE) { + dprint_bsg_err(mrioc, + "%s:%d: invalid data transfer size passed for function 0x%x din_size=%d\n", + __func__, __LINE__, mpi_header->function, din_size); + rval = -EINVAL; + goto out; + } + if (dout_size > MPI3MR_MAX_APP_XFER_SIZE) { + dprint_bsg_err(mrioc, + "%s:%d: invalid data transfer size passed for function 0x%x dout_size = %d\n", + __func__, __LINE__, mpi_header->function, dout_size); + rval = -EINVAL; + goto out; + } + + drv_buf_iter = drv_bufs; + for (count = 0; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->data_dir == DMA_NONE) + continue; + + drv_buf_iter->kern_buf_len = drv_buf_iter->bsg_buf_len; + if (is_rmcb && !count) + drv_buf_iter->kern_buf_len += ((dout_cnt + din_cnt) * + sizeof(struct mpi3_sge_common)); + + if (!drv_buf_iter->kern_buf_len) + continue; + + drv_buf_iter->kern_buf = dma_alloc_coherent(&mrioc->pdev->dev, + drv_buf_iter->kern_buf_len, &drv_buf_iter->kern_buf_dma, + GFP_KERNEL); + if (!drv_buf_iter->kern_buf) { + rval = -ENOMEM; + goto out; + } + if (drv_buf_iter->data_dir == DMA_TO_DEVICE) { + tmplen = min(drv_buf_iter->kern_buf_len, + drv_buf_iter->bsg_buf_len); + memcpy(drv_buf_iter->kern_buf, drv_buf_iter->bsg_buf, tmplen); + } + } + + if (erb_offset != 0xFF) { + sense_buff_k = kzalloc(erbsz, GFP_KERNEL); + if (!sense_buff_k) { + rval = -ENOMEM; + goto out; + } + } + + if (mutex_lock_interruptible(&mrioc->bsg_cmds.mutex)) { + rval = -ERESTARTSYS; + goto out; + } + if (mrioc->bsg_cmds.state & MPI3MR_CMD_PENDING) { + rval = -EAGAIN; + dprint_bsg_err(mrioc, "%s: command is in use\n", __func__); + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + if (mrioc->unrecoverable) { + dprint_bsg_err(mrioc, "%s: unrecoverable controller\n", + __func__); + rval = -EFAULT; + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + if (mrioc->reset_in_progress) { + dprint_bsg_err(mrioc, "%s: reset in progress\n", __func__); + rval = -EAGAIN; + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + if (mrioc->stop_bsgs) { + dprint_bsg_err(mrioc, 
"%s: bsgs are blocked\n", __func__); + rval = -EAGAIN; + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + + if (mpi_header->function != MPI3_FUNCTION_NVME_ENCAPSULATED) { + mpi3mr_bsg_build_sgl(mpi_req, (mpi_msg_size), + drv_bufs, bufcnt, is_rmcb, is_rmrb, + (dout_cnt + din_cnt)); + } + + if (mpi_header->function == MPI3_FUNCTION_SCSI_TASK_MGMT) { + tm_req = (struct mpi3_scsi_task_mgmt_request *)mpi_req; + if (tm_req->task_type != + MPI3_SCSITASKMGMT_TASKTYPE_ABORT_TASK) { + dev_handle = tm_req->dev_handle; + block_io = 1; + } + } + if (block_io) { + tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle); + if (tgtdev && tgtdev->starget && tgtdev->starget->hostdata) { + stgt_priv = (struct mpi3mr_stgt_priv_data *) + tgtdev->starget->hostdata; + atomic_inc(&stgt_priv->block_io); + mpi3mr_tgtdev_put(tgtdev); + } + } + + mrioc->bsg_cmds.state = MPI3MR_CMD_PENDING; + mrioc->bsg_cmds.is_waiting = 1; + mrioc->bsg_cmds.callback = NULL; + mrioc->bsg_cmds.is_sense = 0; + mrioc->bsg_cmds.sensebuf = sense_buff_k; + memset(mrioc->bsg_cmds.reply, 0, mrioc->reply_sz); + mpi_header->host_tag = cpu_to_le16(MPI3MR_HOSTTAG_BSG_CMDS); + if (mrioc->logging_level & MPI3_DEBUG_BSG_INFO) { + dprint_bsg_info(mrioc, + "%s: posting bsg request to the controller\n", __func__); + dprint_dump(mpi_req, MPI3MR_ADMIN_REQ_FRAME_SZ, + "bsg_mpi3_req"); + if (mpi_header->function == MPI3_FUNCTION_MGMT_PASSTHROUGH) { + drv_buf_iter = &drv_bufs[0]; + dprint_dump(drv_buf_iter->kern_buf, + drv_buf_iter->kern_buf_len, "mpi3_mgmt_req"); + } + } + + init_completion(&mrioc->bsg_cmds.done); + rval = mpi3mr_admin_request_post(mrioc, mpi_req, + MPI3MR_ADMIN_REQ_FRAME_SZ, 0); + + + if (rval) { + mrioc->bsg_cmds.is_waiting = 0; + dprint_bsg_err(mrioc, + "%s: posting bsg request is failed\n", __func__); + rval = -EAGAIN; + goto out_unlock; + } + wait_for_completion_timeout(&mrioc->bsg_cmds.done, + (karg->timeout * HZ)); + if (block_io && stgt_priv) + atomic_dec(&stgt_priv->block_io); + if (!(mrioc->bsg_cmds.state & MPI3MR_CMD_COMPLETE)) { + mrioc->bsg_cmds.is_waiting = 0; + rval = -EAGAIN; + if (mrioc->bsg_cmds.state & MPI3MR_CMD_RESET) + goto out_unlock; + dprint_bsg_err(mrioc, + "%s: bsg request timedout after %d seconds\n", __func__, + karg->timeout); + if (mrioc->logging_level & MPI3_DEBUG_BSG_ERROR) { + dprint_dump(mpi_req, MPI3MR_ADMIN_REQ_FRAME_SZ, + "bsg_mpi3_req"); + if (mpi_header->function == + MPI3_FUNCTION_MGMT_PASSTHROUGH) { + drv_buf_iter = &drv_bufs[0]; + dprint_dump(drv_buf_iter->kern_buf, + drv_buf_iter->kern_buf_len, "mpi3_mgmt_req"); + } + } + + if (mpi_header->function == MPI3_FUNCTION_SCSI_IO) + mpi3mr_issue_tm(mrioc, + MPI3_SCSITASKMGMT_TASKTYPE_TARGET_RESET, + mpi_header->function_dependent, 0, + MPI3MR_HOSTTAG_BLK_TMS, MPI3MR_RESETTM_TIMEOUT, + &mrioc->host_tm_cmds, &resp_code, NULL); + if (!(mrioc->bsg_cmds.state & MPI3MR_CMD_COMPLETE) && + !(mrioc->bsg_cmds.state & MPI3MR_CMD_RESET)) + mpi3mr_soft_reset_handler(mrioc, + MPI3MR_RESET_FROM_APP_TIMEOUT, 1); + goto out_unlock; + } + dprint_bsg_info(mrioc, "%s: bsg request is completed\n", __func__); + + if ((mrioc->bsg_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK) + != MPI3_IOCSTATUS_SUCCESS) { + dprint_bsg_info(mrioc, + "%s: command failed, ioc_status(0x%04x) log_info(0x%08x)\n", + __func__, + (mrioc->bsg_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK), + mrioc->bsg_cmds.ioc_loginfo); + } + + if ((mpirep_offset != 0xFF) && + drv_bufs[mpirep_offset].bsg_buf_len) { + drv_buf_iter = &drv_bufs[mpirep_offset]; + drv_buf_iter->kern_buf_len = (sizeof(*bsg_reply_buf) 
- 1 + + mrioc->reply_sz); + bsg_reply_buf = kzalloc(drv_buf_iter->kern_buf_len, GFP_KERNEL); + + if (!bsg_reply_buf) { + rval = -ENOMEM; + goto out_unlock; + } + if (mrioc->bsg_cmds.state & MPI3MR_CMD_REPLY_VALID) { + bsg_reply_buf->mpi_reply_type = + MPI3MR_BSG_MPI_REPLY_BUFTYPE_ADDRESS; + memcpy(bsg_reply_buf->reply_buf, + mrioc->bsg_cmds.reply, mrioc->reply_sz); + } else { + bsg_reply_buf->mpi_reply_type = + MPI3MR_BSG_MPI_REPLY_BUFTYPE_STATUS; + status_desc = (struct mpi3_status_reply_descriptor *) + bsg_reply_buf->reply_buf; + status_desc->ioc_status = mrioc->bsg_cmds.ioc_status; + status_desc->ioc_log_info = mrioc->bsg_cmds.ioc_loginfo; + } + tmplen = min(drv_buf_iter->kern_buf_len, + drv_buf_iter->bsg_buf_len); + memcpy(drv_buf_iter->bsg_buf, bsg_reply_buf, tmplen); + } + + if (erb_offset != 0xFF && mrioc->bsg_cmds.sensebuf && + mrioc->bsg_cmds.is_sense) { + drv_buf_iter = &drv_bufs[erb_offset]; + tmplen = min(erbsz, drv_buf_iter->bsg_buf_len); + memcpy(drv_buf_iter->bsg_buf, sense_buff_k, tmplen); + } + + drv_buf_iter = drv_bufs; + for (count = 0; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->data_dir == DMA_NONE) + continue; + if (drv_buf_iter->data_dir == DMA_FROM_DEVICE) { + tmplen = min(drv_buf_iter->kern_buf_len, + drv_buf_iter->bsg_buf_len); + memcpy(drv_buf_iter->bsg_buf, + drv_buf_iter->kern_buf, tmplen); + } + } + +out_unlock: + if (din_buf) { + job->reply_payload_rcv_len = + sg_copy_from_buffer(job->reply_payload.sg_list, + job->reply_payload.sg_cnt, + din_buf, job->reply_payload.payload_len); + } + mrioc->bsg_cmds.is_sense = 0; + mrioc->bsg_cmds.sensebuf = NULL; + mrioc->bsg_cmds.state = MPI3MR_CMD_NOTUSED; + mutex_unlock(&mrioc->bsg_cmds.mutex); +out: + kfree(sense_buff_k); + kfree(dout_buf); + kfree(din_buf); + kfree(mpi_req); + if (drv_bufs) { + drv_buf_iter = drv_bufs; + for (count = 0; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->kern_buf && drv_buf_iter->kern_buf_dma) + dma_free_coherent(&mrioc->pdev->dev, + drv_buf_iter->kern_buf_len, + drv_buf_iter->kern_buf, + drv_buf_iter->kern_buf_dma); + } + kfree(drv_bufs); + } + kfree(bsg_reply_buf); + return rval; +} + /** * mpi3mr_bsg_request - bsg request entry point * @job: BSG job reference @@ -407,6 +921,9 @@ int mpi3mr_bsg_request(struct bsg_job *job) case MPI3MR_DRV_CMD: rval = mpi3mr_bsg_process_drv_cmds(job); break; + case MPI3MR_MPT_CMD: + rval = mpi3mr_bsg_process_mpt_cmds(job); + break; default: pr_err("%s: unsupported BSG command(0x%08x)\n", MPI3MR_DRIVER_NAME, bsg_req->cmd_type); diff --git a/drivers/scsi/mpi3mr/mpi3mr_debug.h b/drivers/scsi/mpi3mr/mpi3mr_debug.h index 65bfac72948c..2464c400a5a4 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_debug.h +++ b/drivers/scsi/mpi3mr/mpi3mr_debug.h @@ -124,6 +124,31 @@ #endif /* MPT3SAS_DEBUG_H_INCLUDED */ +/** + * dprint_dump - print contents of a memory buffer + * @req: Pointer to a memory buffer + * @sz: Memory buffer size + * @namestr: Name String to identify the buffer type + */ +static inline void +dprint_dump(void *req, int sz, const char *name_string) +{ + int i; + __le32 *mfp = (__le32 *)req; + + sz = sz/4; + if (name_string) + pr_info("%s:\n\t", name_string); + else + pr_info("request:\n\t"); + for (i = 0; i < sz; i++) { + if (i && ((i % 8) == 0)) + pr_info("\n\t"); + pr_info("%08x ", le32_to_cpu(mfp[i])); + } + pr_info("\n"); +} + /** * dprint_dump_req - print message frame contents * @req: pointer to message frame diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c index a03e39083a42..450574fc1fec 
100644 --- a/drivers/scsi/mpi3mr/mpi3mr_os.c +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c @@ -634,7 +634,7 @@ static struct mpi3mr_tgt_dev *__mpi3mr_get_tgtdev_by_handle( * * Return: Target device reference. */ -static struct mpi3mr_tgt_dev *mpi3mr_get_tgtdev_by_handle( +struct mpi3mr_tgt_dev *mpi3mr_get_tgtdev_by_handle( struct mpi3mr_ioc *mrioc, u16 handle) { struct mpi3mr_tgt_dev *tgtdev; @@ -2996,7 +2996,7 @@ inline void mpi3mr_poll_pend_io_completions(struct mpi3mr_ioc *mrioc) * * Return: 0 on success, non-zero on errors */ -static int mpi3mr_issue_tm(struct mpi3mr_ioc *mrioc, u8 tm_type, +int mpi3mr_issue_tm(struct mpi3mr_ioc *mrioc, u8 tm_type, u16 handle, uint lun, u16 htag, ulong timeout, struct mpi3mr_drv_cmd *drv_cmd, u8 *resp_code, struct scsi_cmnd *scmd) From patchwork Tue Mar 22 12:21:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 553697 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5580AC433EF for ; Tue, 22 Mar 2022 12:22:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233490AbiCVMXn (ORCPT ); Tue, 22 Mar 2022 08:23:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33418 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234502AbiCVMXa (ORCPT ); Tue, 22 Mar 2022 08:23:30 -0400 Received: from mail-pf1-x444.google.com (mail-pf1-x444.google.com [IPv6:2607:f8b0:4864:20::444]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7BC1D84EFE for ; Tue, 22 Mar 2022 05:22:02 -0700 (PDT) Received: by mail-pf1-x444.google.com with SMTP id s8so17951640pfk.12 for ; Tue, 22 Mar 2022 05:22:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version; bh=qGmvy7qiPcsPfrVLYtjGNocjAziyEJmxgCnfDcpYWbs=; b=XtUsk9TFM1DFlYwDV/v+vfs8dfJRliYVjM1+67XXlICnVm2tE1OpO/w9Y/tEyZOl7e p1FsTLnMs6g54mBe7Ls8obxvfx/7hYE46OcRTg73/vc3gT1MsvwyfzYM12la/fNa15N0 QaplPgqkjlaAdWnsBhmkv2/O6M8OD6+NMk4cQ= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version; bh=qGmvy7qiPcsPfrVLYtjGNocjAziyEJmxgCnfDcpYWbs=; b=iMpISIGBysUi2WrTV5HiaChzhLystYHE/Qhe55DNGgt7aVkx1eyHcc2zao41t5DyT2 asWt3rgJr6nFH0C0yFGTHU1CEaslnHL+m8LZeA2RlCFuMBRvZsYzIDIo7IzmZHS2F5Oa kJQiae8aZKybRUf/zao6tYKS+ZBskXSm7OLYn8Gce316+juKlOXTvKxg41yLxZyd/FtX bvKHbpVdrkXXBYWUzkJpnko7VpandPLU1sxw6JWC80v83WNcYyNB2Ay0PLh2ugoIxnNo X9KQzBtJenUFgvyecUtpRan60saFpl8r20hy9YkOub0rl5JJYlipbJf275M+0MFTaJga 3VJw== X-Gm-Message-State: AOAM5305QPyM0LAuV+aeJqSyFFdzXJBn5ZWHbYQiSjod+KeXfIGTcxMe 8vnVz9b3YzAmvkzjeGUl9cR51xFRy1/TmO3riTX6J3n3EIt4RAauFKAtoXXaL1II5gvoG1T88gz Wfs2gOnKZYIbz9PxFslTgABvvqX/mlBaeRfnlLte8u1OzjpqJzO2VsoYCzF7ttEH+3Kbv7sYXQf 51+jXKVNKJiQ== X-Google-Smtp-Source: ABdhPJwy7Ip82zrLDt4OZ2R1X5dIlj//mxy9S0vwkscgvxX4/kprHt9sp7pccy3u+hpbT4CVIQZPQQ== X-Received: by 2002:a63:d216:0:b0:382:24ef:147a with SMTP id a22-20020a63d216000000b0038224ef147amr18197832pgg.413.1647951721381; Tue, 22 Mar 2022 05:22:01 -0700 (PDT) Received: from dhcp-10-123-20-15.dhcp.broadcom.net ([192.19.234.250]) by smtp.gmail.com with ESMTPSA id 
d11-20020a056a0010cb00b004e1b76b09c0sm23225259pfu.74.2022.03.22.05.21.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Mar 2022 05:22:00 -0700 (PDT) From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH 4/7] mpi3mr: add support for PEL commands Date: Tue, 22 Mar 2022 08:21:04 -0400 Message-Id: <20220322122107.8482-5-sumit.saxena@broadcom.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220322122107.8482-1-sumit.saxena@broadcom.com> References: <20220322122107.8482-1-sumit.saxena@broadcom.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch includes driver support for the management applications to enable the persistent event log(PEL) notification. Upon receipt of events, driver will increment sysfs variable named as event_counter. Application would poll for event_counter value and any change in it would signal the applications about events. Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/mpi3mr.h | 36 +++- drivers/scsi/mpi3mr/mpi3mr_app.c | 205 ++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_fw.c | 310 +++++++++++++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_os.c | 42 +++++ 4 files changed, 592 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index 9dd87013767d..10c074c1eea6 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -52,6 +52,7 @@ extern spinlock_t mrioc_list_lock; extern struct list_head mrioc_list; extern int prot_mask; +extern atomic64_t event_counter; #define MPI3MR_DRIVER_VERSION "8.0.0.68.0" #define MPI3MR_DRIVER_RELDATE "10-February-2022" @@ -90,6 +91,8 @@ extern int prot_mask; #define MPI3MR_HOSTTAG_INVALID 0xFFFF #define MPI3MR_HOSTTAG_INITCMDS 1 #define MPI3MR_HOSTTAG_BSG_CMDS 2 +#define MPI3MR_HOSTTAG_PEL_ABORT 3 +#define MPI3MR_HOSTTAG_PEL_WAIT 4 #define MPI3MR_HOSTTAG_BLK_TMS 5 #define MPI3MR_NUM_DEVRMCMD 16 @@ -151,6 +154,7 @@ extern int prot_mask; /* Command retry count definitions */ #define MPI3MR_DEV_RMHS_RETRY_COUNT 3 +#define MPI3MR_PEL_RETRY_COUNT 3 /* Default target device queue depth */ #define MPI3MR_DEFAULT_SDEV_QD 32 @@ -714,6 +718,16 @@ struct scmd_priv { * @current_event: Firmware event currently in process * @driver_info: Driver, Kernel, OS information to firmware * @change_count: Topology change count + * @pel_enabled: Persistent Event Log(PEL) enabled or not + * @pel_abort_requested: PEL abort is requested or not + * @pel_class: PEL Class identifier + * @pel_locale: PEL Locale identifier + * @pel_cmds: Command tracker for PEL wait command + * @pel_abort_cmd: Command tracker for PEL abort command + * @pel_newest_seqnum: Newest PEL sequenece number + * @pel_seqnum_virt: PEL sequence number virtual address + * @pel_seqnum_dma: PEL sequence number DMA address + * @pel_seqnum_sz: PEL sequenece number size * @op_reply_q_offset: Operational reply queue offset with MSIx * @default_qcount: Total Default queues * @active_poll_qcount: Currently active poll queue count @@ -860,8 +874,20 @@ struct mpi3mr_ioc { struct mpi3mr_fwevt *current_event; struct mpi3_driver_info_layout driver_info; u16 change_count; - u16 op_reply_q_offset; + u8 pel_enabled; + u8 pel_abort_requested; + u8 pel_class; + u16 pel_locale; + struct mpi3mr_drv_cmd pel_cmds; + struct mpi3mr_drv_cmd pel_abort_cmd; + + u32 
pel_newest_seqnum; + void *pel_seqnum_virt; + dma_addr_t pel_seqnum_dma; + u32 pel_seqnum_sz; + + u16 op_reply_q_offset; u16 default_qcount; u16 active_poll_qcount; u16 requested_poll_qcount; @@ -884,6 +910,7 @@ struct mpi3mr_ioc { * @send_ack: Event acknowledgment required or not * @process_evt: Bottomhalf processing required or not * @evt_ctx: Event context to send in Ack + * @event_data_size: size of the event data in bytes * @pending_at_sml: waiting for device add/remove API to complete * @discard: discard this event * @ref_count: kref count @@ -897,6 +924,7 @@ struct mpi3mr_fwevt { bool send_ack; bool process_evt; u32 evt_ctx; + u16 event_data_size; bool pending_at_sml; bool discard; struct kref ref_count; @@ -988,5 +1016,11 @@ int mpi3mr_issue_tm(struct mpi3mr_ioc *mrioc, u8 tm_type, u8 *resp_code, struct scsi_cmnd *scmd); struct mpi3mr_tgt_dev *mpi3mr_get_tgtdev_by_handle( struct mpi3mr_ioc *mrioc, u16 handle); +void mpi3mr_pel_get_seqnum_complete(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd); +int mpi3mr_pel_get_seqnum_post(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd); +void mpi3mr_app_save_logdata(struct mpi3mr_ioc *mrioc, char *event_data, + u16 event_data_size); #endif /*MPI3MR_H_INCLUDED*/ diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c index cd354cacf3e0..aa95bff5412a 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.c +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -11,6 +11,95 @@ #include "mpi3mr_app.h" #include +/** + * mpi3mr_bsg_pel_abort - sends PEL abort request + * @mrioc: Adapter instance reference + * + * This function sends PEL abort request to the firmware through + * admin request queue. + * + * Return: 0 on success, -1 on failure + */ +static int mpi3mr_bsg_pel_abort(struct mpi3mr_ioc *mrioc) +{ + struct mpi3_pel_req_action_abort pel_abort_req; + struct mpi3_pel_reply *pel_reply; + int retval = 0; + u16 pe_log_status; + + if (mrioc->reset_in_progress) { + dprint_bsg_err(mrioc, "%s: reset in progress\n", __func__); + return -1; + } + if (mrioc->stop_bsgs) { + dprint_bsg_err(mrioc, "%s: bsgs are blocked\n", __func__); + return -1; + } + + memset(&pel_abort_req, 0, sizeof(pel_abort_req)); + mutex_lock(&mrioc->pel_abort_cmd.mutex); + if (mrioc->pel_abort_cmd.state & MPI3MR_CMD_PENDING) { + dprint_bsg_err(mrioc, "%s: command is in use\n", __func__); + mutex_unlock(&mrioc->pel_abort_cmd.mutex); + return -1; + } + mrioc->pel_abort_cmd.state = MPI3MR_CMD_PENDING; + mrioc->pel_abort_cmd.is_waiting = 1; + mrioc->pel_abort_cmd.callback = NULL; + pel_abort_req.host_tag = cpu_to_le16(MPI3MR_HOSTTAG_PEL_ABORT); + pel_abort_req.function = MPI3_FUNCTION_PERSISTENT_EVENT_LOG; + pel_abort_req.action = MPI3_PEL_ACTION_ABORT; + pel_abort_req.abort_host_tag = cpu_to_le16(MPI3MR_HOSTTAG_PEL_WAIT); + + mrioc->pel_abort_requested = 1; + init_completion(&mrioc->pel_abort_cmd.done); + retval = mpi3mr_admin_request_post(mrioc, &pel_abort_req, + sizeof(pel_abort_req), 0); + if (retval) { + retval = -1; + dprint_bsg_err(mrioc, "%s: admin request post failed\n", + __func__); + mrioc->pel_abort_requested = 0; + goto out_unlock; + } + + wait_for_completion_timeout(&mrioc->pel_abort_cmd.done, + (MPI3MR_INTADMCMD_TIMEOUT * HZ)); + if (!(mrioc->pel_abort_cmd.state & MPI3MR_CMD_COMPLETE)) { + mrioc->pel_abort_cmd.is_waiting = 0; + dprint_bsg_err(mrioc, "%s: command timedout\n", __func__); + if (!(mrioc->pel_abort_cmd.state & MPI3MR_CMD_RESET)) + mpi3mr_soft_reset_handler(mrioc, + MPI3MR_RESET_FROM_PELABORT_TIMEOUT, 1); + retval = -1; + goto 
out_unlock; + } + if ((mrioc->pel_abort_cmd.ioc_status & MPI3_IOCSTATUS_STATUS_MASK) + != MPI3_IOCSTATUS_SUCCESS) { + dprint_bsg_err(mrioc, + "%s: command failed, ioc_status(0x%04x) log_info(0x%08x)\n", + __func__, (mrioc->pel_abort_cmd.ioc_status & + MPI3_IOCSTATUS_STATUS_MASK), + mrioc->pel_abort_cmd.ioc_loginfo); + retval = -1; + goto out_unlock; + } + if (mrioc->pel_abort_cmd.state & MPI3MR_CMD_REPLY_VALID) { + pel_reply = (struct mpi3_pel_reply *)mrioc->pel_abort_cmd.reply; + pe_log_status = le16_to_cpu(pel_reply->pe_log_status); + if (pe_log_status != MPI3_PEL_STATUS_SUCCESS) { + dprint_bsg_err(mrioc, + "%s: command failed, pel_status(0x%04x)\n", + __func__, pe_log_status); + retval = -1; + } + } + +out_unlock: + mrioc->pel_abort_cmd.state = MPI3MR_CMD_NOTUSED; + mutex_unlock(&mrioc->pel_abort_cmd.mutex); + return retval; +} /** * mpi3mr_bsg_verify_adapter - verify adapter number is valid * @ioc_number: Adapter number @@ -113,6 +202,87 @@ static long mpi3mr_get_logdata(struct mpi3mr_ioc *mrioc, return -EINVAL; } +/** + * mpi3mr_bsg_pel_enable - Handler for PEL enable driver + * @mrioc: Adapter instance reference + * @job: BSG job pointer + * + * This function is the handler for PEL enable driver. + * Validates the application given class and locale and if + * requires aborts the existing PEL wait request and/or issues + * new PEL wait request to the firmware and returns. + * + * Return: 0 on success and proper error codes on failure. + */ +static long mpi3mr_bsg_pel_enable(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + long rval = -EINVAL; + struct mpi3mr_bsg_out_pel_enable pel_enable; + u8 issue_pel_wait; + u8 tmp_class; + u16 tmp_locale; + + if (job->request_payload.payload_len != sizeof(pel_enable)) { + dprint_bsg_err(mrioc, "%s: invalid size argument\n", + __func__); + return rval; + } + + sg_copy_to_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &pel_enable, sizeof(pel_enable)); + + if (pel_enable.pel_class > MPI3_PEL_CLASS_FAULT) { + dprint_bsg_err(mrioc, "%s: out of range class %d sent\n", + __func__, pel_enable.pel_class); + rval = 0; + goto out; + } + if (!mrioc->pel_enabled) + issue_pel_wait = 1; + else { + if ((mrioc->pel_class <= pel_enable.pel_class) && + !((mrioc->pel_locale & pel_enable.pel_locale) ^ + pel_enable.pel_locale)) { + issue_pel_wait = 0; + rval = 0; + } else { + pel_enable.pel_locale |= mrioc->pel_locale; + + if (mrioc->pel_class < pel_enable.pel_class) + pel_enable.pel_class = mrioc->pel_class; + + rval = mpi3mr_bsg_pel_abort(mrioc); + if (rval) { + dprint_bsg_err(mrioc, + "%s: pel_abort failed, status(%ld)\n", + __func__, rval); + goto out; + } + issue_pel_wait = 1; + } + } + if (issue_pel_wait) { + tmp_class = mrioc->pel_class; + tmp_locale = mrioc->pel_locale; + mrioc->pel_class = pel_enable.pel_class; + mrioc->pel_locale = pel_enable.pel_locale; + mrioc->pel_enabled = 1; + rval = mpi3mr_pel_get_seqnum_post(mrioc, NULL); + if (rval) { + mrioc->pel_class = tmp_class; + mrioc->pel_locale = tmp_locale; + mrioc->pel_enabled = 0; + dprint_bsg_err(mrioc, + "%s: pel get sequence number failed, status(%ld)\n", + __func__, rval); + } + } + +out: + return rval; +} /** * mpi3mr_get_all_tgt_info - Get all target information * @mrioc: Adapter instance reference @@ -379,6 +549,9 @@ static long mpi3mr_bsg_process_drv_cmds(struct bsg_job *job) case MPI3MR_DRVBSG_OPCODE_GETLOGDATA: rval = mpi3mr_get_logdata(mrioc, job); break; + case MPI3MR_DRVBSG_OPCODE_PELENABLE: + rval = mpi3mr_bsg_pel_enable(mrioc, job); + break; case 
MPI3MR_DRVBSG_OPCODE_UNKNOWN: default: pr_err("%s: unsupported driver command opcode %d\n", @@ -904,6 +1077,38 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job) return rval; } +/** + * mpi3mr_app_save_logdata - Save Log Data events + * @mrioc: Adapter instance reference + * @event_data: event data associated with log data event + * @event_data_size: event data size to copy + * + * If log data event caching is enabled by the applicatiobns, + * then this function saves the log data in the circular queue + * and Sends async signal SIGIO to indicate there is an async + * event from the firmware to the event monitoring applications. + * + * Return:Nothing + */ +void mpi3mr_app_save_logdata(struct mpi3mr_ioc *mrioc, char *event_data, + u16 event_data_size) +{ + u32 index = mrioc->logdata_buf_idx, sz; + struct mpi3mr_logdata_entry *entry; + + if (!(mrioc->logdata_buf)) + return; + + entry = (struct mpi3mr_logdata_entry *) + (mrioc->logdata_buf + (index * mrioc->logdata_entry_sz)); + entry->valid_entry = 1; + sz = min(mrioc->logdata_entry_sz, event_data_size); + memcpy(entry->data, event_data, sz); + mrioc->logdata_buf_idx = + ((++index) % MPI3MR_BSG_LOGDATA_MAX_ENTRIES); + atomic64_inc(&event_counter); +} + /** * mpi3mr_bsg_request - bsg request entry point * @job: BSG job reference diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c index 480730721f50..74e09727a1b8 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_fw.c +++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c @@ -15,6 +15,8 @@ mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type, u32 reset_reason); static int mpi3mr_setup_admin_qpair(struct mpi3mr_ioc *mrioc); static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc, struct mpi3_ioc_facts_data *facts_data); +static void mpi3mr_pel_wait_complete(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd); static int poll_queues; module_param(poll_queues, int, 0444); @@ -301,6 +303,10 @@ mpi3mr_get_drv_cmd(struct mpi3mr_ioc *mrioc, u16 host_tag, return &mrioc->bsg_cmds; case MPI3MR_HOSTTAG_BLK_TMS: return &mrioc->host_tm_cmds; + case MPI3MR_HOSTTAG_PEL_ABORT: + return &mrioc->pel_abort_cmd; + case MPI3MR_HOSTTAG_PEL_WAIT: + return &mrioc->pel_cmds; case MPI3MR_HOSTTAG_INVALID: if (def_reply && def_reply->function == MPI3_FUNCTION_EVENT_NOTIFICATION) @@ -2837,6 +2843,14 @@ static int mpi3mr_alloc_reply_sense_bufs(struct mpi3mr_ioc *mrioc) if (!mrioc->host_tm_cmds.reply) goto out_failed; + mrioc->pel_cmds.reply = kzalloc(mrioc->reply_sz, GFP_KERNEL); + if (!mrioc->pel_cmds.reply) + goto out_failed; + + mrioc->pel_abort_cmd.reply = kzalloc(mrioc->reply_sz, GFP_KERNEL); + if (!mrioc->pel_abort_cmd.reply) + goto out_failed; + mrioc->dev_handle_bitmap_sz = mrioc->facts.max_devhandle / 8; if (mrioc->facts.max_devhandle % 8) mrioc->dev_handle_bitmap_sz++; @@ -3734,6 +3748,16 @@ int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc) goto out_failed; } + if (!mrioc->pel_seqnum_virt) { + dprint_init(mrioc, "allocating memory for pel_seqnum_virt\n"); + mrioc->pel_seqnum_sz = sizeof(struct mpi3_pel_seq); + mrioc->pel_seqnum_virt = dma_alloc_coherent(&mrioc->pdev->dev, + mrioc->pel_seqnum_sz, &mrioc->pel_seqnum_dma, + GFP_KERNEL); + if (!mrioc->pel_seqnum_virt) + goto out_failed_noretry; + } + retval = mpi3mr_enable_events(mrioc); if (retval) { ioc_err(mrioc, "failed to enable events %d\n", @@ -3843,6 +3867,16 @@ int mpi3mr_reinit_ioc(struct mpi3mr_ioc *mrioc, u8 is_resume) goto out_failed; } + if (!mrioc->pel_seqnum_virt) { + dprint_reset(mrioc, "allocating memory for 
pel_seqnum_virt\n"); + mrioc->pel_seqnum_sz = sizeof(struct mpi3_pel_seq); + mrioc->pel_seqnum_virt = dma_alloc_coherent(&mrioc->pdev->dev, + mrioc->pel_seqnum_sz, &mrioc->pel_seqnum_dma, + GFP_KERNEL); + if (!mrioc->pel_seqnum_virt) + goto out_failed_noretry; + } + if (mrioc->shost->nr_hw_queues > mrioc->num_op_reply_q) { ioc_err(mrioc, "cannot create minimum number of operational queues expected:%d created:%d\n", @@ -3958,6 +3992,10 @@ void mpi3mr_memset_buffers(struct mpi3mr_ioc *mrioc) sizeof(*mrioc->bsg_cmds.reply)); memset(mrioc->host_tm_cmds.reply, 0, sizeof(*mrioc->host_tm_cmds.reply)); + memset(mrioc->pel_cmds.reply, 0, + sizeof(*mrioc->pel_cmds.reply)); + memset(mrioc->pel_abort_cmd.reply, 0, + sizeof(*mrioc->pel_abort_cmd.reply)); for (i = 0; i < MPI3MR_NUM_DEVRMCMD; i++) memset(mrioc->dev_rmhs_cmds[i].reply, 0, sizeof(*mrioc->dev_rmhs_cmds[i].reply)); @@ -4064,6 +4102,12 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc) kfree(mrioc->host_tm_cmds.reply); mrioc->host_tm_cmds.reply = NULL; + kfree(mrioc->pel_cmds.reply); + mrioc->pel_cmds.reply = NULL; + + kfree(mrioc->pel_abort_cmd.reply); + mrioc->pel_abort_cmd.reply = NULL; + for (i = 0; i < MPI3MR_NUM_EVTACKCMD; i++) { kfree(mrioc->evtack_cmds[i].reply); mrioc->evtack_cmds[i].reply = NULL; @@ -4112,6 +4156,16 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc) mrioc->admin_req_base, mrioc->admin_req_dma); mrioc->admin_req_base = NULL; } + + if (mrioc->pel_seqnum_virt) { + dma_free_coherent(&mrioc->pdev->dev, mrioc->pel_seqnum_sz, + mrioc->pel_seqnum_virt, mrioc->pel_seqnum_dma); + mrioc->pel_seqnum_virt = NULL; + } + + kfree(mrioc->logdata_buf); + mrioc->logdata_buf = NULL; + } /** @@ -4260,6 +4314,254 @@ static void mpi3mr_flush_drv_cmds(struct mpi3mr_ioc *mrioc) cmdptr = &mrioc->evtack_cmds[i]; mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); } + + cmdptr = &mrioc->pel_cmds; + mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); + + cmdptr = &mrioc->pel_abort_cmd; + mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); + +} + +/** + * mpi3mr_pel_wait_post - Issue PEL Wait + * @mrioc: Adapter instance reference + * @drv_cmd: Internal command tracker + * + * Issue PEL Wait MPI request through admin queue and return. + * + * Return: Nothing. 
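+ * + * The request is posted without blocking; completion is handled + * asynchronously by mpi3mr_pel_wait_complete(). The wait time is set + * to infinite, so the firmware replies only when a new PEL entry is + * logged or the wait is aborted.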
+ */ +static void mpi3mr_pel_wait_post(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd) +{ + struct mpi3_pel_req_action_wait pel_wait; + + mrioc->pel_abort_requested = false; + + memset(&pel_wait, 0, sizeof(pel_wait)); + drv_cmd->state = MPI3MR_CMD_PENDING; + drv_cmd->is_waiting = 0; + drv_cmd->callback = mpi3mr_pel_wait_complete; + drv_cmd->ioc_status = 0; + drv_cmd->ioc_loginfo = 0; + pel_wait.host_tag = cpu_to_le16(MPI3MR_HOSTTAG_PEL_WAIT); + pel_wait.function = MPI3_FUNCTION_PERSISTENT_EVENT_LOG; + pel_wait.action = MPI3_PEL_ACTION_WAIT; + pel_wait.starting_sequence_number = cpu_to_le32(mrioc->pel_newest_seqnum); + pel_wait.locale = cpu_to_le16(mrioc->pel_locale); + pel_wait.class = cpu_to_le16(mrioc->pel_class); + pel_wait.wait_time = MPI3_PEL_WAITTIME_INFINITE_WAIT; + dprint_bsg_info(mrioc, "sending pel_wait seqnum(%d), class(%d), locale(0x%08x)\n", + mrioc->pel_newest_seqnum, mrioc->pel_class, mrioc->pel_locale); + + if (mpi3mr_admin_request_post(mrioc, &pel_wait, sizeof(pel_wait), 0)) { + dprint_bsg_err(mrioc, + "Issuing PELWait: Admin post failed\n"); + drv_cmd->state = MPI3MR_CMD_NOTUSED; + drv_cmd->callback = NULL; + drv_cmd->retry_count = 0; + mrioc->pel_enabled = false; + } +} + +/** + * mpi3mr_pel_get_seqnum_post - Issue PEL Get Sequence number + * @mrioc: Adapter instance reference + * @drv_cmd: Internal command tracker + * + * Issue PEL get sequence number MPI request through admin queue + * and return. + * + * Return: 0 on success, non-zero on failure. + */ +int mpi3mr_pel_get_seqnum_post(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd) +{ + struct mpi3_pel_req_action_get_sequence_numbers pel_getseq_req; + u8 sgl_flags = MPI3MR_SGEFLAGS_SYSTEM_SIMPLE_END_OF_LIST; + int retval = 0; + + memset(&pel_getseq_req, 0, sizeof(pel_getseq_req)); + mrioc->pel_cmds.state = MPI3MR_CMD_PENDING; + mrioc->pel_cmds.is_waiting = 0; + mrioc->pel_cmds.ioc_status = 0; + mrioc->pel_cmds.ioc_loginfo = 0; + mrioc->pel_cmds.callback = mpi3mr_pel_get_seqnum_complete; + pel_getseq_req.host_tag = cpu_to_le16(MPI3MR_HOSTTAG_PEL_WAIT); + pel_getseq_req.function = MPI3_FUNCTION_PERSISTENT_EVENT_LOG; + pel_getseq_req.action = MPI3_PEL_ACTION_GET_SEQNUM; + mpi3mr_add_sg_single(&pel_getseq_req.sgl, sgl_flags, + mrioc->pel_seqnum_sz, mrioc->pel_seqnum_dma); + + retval = mpi3mr_admin_request_post(mrioc, &pel_getseq_req, + sizeof(pel_getseq_req), 0); + if (retval) { + if (drv_cmd) { + drv_cmd->state = MPI3MR_CMD_NOTUSED; + drv_cmd->callback = NULL; + drv_cmd->retry_count = 0; + } + mrioc->pel_enabled = false; + } + + return retval; +} + +/** + * mpi3mr_pel_wait_complete - PELWait Completion callback + * @mrioc: Adapter instance reference + * @drv_cmd: Internal command tracker + * + * This is a callback handler for the PELWait request and + * firmware completes a PELWait request when it is aborted or a + * new PEL entry is available. This sends AEN to the application + * and if the PELwait completion is not due to PELAbort then + * this will send a request for new PEL Sequence number + * + * Return: Nothing. 
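+ * + * A failed wait is retried up to MPI3MR_PEL_RETRY_COUNT times before + * PEL support is disabled. The application notification is done by + * incrementing event_counter, which the monitoring applications poll.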
+ */ +static void mpi3mr_pel_wait_complete(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd) +{ + struct mpi3_pel_reply *pel_reply = NULL; + u16 ioc_status, pe_log_status; + bool do_retry = false; + + if (drv_cmd->state & MPI3MR_CMD_RESET) + goto cleanup_drv_cmd; + + ioc_status = drv_cmd->ioc_status & MPI3_IOCSTATUS_STATUS_MASK; + if (ioc_status != MPI3_IOCSTATUS_SUCCESS) { + ioc_err(mrioc, "%s: Failed ioc_status(0x%04x) Loginfo(0x%08x)\n", + __func__, ioc_status, drv_cmd->ioc_loginfo); + dprint_bsg_err(mrioc, + "pel_wait: failed with ioc_status(0x%04x), log_info(0x%08x)\n", + ioc_status, drv_cmd->ioc_loginfo); + do_retry = true; + } + + if (drv_cmd->state & MPI3MR_CMD_REPLY_VALID) + pel_reply = (struct mpi3_pel_reply *)drv_cmd->reply; + + if (!pel_reply) { + dprint_bsg_err(mrioc, + "pel_wait: failed due to no reply\n"); + goto out_failed; + } + + pe_log_status = le16_to_cpu(pel_reply->pe_log_status); + if ((pe_log_status != MPI3_PEL_STATUS_SUCCESS) && + (pe_log_status != MPI3_PEL_STATUS_ABORTED)) { + ioc_err(mrioc, "%s: Failed pe_log_status(0x%04x)\n", + __func__, pe_log_status); + dprint_bsg_err(mrioc, + "pel_wait: failed due to pel_log_status(0x%04x)\n", + pe_log_status); + do_retry = true; + } + + if (do_retry) { + if (drv_cmd->retry_count < MPI3MR_PEL_RETRY_COUNT) { + drv_cmd->retry_count++; + dprint_bsg_err(mrioc, "pel_wait: retrying(%d)\n", + drv_cmd->retry_count); + mpi3mr_pel_wait_post(mrioc, drv_cmd); + return; + } + dprint_bsg_err(mrioc, + "pel_wait: failed after all retries(%d)\n", + drv_cmd->retry_count); + goto out_failed; + } + atomic64_inc(&event_counter); + if (!mrioc->pel_abort_requested) { + mrioc->pel_cmds.retry_count = 0; + mpi3mr_pel_get_seqnum_post(mrioc, &mrioc->pel_cmds); + } + + return; +out_failed: + mrioc->pel_enabled = false; +cleanup_drv_cmd: + drv_cmd->state = MPI3MR_CMD_NOTUSED; + drv_cmd->callback = NULL; + drv_cmd->retry_count = 0; +} + +/** + * mpi3mr_pel_get_seqnum_complete - PELGetSeqNum Completion callback + * @mrioc: Adapter instance reference + * @drv_cmd: Internal command tracker + * + * This is a callback handler for the PEL get sequence number + * request and a new PEL wait request will be issued to the + * firmware from this + * + * Return: Nothing. 
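+ * + * The newest sequence number returned by the firmware is incremented + * by one and used as the starting sequence number for the PEL wait + * request issued from this callback.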
+ */ +void mpi3mr_pel_get_seqnum_complete(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd) +{ + struct mpi3_pel_reply *pel_reply = NULL; + struct mpi3_pel_seq *pel_seqnum_virt; + u16 ioc_status; + bool do_retry = false; + + pel_seqnum_virt = (struct mpi3_pel_seq *)mrioc->pel_seqnum_virt; + + if (drv_cmd->state & MPI3MR_CMD_RESET) + goto cleanup_drv_cmd; + + ioc_status = drv_cmd->ioc_status & MPI3_IOCSTATUS_STATUS_MASK; + if (ioc_status != MPI3_IOCSTATUS_SUCCESS) { + dprint_bsg_err(mrioc, + "pel_get_seqnum: failed with ioc_status(0x%04x), log_info(0x%08x)\n", + ioc_status, drv_cmd->ioc_loginfo); + do_retry = true; + } + + if (drv_cmd->state & MPI3MR_CMD_REPLY_VALID) + pel_reply = (struct mpi3_pel_reply *)drv_cmd->reply; + if (!pel_reply) { + dprint_bsg_err(mrioc, + "pel_get_seqnum: failed due to no reply\n"); + goto out_failed; + } + + if (le16_to_cpu(pel_reply->pe_log_status) != MPI3_PEL_STATUS_SUCCESS) { + dprint_bsg_err(mrioc, + "pel_get_seqnum: failed due to pel_log_status(0x%04x)\n", + le16_to_cpu(pel_reply->pe_log_status)); + do_retry = true; + } + + if (do_retry) { + if (drv_cmd->retry_count < MPI3MR_PEL_RETRY_COUNT) { + drv_cmd->retry_count++; + dprint_bsg_err(mrioc, + "pel_get_seqnum: retrying(%d)\n", + drv_cmd->retry_count); + mpi3mr_pel_get_seqnum_post(mrioc, drv_cmd); + return; + } + + dprint_bsg_err(mrioc, + "pel_get_seqnum: failed after all retries(%d)\n", + drv_cmd->retry_count); + goto out_failed; + } + mrioc->pel_newest_seqnum = le32_to_cpu(pel_seqnum_virt->newest) + 1; + drv_cmd->retry_count = 0; + mpi3mr_pel_wait_post(mrioc, drv_cmd); + + return; +out_failed: + mrioc->pel_enabled = false; +cleanup_drv_cmd: + drv_cmd->state = MPI3MR_CMD_NOTUSED; + drv_cmd->callback = NULL; + drv_cmd->retry_count = 0; } /** @@ -4383,6 +4685,12 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, if (!retval) { mrioc->diagsave_timeout = 0; mrioc->reset_in_progress = 0; + mrioc->pel_abort_requested = 0; + if (mrioc->pel_enabled) { + mrioc->pel_cmds.retry_count = 0; + mpi3mr_pel_wait_post(mrioc, &mrioc->pel_cmds); + } + mpi3mr_rfresh_tgtdevs(mrioc); mrioc->ts_update_counter = 0; spin_lock_irqsave(&mrioc->watchdog_lock, flags); @@ -4392,6 +4700,8 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, msecs_to_jiffies(MPI3MR_WATCHDOG_INTERVAL)); spin_unlock_irqrestore(&mrioc->watchdog_lock, flags); mrioc->stop_bsgs = 0; + if (mrioc->pel_enabled) + atomic64_inc(&event_counter); } else { mpi3mr_issue_reset(mrioc, MPI3_SYSIF_HOST_DIAG_RESET_ACTION_DIAG_FAULT, reset_reason); diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c index 450574fc1fec..19298136edb6 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_os.c +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c @@ -14,6 +14,7 @@ LIST_HEAD(mrioc_list); DEFINE_SPINLOCK(mrioc_list_lock); static int mrioc_ids; static int warn_non_secure_ctlr; +atomic64_t event_counter; MODULE_AUTHOR(MPI3MR_DRIVER_AUTHOR); MODULE_DESCRIPTION(MPI3MR_DRIVER_DESC); @@ -1415,6 +1416,23 @@ static void mpi3mr_pcietopochg_evt_bh(struct mpi3mr_ioc *mrioc, } } +/** + * mpi3mr_logdata_evt_bh - Log data event bottomhalf + * @mrioc: Adapter instance reference + * @fwevt: Firmware event reference + * + * Extracts the event data and calls application interfacing + * function to process the event further. + * + * Return: Nothing. 
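+ * + * The event data is handed to mpi3mr_app_save_logdata(), which caches + * it for later retrieval by the management application and increments + * event_counter to notify the monitoring applications.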
+ */ +static void mpi3mr_logdata_evt_bh(struct mpi3mr_ioc *mrioc, + struct mpi3mr_fwevt *fwevt) +{ + mpi3mr_app_save_logdata(mrioc, fwevt->event_data, + fwevt->event_data_size); +} + /** * mpi3mr_fwevt_bh - Firmware event bottomhalf handler * @mrioc: Adapter instance reference @@ -1467,6 +1485,11 @@ static void mpi3mr_fwevt_bh(struct mpi3mr_ioc *mrioc, mpi3mr_pcietopochg_evt_bh(mrioc, fwevt); break; } + case MPI3_EVENT_LOG_DATA: + { + mpi3mr_logdata_evt_bh(mrioc, fwevt); + break; + } default: break; } @@ -2298,6 +2321,7 @@ void mpi3mr_os_handle_events(struct mpi3mr_ioc *mrioc, break; } case MPI3_EVENT_DEVICE_INFO_CHANGED: + case MPI3_EVENT_LOG_DATA: { process_evt_bh = 1; break; @@ -4568,6 +4592,12 @@ static struct pci_driver mpi3mr_pci_driver = { #endif }; +static ssize_t event_counter_show(struct device_driver *dd, char *buf) +{ + return sprintf(buf, "%llu\n", atomic64_read(&event_counter)); +} +static DRIVER_ATTR_RO(event_counter); + static int __init mpi3mr_init(void) { int ret_val; @@ -4576,6 +4606,16 @@ static int __init mpi3mr_init(void) MPI3MR_DRIVER_VERSION); ret_val = pci_register_driver(&mpi3mr_pci_driver); + if (ret_val) { + pr_err("%s failed to load due to pci register driver failure\n", + MPI3MR_DRIVER_NAME); + return ret_val; + } + + ret_val = driver_create_file(&mpi3mr_pci_driver.driver, + &driver_attr_event_counter); + if (ret_val) + pci_unregister_driver(&mpi3mr_pci_driver); return ret_val; } @@ -4590,6 +4630,8 @@ static void __exit mpi3mr_exit(void) pr_info("Unloading %s version %s\n", MPI3MR_DRIVER_NAME, MPI3MR_DRIVER_VERSION); + driver_remove_file(&mpi3mr_pci_driver.driver, + &driver_attr_event_counter); pci_unregister_driver(&mpi3mr_pci_driver); } From patchwork Tue Mar 22 12:21:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 553848 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EE155C4332F for ; Tue, 22 Mar 2022 12:22:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233485AbiCVMXo (ORCPT ); Tue, 22 Mar 2022 08:23:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33444 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233486AbiCVMXc (ORCPT ); Tue, 22 Mar 2022 08:23:32 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3D6D78594F for ; Tue, 22 Mar 2022 05:22:05 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id m11-20020a17090a7f8b00b001beef6143a8so1994751pjl.4 for ; Tue, 22 Mar 2022 05:22:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version; bh=yMtSbVWU3mjUEszQbVhbuP+4Rh097vOncCiKrLGAha8=; b=YIG4r2hA7jt8Arf5aQ1svR4Z26IQdvvximHcRkSkAd8Q6UBwszkZsvSeO876WoLTLJ sg6ef/N+O0NkcU60CNwEfCN9ORoeeV5C9w/UxrcHV2tvlQ5XWP/MORczWzJ/u4mYk8Y3 1Yx66sL6atq32a5teIK2VIkPiCXTQ/kfRs4uQ= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version; bh=yMtSbVWU3mjUEszQbVhbuP+4Rh097vOncCiKrLGAha8=; b=cFDKonchYDuBN7DOsf+2F2ULAAt6wZp716cnqm+GBBvLEJKWnoebbWzMA7KPWbL5Vv 
mja/lPnP/jPZA+Een0lwaPcDeu8qya5SipR3ofzriXaynGlYQfTwswr7jGgzYMJ7KqyV Z7EIX4xPuQYi8L44XcBvhZHQbUde5Umj0BY5vL+c2UWejoZqlUlAQIKE1BMkYKK51lfJ qxf96mHyel9Ny/Rzx61J/ki6O7dvrMs9qvAh790X1MbjkSiHNqTU1f6guhJdDPnioJGP mbBcWfpdbuCWfHgSIfViXS12/mbuhZkJHSwcCKnJH8hvJNpz1aI4V9YbHL9nKt7OmNHU rOkg== X-Gm-Message-State: AOAM530/nfnAcrrn5t0ZFwo7IBUnMnkMrvF3auF0M/z9bc45UIzI7RKV EgcXdczBls0/cOzgwHt4OaYSASpaqCYzHQqEsQwacNt3pPXAnhe4AtSx5Zy+NPC+m5pWj97aJl4 vRti4VbjNrQTvCigt/3VHo22OOIx20pODTdvvkFyOyHREZM9OYawR3rnIQc8cyVL4lVRCk/6GqX r4uQ6qpg49Eg== X-Google-Smtp-Source: ABdhPJzZzcMt2vE8UwiATN63pUwEqx1dMdhdaVHBe6UgjFDDJ0w4RTYLk0xn8nRc9pRSjOlS9kBUMA== X-Received: by 2002:a17:90b:38d2:b0:1c6:fa94:96bb with SMTP id nn18-20020a17090b38d200b001c6fa9496bbmr4760680pjb.206.1647951724421; Tue, 22 Mar 2022 05:22:04 -0700 (PDT) Received: from dhcp-10-123-20-15.dhcp.broadcom.net ([192.19.234.250]) by smtp.gmail.com with ESMTPSA id d11-20020a056a0010cb00b004e1b76b09c0sm23225259pfu.74.2022.03.22.05.22.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Mar 2022 05:22:03 -0700 (PDT) From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH 5/7] mpi3mr: expose adapter state to sysfs Date: Tue, 22 Mar 2022 08:21:05 -0400 Message-Id: <20220322122107.8482-6-sumit.saxena@broadcom.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220322122107.8482-1-sumit.saxena@broadcom.com> References: <20220322122107.8482-1-sumit.saxena@broadcom.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/mpi3mr.h | 2 +- drivers/scsi/mpi3mr/mpi3mr_app.c | 46 ++++++++++++++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_os.c | 1 + 3 files changed, 48 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index 10c074c1eea6..5a1b2d9b64da 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -1022,5 +1022,5 @@ int mpi3mr_pel_get_seqnum_post(struct mpi3mr_ioc *mrioc, struct mpi3mr_drv_cmd *drv_cmd); void mpi3mr_app_save_logdata(struct mpi3mr_ioc *mrioc, char *event_data, u16 event_data_size); - +extern const struct attribute_group *mpi3mr_host_groups[]; #endif /*MPI3MR_H_INCLUDED*/ diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c index aa95bff5412a..da3c7835ada4 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.c +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -1220,3 +1220,49 @@ void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc) err_device_add: kfree(mrioc->bsg_dev); } + +/** + * adapter_state_show - SysFS callback for adapter state show + * @dev: class device + * @attr: Device attributes + * @buf: Buffer to copy + * + * Return: snprintf() return after copying adapter state + */ +static ssize_t +adp_state_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct mpi3mr_ioc *mrioc = shost_priv(shost); + enum mpi3mr_iocstate ioc_state; + uint8_t adp_state; + + ioc_state = mpi3mr_get_iocstate(mrioc); + if (ioc_state == MRIOC_STATE_UNRECOVERABLE) + adp_state = MPI3MR_BSG_ADPSTATE_UNRECOVERABLE; + else if ((mrioc->reset_in_progress) || (mrioc->stop_bsgs)) + adp_state = MPI3MR_BSG_ADPSTATE_IN_RESET; + else if (ioc_state == MRIOC_STATE_FAULT) + adp_state = MPI3MR_BSG_ADPSTATE_FAULT; + else + adp_state = 
MPI3MR_BSG_ADPSTATE_OPERATIONAL; + + return snprintf(buf, PAGE_SIZE, "%u\n", adp_state); +} + +static DEVICE_ATTR_RO(adp_state); + +static struct attribute *mpi3mr_host_attrs[] = { + &dev_attr_adp_state.attr, + NULL, +}; + +static const struct attribute_group mpi3mr_host_attr_group = { + .attrs = mpi3mr_host_attrs +}; + +const struct attribute_group *mpi3mr_host_groups[] = { + &mpi3mr_host_attr_group, + NULL, +}; diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c index 19298136edb6..89a4918c4a9e 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_os.c +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c @@ -4134,6 +4134,7 @@ static struct scsi_host_template mpi3mr_driver_template = { .max_segment_size = 0xffffffff, .track_queue_depth = 1, .cmd_size = sizeof(struct scmd_priv), + .shost_groups = mpi3mr_host_groups, }; /** From patchwork Tue Mar 22 12:21:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 553847 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E6296C433EF for ; Tue, 22 Mar 2022 12:22:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234841AbiCVMXu (ORCPT ); Tue, 22 Mar 2022 08:23:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33510 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234003AbiCVMXk (ORCPT ); Tue, 22 Mar 2022 08:23:40 -0400 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7FDAC85942 for ; Tue, 22 Mar 2022 05:22:08 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id mm17-20020a17090b359100b001c6da62a559so2475483pjb.3 for ; Tue, 22 Mar 2022 05:22:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version; bh=4xGzxX/yL01YmP/4w0wn/0PwN1RinMeeT5oQhU0fVHU=; b=cBhhwBf5HGOZM7V1qkUjjvTWGgca44cV+1Bm7PTg3TZMsJLvO2Z/Ee13+DNzMlx+a6 CKEUpi7yI2YSnsLjockiRKfsoxwo9AB4fTt9FycXL7ftfk0g6/n/z7RamTwfU/NHzH9P 9+bxbouyhRJFSC5Q87kPQ8N7Duvi4P2G/Jt54= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version; bh=4xGzxX/yL01YmP/4w0wn/0PwN1RinMeeT5oQhU0fVHU=; b=pu/l2Etrymyt1BWe2v8Ia/XvlQGa3jCk3A6n0kqDnhkwe2t5E/nGNXo+I0vkmX5hBm Nu2LIViER1fOw0wSd+kiE83ckv0xz1X451YWhk8bJxoSaEWVlMfa6CkE/Q6xRdEQjcc+ 4AceKAPc9LMwXI5IHRIj/2Qb050/wX2UPN26m4g91iSUebhKMgkvCsRBMjI+yzpKJCHQ wejzPdKpCCZemL8PConmmnxSvKxWzJZ89Lim2SHtBQ7vtyWVfzx+3PBsx2l7w+hHzoH0 WoLQbjQrqTFnry6oowRhNrlBEPYMlbRmkj3IFzFJZy/4LbSuZEV+ACiUYPkBGSCPLrt8 izIQ== X-Gm-Message-State: AOAM532dNL4xtu2TBRd4Lm8bEY9SZohthnX6MI85WXZVtdVGpx0ivwk0 dVrmSkmkMoaX5U9DYnmDN+8xMUzTr2b3YfOAa/tTaRP/2f5ap6iNqnET/MzeETxmVen9xneQdPV yMe90RyJApfXndPBkzbIk/oefhc7ULoD4k0xJE99XXAJuRBtKNC9xl/EwILnFdNl+Yznrferla1 a8tkihJ3dedg== X-Google-Smtp-Source: ABdhPJxBy1B0B9LQDQZHZyqOgkkqlIcxXBZmrmjveGQvdXwvEMeXpVYkw4GN/ZBOPvrv9K7K10JonQ== X-Received: by 2002:a17:902:f70f:b0:153:ebfe:21b3 with SMTP id h15-20020a170902f70f00b00153ebfe21b3mr17908822plo.119.1647951727453; Tue, 22 Mar 2022 05:22:07 -0700 (PDT) Received: from dhcp-10-123-20-15.dhcp.broadcom.net 
([192.19.234.250]) by smtp.gmail.com with ESMTPSA id d11-20020a056a0010cb00b004e1b76b09c0sm23225259pfu.74.2022.03.22.05.22.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 22 Mar 2022 05:22:07 -0700 (PDT) From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH 6/7] mpi3mr: add support for nvme pass-through Date: Tue, 22 Mar 2022 08:21:06 -0400 Message-Id: <20220322122107.8482-7-sumit.saxena@broadcom.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220322122107.8482-1-sumit.saxena@broadcom.com> References: <20220322122107.8482-1-sumit.saxena@broadcom.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch adds support for management applications to send an MPI3 Encapsulated NVMe passthru commands to the NVMe devices attached to the Avenger controller. Since the NVMe drives are exposed as SCSI devices by the controller the standard NVMe applications cannot be used to interact with the drives and the command sets supported is also limited by the controller firmware. Special handling is required for MPI3 Encapsulated NVMe passthru commands for PRP/SGL setup in the commands hence the additional changes. Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/mpi3mr.h | 7 + drivers/scsi/mpi3mr/mpi3mr_app.c | 348 ++++++++++++++++++++++++++++++- 2 files changed, 352 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index 5a1b2d9b64da..22426f71c720 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -712,6 +712,9 @@ struct scmd_priv { * @reset_waitq: Controller reset wait queue * @prepare_for_reset: Prepare for reset event received * @prepare_for_reset_timeout_counter: Prepare for reset timeout + * @prp_list_virt: NVMe encapsulated PRP list virtual base + * @prp_list_dma: NVMe encapsulated PRP list DMA + * @prp_sz: NVME encapsulated PRP list size * @diagsave_timeout: Diagnostic information save timeout * @logging_level: Controller debug logging level * @flush_io_count: I/O count to flush after reset @@ -867,6 +870,10 @@ struct mpi3mr_ioc { u8 prepare_for_reset; u16 prepare_for_reset_timeout_counter; + void *prp_list_virt; + dma_addr_t prp_list_dma; + u32 prp_sz; + u16 diagsave_timeout; int logging_level; u16 flush_io_count; diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c index da3c7835ada4..0ed900116776 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.c +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -626,6 +626,314 @@ static void mpi3mr_bsg_build_sgl(u8 *mpi_req, uint32_t sgl_offset, } } +/** + * mpi3mr_get_nvme_data_fmt - returns the NVMe data format + * @nvme_encap_request: NVMe encapsulated MPI request + * + * This function returns the type of the data format specified + * in user provided NVMe command in NVMe encapsulated request. 
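+ * The format is taken from the PSDT field, bits 15:14 of command + * dword 0 of the user supplied NVMe command.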
+ * + * Return: Data format of the NVMe command (PRP/SGL etc) + */ +static unsigned int mpi3mr_get_nvme_data_fmt( + struct mpi3_nvme_encapsulated_request *nvme_encap_request) +{ + u8 format = 0; + + format = ((nvme_encap_request->command[0] & 0xc000) >> 14); + return format; + +} + +/** + * mpi3mr_build_nvme_sgl - SGL constructor for NVME + * encapsulated request + * @mrioc: Adapter instance reference + * @nvme_encap_request: NVMe encapsulated MPI request + * @drv_bufs: DMA address of the buffers to be placed in sgl + * @bufcnt: Number of DMA buffers + * + * This function places the DMA address of the given buffers in + * proper format as SGEs in the given NVMe encapsulated request. + * + * Return: 0 on success, -1 on failure + */ +static int mpi3mr_build_nvme_sgl(struct mpi3mr_ioc *mrioc, + struct mpi3_nvme_encapsulated_request *nvme_encap_request, + struct mpi3mr_buf_map *drv_bufs, u8 bufcnt) +{ + struct mpi3mr_nvme_pt_sge *nvme_sgl; + u64 sgl_ptr; + u8 count; + size_t length = 0; + struct mpi3mr_buf_map *drv_buf_iter = drv_bufs; + u64 sgemod_mask = ((u64)((mrioc->facts.sge_mod_mask) << + mrioc->facts.sge_mod_shift) << 32); + u64 sgemod_val = ((u64)(mrioc->facts.sge_mod_value) << + mrioc->facts.sge_mod_shift) << 32; + + /* + * Not all commands require a data transfer. If no data, just return + * without constructing any sgl. + */ + for (count = 0; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->data_dir == DMA_NONE) + continue; + sgl_ptr = (u64)drv_buf_iter->kern_buf_dma; + length = drv_buf_iter->kern_buf_len; + break; + } + if (!length) + return 0; + + if (sgl_ptr & sgemod_mask) { + dprint_bsg_err(mrioc, + "%s: SGL address collides with SGE modifier\n", + __func__); + return -1; + } + + sgl_ptr &= ~sgemod_mask; + sgl_ptr |= sgemod_val; + nvme_sgl = (struct mpi3mr_nvme_pt_sge *) + ((u8 *)(nvme_encap_request->command) + MPI3MR_NVME_CMD_SGL_OFFSET); + memset(nvme_sgl, 0, sizeof(struct mpi3mr_nvme_pt_sge)); + nvme_sgl->base_addr = sgl_ptr; + nvme_sgl->length = length; + return 0; +} + +/** + * mpi3mr_build_nvme_prp - PRP constructor for NVME + * encapsulated request + * @mrioc: Adapter instance reference + * @nvme_encap_request: NVMe encapsulated MPI request + * @drv_bufs: DMA address of the buffers to be placed in SGL + * @bufcnt: Number of DMA buffers + * + * This function places the DMA address of the given buffers in + * proper format as PRP entries in the given NVMe encapsulated + * request. 
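+ * A single PRP list page of the NVMe device page size is allocated; + * transfers that would need more PRP list entries than fit in that + * page are failed.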
+ * + * Return: 0 on success, -1 on failure + */ +static int mpi3mr_build_nvme_prp(struct mpi3mr_ioc *mrioc, + struct mpi3_nvme_encapsulated_request *nvme_encap_request, + struct mpi3mr_buf_map *drv_bufs, u8 bufcnt) +{ + int prp_size = MPI3MR_NVME_PRP_SIZE; + __le64 *prp_entry, *prp1_entry, *prp2_entry; + __le64 *prp_page; + dma_addr_t prp_entry_dma, prp_page_dma, dma_addr; + u32 offset, entry_len, dev_pgsz; + u32 page_mask_result, page_mask; + size_t length = 0; + u8 count; + struct mpi3mr_buf_map *drv_buf_iter = drv_bufs; + u64 sgemod_mask = ((u64)((mrioc->facts.sge_mod_mask) << + mrioc->facts.sge_mod_shift) << 32); + u64 sgemod_val = ((u64)(mrioc->facts.sge_mod_value) << + mrioc->facts.sge_mod_shift) << 32; + u16 dev_handle = nvme_encap_request->dev_handle; + struct mpi3mr_tgt_dev *tgtdev; + + tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle); + if (!tgtdev) { + dprint_bsg_err(mrioc, "%s: invalid device handle 0x%04x\n", + __func__, dev_handle); + return -1; + } + + if (tgtdev->dev_spec.pcie_inf.pgsz == 0) { + dprint_bsg_err(mrioc, + "%s: NVMe device page size is zero for handle 0x%04x\n", + __func__, dev_handle); + mpi3mr_tgtdev_put(tgtdev); + return -1; + } + + dev_pgsz = 1 << (tgtdev->dev_spec.pcie_inf.pgsz); + mpi3mr_tgtdev_put(tgtdev); + + /* + * Not all commands require a data transfer. If no data, just return + * without constructing any PRP. + */ + for (count = 0; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->data_dir == DMA_NONE) + continue; + dma_addr = drv_buf_iter->kern_buf_dma; + length = drv_buf_iter->kern_buf_len; + break; + } + + if (!length) + return 0; + + mrioc->prp_sz = 0; + mrioc->prp_list_virt = dma_alloc_coherent(&mrioc->pdev->dev, + dev_pgsz, &mrioc->prp_list_dma, GFP_KERNEL); + + if (!mrioc->prp_list_virt) + return -1; + mrioc->prp_sz = dev_pgsz; + + /* + * Set pointers to PRP1 and PRP2, which are in the NVMe command. + * PRP1 is located at a 24 byte offset from the start of the NVMe + * command. Then set the current PRP entry pointer to PRP1. + */ + prp1_entry = (__le64 *)((u8 *)(nvme_encap_request->command) + + MPI3MR_NVME_CMD_PRP1_OFFSET); + prp2_entry = (__le64 *)((u8 *)(nvme_encap_request->command) + + MPI3MR_NVME_CMD_PRP2_OFFSET); + prp_entry = prp1_entry; + /* + * For the PRP entries, use the specially allocated buffer of + * contiguous memory. + */ + prp_page = (__le64 *)mrioc->prp_list_virt; + prp_page_dma = mrioc->prp_list_dma; + + /* + * Check if we are within 1 entry of a page boundary we don't + * want our first entry to be a PRP List entry. + */ + page_mask = dev_pgsz - 1; + page_mask_result = (uintptr_t)((u8 *)prp_page + prp_size) & page_mask; + if (!page_mask_result) { + dprint_bsg_err(mrioc, "%s: PRP page is not page aligned\n", + __func__); + goto err_out; + } + + /* + * Set PRP physical pointer, which initially points to the current PRP + * DMA memory page. + */ + prp_entry_dma = prp_page_dma; + + + /* Loop while the length is not zero. */ + while (length) { + page_mask_result = (prp_entry_dma + prp_size) & page_mask; + if (!page_mask_result && (length > dev_pgsz)) { + dprint_bsg_err(mrioc, + "%s: single PRP page is not sufficient\n", + __func__); + goto err_out; + } + + /* Need to handle if entry will be part of a page. */ + offset = dma_addr & page_mask; + entry_len = dev_pgsz - offset; + + if (prp_entry == prp1_entry) { + /* + * Must fill in the first PRP pointer (PRP1) before + * moving on. 
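+ * PRP1 may carry an offset into the first page, so this + * entry can cover less than a full device page.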
+			 */
+			*prp1_entry = cpu_to_le64(dma_addr);
+			if (*prp1_entry & sgemod_mask) {
+				dprint_bsg_err(mrioc,
+				    "%s: PRP1 address collides with SGE modifier\n",
+				    __func__);
+				goto err_out;
+			}
+			*prp1_entry &= ~sgemod_mask;
+			*prp1_entry |= sgemod_val;
+
+			/*
+			 * Now point to the second PRP entry within the
+			 * command (PRP2).
+			 */
+			prp_entry = prp2_entry;
+		} else if (prp_entry == prp2_entry) {
+			/*
+			 * Should the PRP2 entry be a PRP List pointer or just
+			 * a regular PRP pointer? If there is more than one
+			 * more page of data, must use a PRP List pointer.
+			 */
+			if (length > dev_pgsz) {
+				/*
+				 * PRP2 will contain a PRP List pointer because
+				 * more PRPs are needed with this command. The
+				 * list will start at the beginning of the
+				 * contiguous buffer.
+				 */
+				*prp2_entry = cpu_to_le64(prp_entry_dma);
+				if (*prp2_entry & sgemod_mask) {
+					dprint_bsg_err(mrioc,
+					    "%s: PRP list address collides with SGE modifier\n",
+					    __func__);
+					goto err_out;
+				}
+				*prp2_entry &= ~sgemod_mask;
+				*prp2_entry |= sgemod_val;
+
+				/*
+				 * The next PRP Entry will be the start of the
+				 * first PRP List.
+				 */
+				prp_entry = prp_page;
+				continue;
+			} else {
+				/*
+				 * After this, the PRP Entries are complete.
+				 * This command uses 2 PRPs and no PRP list.
+				 */
+				*prp2_entry = cpu_to_le64(dma_addr);
+				if (*prp2_entry & sgemod_mask) {
+					dprint_bsg_err(mrioc,
+					    "%s: PRP2 address collides with SGE modifier\n",
+					    __func__);
+					goto err_out;
+				}
+				*prp2_entry &= ~sgemod_mask;
+				*prp2_entry |= sgemod_val;
+			}
+		} else {
+			/*
+			 * Put entry in list and bump the addresses.
+			 *
+			 * After PRP1 and PRP2 are filled in, this will fill in
+			 * all remaining PRP entries in a PRP List, one entry
+			 * per iteration of the loop.
+			 */
+			*prp_entry = cpu_to_le64(dma_addr);
+			if (*prp_entry & sgemod_mask) {
+				dprint_bsg_err(mrioc,
+				    "%s: PRP address collides with SGE modifier\n",
+				    __func__);
+				goto err_out;
+			}
+			*prp_entry &= ~sgemod_mask;
+			*prp_entry |= sgemod_val;
+			prp_entry++;
+			prp_entry_dma += prp_size;
+		}
+
+		/*
+		 * Bump the phys address of the command's data buffer by the
+		 * entry_len.
+		 */
+		dma_addr += entry_len;
+
+		/* Decrement length, accounting for the last partial page. */
+		if (entry_len > length)
+			length = 0;
+		else
+			length -= entry_len;
+	}
+	return 0;
+err_out:
+	if (mrioc->prp_list_virt) {
+		dma_free_coherent(&mrioc->pdev->dev, mrioc->prp_sz,
+		    mrioc->prp_list_virt, mrioc->prp_list_dma);
+		mrioc->prp_list_virt = NULL;
+	}
+	return -1;
+}
 /**
  * mpi3mr_bsg_process_mpt_cmds - MPI Pass through BSG handler
  * @job: BSG job reference
@@ -657,7 +965,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
 	struct mpi3mr_buf_map *drv_bufs = NULL, *drv_buf_iter = NULL;
 	u8 count, bufcnt = 0, is_rmcb = 0, is_rmrb = 0, din_cnt = 0, dout_cnt = 0;
 	u8 invalid_be = 0, erb_offset = 0xFF, mpirep_offset = 0xFF, sg_entries = 0;
-	u8 block_io = 0, resp_code = 0;
+	u8 block_io = 0, resp_code = 0, nvme_fmt = 0;
 	struct mpi3_request_header *mpi_header = NULL;
 	struct mpi3_status_reply_descriptor *status_desc;
 	struct mpi3_scsi_task_mgmt_request *tm_req;
@@ -897,7 +1205,34 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
 		goto out;
 	}
 
-	if (mpi_header->function != MPI3_FUNCTION_NVME_ENCAPSULATED) {
+	if (mpi_header->function == MPI3_FUNCTION_NVME_ENCAPSULATED) {
+		nvme_fmt = mpi3mr_get_nvme_data_fmt(
+			(struct mpi3_nvme_encapsulated_request *)mpi_req);
+		if (nvme_fmt == MPI3MR_NVME_DATA_FORMAT_PRP) {
+			if (mpi3mr_build_nvme_prp(mrioc,
+			    (struct mpi3_nvme_encapsulated_request *)mpi_req,
+			    drv_bufs, bufcnt)) {
+				rval = -ENOMEM;
+				mutex_unlock(&mrioc->bsg_cmds.mutex);
+				goto out;
+			}
+		} else if (nvme_fmt == MPI3MR_NVME_DATA_FORMAT_SGL1 ||
+			nvme_fmt == MPI3MR_NVME_DATA_FORMAT_SGL2) {
+			if (mpi3mr_build_nvme_sgl(mrioc,
+			    (struct mpi3_nvme_encapsulated_request *)mpi_req,
+			    drv_bufs, bufcnt)) {
+				rval = -EINVAL;
+				mutex_unlock(&mrioc->bsg_cmds.mutex);
+				goto out;
+			}
+		} else {
+			dprint_bsg_err(mrioc,
+			    "%s:invalid NVMe command format\n", __func__);
+			rval = -EINVAL;
+			mutex_unlock(&mrioc->bsg_cmds.mutex);
+			goto out;
+		}
+	} else {
 		mpi3mr_bsg_build_sgl(mpi_req, (mpi_msg_size),
 		    drv_bufs, bufcnt, is_rmcb, is_rmrb,
 		    (dout_cnt + din_cnt));
@@ -975,7 +1310,8 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
 		}
 	}
 
-	if (mpi_header->function == MPI3_FUNCTION_SCSI_IO)
+	if ((mpi_header->function == MPI3_FUNCTION_NVME_ENCAPSULATED) ||
+	    (mpi_header->function == MPI3_FUNCTION_SCSI_IO))
 		mpi3mr_issue_tm(mrioc,
 		    MPI3_SCSITASKMGMT_TASKTYPE_TARGET_RESET,
 		    mpi_header->function_dependent, 0,
@@ -989,6 +1325,12 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
 	}
 	dprint_bsg_info(mrioc,
 	    "%s: bsg request is completed\n", __func__);
+	if (mrioc->prp_list_virt) {
+		dma_free_coherent(&mrioc->pdev->dev, mrioc->prp_sz,
+		    mrioc->prp_list_virt, mrioc->prp_list_dma);
+		mrioc->prp_list_virt = NULL;
+	}
+
 	if ((mrioc->bsg_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK)
 	    != MPI3_IOCSTATUS_SUCCESS) {
 		dprint_bsg_info(mrioc,
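The PRP construction above follows the standard NVMe rules: PRP1 always carries the (possibly offset) address of the first data page, and PRP2 carries either the address of the second page or, when more than one further page is needed, the DMA address of a PRP list built in the page obtained from dma_alloc_coherent(). The standalone sketch below models only that layout decision. The fixed 4 KiB page size, the plain uint64_t addresses, the caller-supplied prp_list[] array and the build_prps() name are assumptions made for the illustration; they do not correspond to the driver's DMA mapping, allocation or SGE-modifier handling.

/* Minimal user-space model of NVMe PRP1/PRP2/PRP-list construction. */
#include <stdint.h>
#include <stdio.h>

static int build_prps(uint64_t dma_addr, uint64_t length, uint64_t pgsz,
		      uint64_t *prp1, uint64_t *prp2,
		      uint64_t *prp_list, unsigned int list_cap,
		      unsigned int *list_used)
{
	uint64_t first_len = pgsz - (dma_addr & (pgsz - 1));
	unsigned int n = 0;

	*prp1 = dma_addr;		/* PRP1 may point into the middle of a page */
	*prp2 = 0;
	*list_used = 0;

	if (length <= first_len)	/* everything fits behind PRP1 */
		return 0;

	dma_addr += first_len;		/* now page aligned */
	length -= first_len;

	if (length <= pgsz) {		/* two PRPs, no list needed */
		*prp2 = dma_addr;
		return 0;
	}

	/* More than one page remains: PRP2 points at a PRP list instead. */
	while (length) {
		uint64_t entry_len = length < pgsz ? length : pgsz;

		if (n == list_cap)
			return -1;	/* provided list is too small */
		prp_list[n++] = dma_addr;
		dma_addr += entry_len;
		length -= entry_len;
	}
	*prp2 = (uint64_t)(uintptr_t)prp_list;	/* stands in for the list's DMA address */
	*list_used = n;
	return 0;
}

int main(void)
{
	uint64_t prp1, prp2, list[16];
	unsigned int used;

	/* 10000 bytes starting 0x200 bytes into a page, 4 KiB device page size */
	if (!build_prps(0x100200, 10000, 4096, &prp1, &prp2, list, 16, &used))
		printf("prp1=%#llx prp2=%#llx list entries=%u\n",
		       (unsigned long long)prp1, (unsigned long long)prp2, used);
	return 0;
}

Running it for that 10000-byte transfer starting 0x200 bytes into a page yields PRP1 = 0x100200, a two-entry PRP list and PRP2 pointing at that list, which is the same shape mpi3mr_build_nvme_prp() produces in the NVMe encapsulated command.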
From patchwork Tue Mar 22 12:21:07 2022
X-Patchwork-Submitter: Sumit Saxena
X-Patchwork-Id: 553696
From: Sumit Saxena
To: linux-scsi@vger.kernel.org
Cc: martin.petersen@oracle.com, sathya.prakash@broadcom.com,
 kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com,
 sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena
Subject: [PATCH 7/7] mpi3mr: update driver version to 8.0.0.69.0
Date: Tue, 22 Mar 2022 08:21:07 -0400
Message-Id: <20220322122107.8482-8-sumit.saxena@broadcom.com>
In-Reply-To: <20220322122107.8482-1-sumit.saxena@broadcom.com>
References: <20220322122107.8482-1-sumit.saxena@broadcom.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Signed-off-by: Sumit Saxena
---
 drivers/scsi/mpi3mr/mpi3mr.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
index 22426f71c720..354337ad2169 100644
--- a/drivers/scsi/mpi3mr/mpi3mr.h
+++ b/drivers/scsi/mpi3mr/mpi3mr.h
@@ -54,8 +54,8 @@ extern struct list_head mrioc_list;
 extern int prot_mask;
 extern atomic64_t event_counter;
 
-#define MPI3MR_DRIVER_VERSION "8.0.0.68.0"
-#define MPI3MR_DRIVER_RELDATE "10-February-2022"
+#define MPI3MR_DRIVER_VERSION "8.0.0.69.0"
+#define MPI3MR_DRIVER_RELDATE "16-March-2022"
 
 #define MPI3MR_DRIVER_NAME "mpi3mr"
 #define MPI3MR_DRIVER_LICENSE "GPL"
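As a closing note on how the interface added by this series is reached from user space: each adapter's node (for example /dev/bsg/mpi3mrctl0) is an ordinary BSG character device, so a management application submits work through the SG_IO ioctl with a struct sg_io_v4. The sketch below shows only that generic plumbing. The req_buf and reply_buf contents are placeholders; the real request and reply layouts are defined by the driver's BSG headers elsewhere in the series, so as written the zeroed request merely exercises the path rather than performing a real management operation.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>		/* SG_IO */
#include <linux/bsg.h>		/* struct sg_io_v4, BSG_PROTOCOL_*, BSG_SUB_PROTOCOL_* */

int main(void)
{
	unsigned char req_buf[64];	/* placeholder for the driver-defined request */
	unsigned char reply_buf[64];	/* placeholder for the driver-defined reply */
	struct sg_io_v4 io;
	int fd, ret;

	fd = open("/dev/bsg/mpi3mrctl0", O_RDWR);
	if (fd < 0) {
		perror("open /dev/bsg/mpi3mrctl0");
		return 1;
	}

	memset(req_buf, 0, sizeof(req_buf));
	memset(reply_buf, 0, sizeof(reply_buf));
	memset(&io, 0, sizeof(io));
	io.guard = 'Q';					/* mandatory sg_io_v4 marker */
	io.protocol = BSG_PROTOCOL_SCSI;
	io.subprotocol = BSG_SUB_PROTOCOL_SCSI_TRANSPORT;
	io.request = (uint64_t)(uintptr_t)req_buf;	/* driver-specific request payload */
	io.request_len = sizeof(req_buf);
	io.response = (uint64_t)(uintptr_t)reply_buf;	/* driver-specific reply payload */
	io.max_response_len = sizeof(reply_buf);
	io.timeout = 30 * 1000;				/* milliseconds */

	ret = ioctl(fd, SG_IO, &io);
	if (ret < 0)
		perror("SG_IO");
	else
		printf("driver_status=0x%x response_len=%u\n",
		       io.driver_status, io.response_len);

	close(fd);
	return 0;
}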