From patchwork Tue Mar 2 03:21:30 2021 X-Patchwork-Submitter: "Asutosh Das \(asd\)" X-Patchwork-Id: 392478 From: Asutosh Das To: cang@codeaurora.org, martin.petersen@oracle.com, adrian.hunter@intel.com, linux-scsi@vger.kernel.org Cc: Asutosh Das , linux-arm-msm@vger.kernel.org, Alim Akhtar , Avri Altman , "James E.J. Bottomley" , Pedro Sousa , Krzysztof Kozlowski , Stanley Chu , Andy Gross , Bjorn Andersson , Steven Rostedt , Ingo Molnar , Matthias Brugger , Kiwoong Kim , Bean Huo , Wei Yongjun , Dan Carpenter , Jaegeuk Kim , Satya Tangirala , linux-kernel@vger.kernel.org (open list), linux-arm-kernel@lists.infradead.org (moderated list:ARM/SAMSUNG S3C, S5P AND EXYNOS ARM ARCHITECTURES), linux-samsung-soc@vger.kernel.org (open list:ARM/SAMSUNG S3C, S5P AND EXYNOS ARM ARCHITECTURES), linux-mediatek@lists.infradead.org (moderated list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...) Subject: [PATCH v9 1/2] scsi: ufs: Enable power management for wlun Date: Mon, 1 Mar 2021 19:21:30 -0800 During runtime-suspend of the UFS host, the SCSI devices are already suspended and so are the request queues associated with them. But the UFS host sends an SSU (START STOP UNIT) command to the device WLUN during its runtime-suspend. While allocating the request for that command, blk_queue_enter() checks whether the queue is in the suspended state; if it is, it waits for the queue to resume and never comes out of that wait.
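The fix below makes the device WLUN a runtime-PM supplier of every other LUN through driver-core device links. The following is only a rough sketch of that mechanism for orientation; the function name is made up for the sketch, and the real implementation is ufshcd_setup_links() in the diff:

    #include <linux/device.h>
    #include <scsi/scsi_device.h>
    #include "ufshcd.h"

    /*
     * Sketch: make an ordinary LUN (@sdev) a runtime-PM consumer of the UFS
     * device WLUN, so the WLUN suspends only after all other LUNs and is
     * resumed before they are.
     */
    static void example_link_lun_to_wlun(struct ufs_hba *hba, struct scsi_device *sdev)
    {
            struct device_link *link;

            link = device_link_add(&sdev->sdev_gendev,                  /* consumer */
                                   &hba->sdev_ufs_device->sdev_gendev,  /* supplier */
                                   DL_FLAG_PM_RUNTIME);
            if (!link)
                    dev_err(&sdev->sdev_gendev, "failed to link LUN to device WLUN\n");
    }

With DL_FLAG_PM_RUNTIME, the PM core resumes the supplier (the device WLUN) before any of its consumers and lets it suspend only after all of them, which is exactly the ordering the SSU command needs.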
Commit d55d15a33 ("scsi: block: Do not accept any requests while suspended") added this check on the suspended state to blk_queue_enter(). Call trace: __switch_to+0x174/0x2c4 __schedule+0x478/0x764 schedule+0x9c/0xe0 blk_queue_enter+0x158/0x228 blk_mq_alloc_request+0x40/0xa4 blk_get_request+0x2c/0x70 __scsi_execute+0x60/0x1c4 ufshcd_set_dev_pwr_mode+0x124/0x1e4 ufshcd_suspend+0x208/0x83c ufshcd_runtime_suspend+0x40/0x154 ufshcd_pltfrm_runtime_suspend+0x14/0x20 pm_generic_runtime_suspend+0x28/0x3c __rpm_callback+0x80/0x2a4 rpm_suspend+0x308/0x614 rpm_idle+0x158/0x228 pm_runtime_work+0x84/0xac process_one_work+0x1f0/0x470 worker_thread+0x26c/0x4c8 kthread+0x13c/0x320 ret_from_fork+0x10/0x18 Fix this by registering a SCSI driver for the UFS device WLUN and registering it for block-layer runtime PM. Also make it a runtime-PM supplier (via the device links sketched above) for all other LUNs. That way, the device WLUN suspends after all the consumers and resumes after the hba resumes. Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das --- drivers/scsi/ufs/cdns-pltfrm.c | 2 + drivers/scsi/ufs/tc-dwc-g210-pci.c | 2 + drivers/scsi/ufs/ufs-exynos.c | 2 + drivers/scsi/ufs/ufs-hisi.c | 2 + drivers/scsi/ufs/ufs-mediatek.c | 2 + drivers/scsi/ufs/ufs-qcom.c | 2 + drivers/scsi/ufs/ufshcd-pci.c | 32 +- drivers/scsi/ufs/ufshcd.c | 609 +++++++++++++++++++++++++++---------- drivers/scsi/ufs/ufshcd.h | 7 + include/trace/events/ufs.h | 20 ++ 10 files changed, 493 insertions(+), 187 deletions(-) diff --git a/drivers/scsi/ufs/cdns-pltfrm.c b/drivers/scsi/ufs/cdns-pltfrm.c index 149391f..3e70c23 100644 --- a/drivers/scsi/ufs/cdns-pltfrm.c +++ b/drivers/scsi/ufs/cdns-pltfrm.c @@ -319,6 +319,8 @@ static const struct dev_pm_ops cdns_ufs_dev_pm_ops = { .runtime_suspend = ufshcd_pltfrm_runtime_suspend, .runtime_resume = ufshcd_pltfrm_runtime_resume, .runtime_idle = ufshcd_pltfrm_runtime_idle, + .prepare = ufshcd_suspend_prepare, + .complete = ufshcd_resume_complete, }; static struct platform_driver cdns_ufs_pltfrm_driver = { diff --git a/drivers/scsi/ufs/tc-dwc-g210-pci.c b/drivers/scsi/ufs/tc-dwc-g210-pci.c index 67a6a61..b01db12 100644 --- a/drivers/scsi/ufs/tc-dwc-g210-pci.c +++ b/drivers/scsi/ufs/tc-dwc-g210-pci.c @@ -148,6 +148,8 @@ static const struct dev_pm_ops tc_dwc_g210_pci_pm_ops = { .runtime_suspend = tc_dwc_g210_pci_runtime_suspend, .runtime_resume = tc_dwc_g210_pci_runtime_resume, .runtime_idle = tc_dwc_g210_pci_runtime_idle, + .prepare = ufshcd_suspend_prepare, + .complete = ufshcd_resume_complete, }; static const struct pci_device_id tc_dwc_g210_pci_tbl[] = { diff --git a/drivers/scsi/ufs/ufs-exynos.c b/drivers/scsi/ufs/ufs-exynos.c index 267943a1..45c0b02 100644 --- a/drivers/scsi/ufs/ufs-exynos.c +++ b/drivers/scsi/ufs/ufs-exynos.c @@ -1268,6 +1268,8 @@ static const struct dev_pm_ops exynos_ufs_pm_ops = { .runtime_suspend = ufshcd_pltfrm_runtime_suspend, .runtime_resume = ufshcd_pltfrm_runtime_resume, .runtime_idle = ufshcd_pltfrm_runtime_idle, + .prepare = ufshcd_suspend_prepare, + .complete = ufshcd_resume_complete, }; static struct platform_driver exynos_ufs_pltform = { diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c index 0aa5813..d463b44 100644 --- a/drivers/scsi/ufs/ufs-hisi.c +++ b/drivers/scsi/ufs/ufs-hisi.c @@ -574,6 +574,8 @@ static const struct dev_pm_ops ufs_hisi_pm_ops = { .runtime_suspend = ufshcd_pltfrm_runtime_suspend, .runtime_resume = ufshcd_pltfrm_runtime_resume, .runtime_idle = ufshcd_pltfrm_runtime_idle, + .prepare = ufshcd_suspend_prepare, + .complete = ufshcd_resume_complete,
}; static struct platform_driver ufs_hisi_pltform = { diff --git a/drivers/scsi/ufs/ufs-mediatek.c b/drivers/scsi/ufs/ufs-mediatek.c index c55202b..df1eabb 100644 --- a/drivers/scsi/ufs/ufs-mediatek.c +++ b/drivers/scsi/ufs/ufs-mediatek.c @@ -1097,6 +1097,8 @@ static const struct dev_pm_ops ufs_mtk_pm_ops = { .runtime_suspend = ufshcd_pltfrm_runtime_suspend, .runtime_resume = ufshcd_pltfrm_runtime_resume, .runtime_idle = ufshcd_pltfrm_runtime_idle, + .prepare = ufshcd_suspend_prepare, + .complete = ufshcd_resume_complete, }; static struct platform_driver ufs_mtk_pltform = { diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c index f97d7b0..9aa098a 100644 --- a/drivers/scsi/ufs/ufs-qcom.c +++ b/drivers/scsi/ufs/ufs-qcom.c @@ -1546,6 +1546,8 @@ static const struct dev_pm_ops ufs_qcom_pm_ops = { .runtime_suspend = ufshcd_pltfrm_runtime_suspend, .runtime_resume = ufshcd_pltfrm_runtime_resume, .runtime_idle = ufshcd_pltfrm_runtime_idle, + .prepare = ufshcd_suspend_prepare, + .complete = ufshcd_resume_complete, }; static struct platform_driver ufs_qcom_pltform = { diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c index fadd566..50f84bf 100644 --- a/drivers/scsi/ufs/ufshcd-pci.c +++ b/drivers/scsi/ufs/ufshcd-pci.c @@ -247,29 +247,6 @@ static int ufshcd_pci_resume(struct device *dev) return ufshcd_system_resume(dev_get_drvdata(dev)); } -/** - * ufshcd_pci_poweroff - suspend-to-disk poweroff function - * @dev: pointer to PCI device handle - * - * Returns 0 if successful - * Returns non-zero otherwise - */ -static int ufshcd_pci_poweroff(struct device *dev) -{ - struct ufs_hba *hba = dev_get_drvdata(dev); - int spm_lvl = hba->spm_lvl; - int ret; - - /* - * For poweroff we need to set the UFS device to PowerDown mode. - * Force spm_lvl to ensure that. 
- */ - hba->spm_lvl = 5; - ret = ufshcd_system_suspend(hba); - hba->spm_lvl = spm_lvl; - return ret; -} - #endif /* !CONFIG_PM_SLEEP */ #ifdef CONFIG_PM @@ -366,12 +343,9 @@ ufshcd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) static const struct dev_pm_ops ufshcd_pci_pm_ops = { #ifdef CONFIG_PM_SLEEP - .suspend = ufshcd_pci_suspend, - .resume = ufshcd_pci_resume, - .freeze = ufshcd_pci_suspend, - .thaw = ufshcd_pci_resume, - .poweroff = ufshcd_pci_poweroff, - .restore = ufshcd_pci_resume, + .prepare = ufshcd_suspend_prepare, + .complete = ufshcd_resume_complete, + SET_SYSTEM_SLEEP_PM_OPS(ufshcd_pci_suspend, ufshcd_pci_resume) #endif SET_RUNTIME_PM_OPS(ufshcd_pci_runtime_suspend, ufshcd_pci_runtime_resume, diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 45624c7..a8d08b2 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -16,6 +16,7 @@ #include #include #include +#include #include "ufshcd.h" #include "ufs_quirks.h" #include "unipro.h" @@ -78,6 +79,8 @@ /* Polling time to wait for fDeviceInit */ #define FDEVICEINIT_COMPL_TIMEOUT 1500 /* millisecs */ +#define wlun_dev_to_hba(dv) shost_priv(to_scsi_device(dv)->host) + #define ufshcd_toggle_vreg(_dev, _vreg, _on) \ ({ \ int _ret; \ @@ -1556,7 +1559,7 @@ static ssize_t ufshcd_clkscale_enable_store(struct device *dev, if (value == hba->clk_scaling.is_enabled) goto out; - pm_runtime_get_sync(hba->dev); + scsi_autopm_get_device(hba->sdev_ufs_device); ufshcd_hold(hba, false); hba->clk_scaling.is_enabled = value; @@ -1572,7 +1575,7 @@ static ssize_t ufshcd_clkscale_enable_store(struct device *dev, } ufshcd_release(hba); - pm_runtime_put_sync(hba->dev); + scsi_autopm_put_device(hba->sdev_ufs_device); out: up(&hba->host_sem); return err ? err : count; @@ -2572,6 +2575,17 @@ static inline u16 ufshcd_upiu_wlun_to_scsi_wlun(u8 upiu_wlun_id) return (upiu_wlun_id & ~UFS_UPIU_WLUN_ID) | SCSI_W_LUN_BASE; } +static inline bool is_rpmb_wlun(struct scsi_device *sdev) +{ + return (sdev->lun == ufshcd_upiu_wlun_to_scsi_wlun(UFS_UPIU_RPMB_WLUN)); +} + +static inline bool is_device_wlun(struct scsi_device *sdev) +{ + return (sdev->lun == + ufshcd_upiu_wlun_to_scsi_wlun(UFS_UPIU_UFS_DEVICE_WLUN)); +} + static void ufshcd_init_lrb(struct ufs_hba *hba, struct ufshcd_lrb *lrb, int i) { struct utp_transfer_cmd_desc *cmd_descp = hba->ucdl_base_addr; @@ -4808,6 +4822,41 @@ static inline void ufshcd_get_lu_power_on_wp_status(struct ufs_hba *hba, } /** + * ufshcd_setup_links - associate link b/w device wlun and other luns + * @sdev: pointer to SCSI device + * @hba: pointer to ufs hba + */ +static void ufshcd_setup_links(struct ufs_hba *hba, struct scsi_device *sdev) +{ + struct device_link *link; + + /* + * device wlun is the supplier & rest of the luns are consumers + * This ensures that device wlun suspends after all other luns. 
+ */ + if (hba->sdev_ufs_device) { + link = device_link_add(&sdev->sdev_gendev, + &hba->sdev_ufs_device->sdev_gendev, + DL_FLAG_PM_RUNTIME); + if (!link) { + dev_err(&sdev->sdev_gendev, "Failed establishing link - %s\n", + dev_name(&hba->sdev_ufs_device->sdev_gendev)); + return; + } + hba->luns_avail--; + /* Ignore REPORT_LUN wlun probing */ + if (hba->luns_avail != 1) + return; + + pm_runtime_put_noidle(&hba->sdev_ufs_device->sdev_gendev); + pm_runtime_mark_last_busy(&hba->sdev_ufs_device->sdev_gendev); + } else { + /* device wlun is probed */ + hba->luns_avail--; + } +} + +/** * ufshcd_slave_alloc - handle initial SCSI device configurations * @sdev: pointer to SCSI device * @@ -4838,6 +4887,8 @@ static int ufshcd_slave_alloc(struct scsi_device *sdev) ufshcd_get_lu_power_on_wp_status(hba, sdev); + ufshcd_setup_links(hba, sdev); + return 0; } @@ -4985,15 +5036,9 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) * UFS device needs urgent BKOPs. */ if (!hba->pm_op_in_progress && - ufshcd_is_exception_event(lrbp->ucd_rsp_ptr) && - schedule_work(&hba->eeh_work)) { - /* - * Prevent suspend once eeh_work is scheduled - * to avoid deadlock between ufshcd_suspend - * and exception event handler. - */ - pm_runtime_get_noresume(hba->dev); - } + ufshcd_is_exception_event(lrbp->ucd_rsp_ptr)) + /* Flushed in suspend */ + schedule_work(&hba->eeh_work); break; case UPIU_TRANSACTION_REJECT_UPIU: /* TODO: handle Reject UPIU Response */ @@ -5589,8 +5634,8 @@ static void ufshcd_rpm_dev_flush_recheck_work(struct work_struct *work) * after a certain delay to recheck the threshold by next runtime * suspend. */ - pm_runtime_get_sync(hba->dev); - pm_runtime_put_sync(hba->dev); + scsi_autopm_get_device(hba->sdev_ufs_device); + scsi_autopm_put_device(hba->sdev_ufs_device); } /** @@ -5607,7 +5652,6 @@ static void ufshcd_exception_event_handler(struct work_struct *work) u32 status = 0; hba = container_of(work, struct ufs_hba, eeh_work); - pm_runtime_get_sync(hba->dev); ufshcd_scsi_block_requests(hba); err = ufshcd_get_ee_status(hba, &status); if (err) { @@ -5623,14 +5667,6 @@ static void ufshcd_exception_event_handler(struct work_struct *work) out: ufshcd_scsi_unblock_requests(hba); - /* - * pm_runtime_get_noresume is called while scheduling - * eeh_work to avoid suspend racing with exception work. - * Hence decrement usage counter using pm_runtime_put_noidle - * to allow suspend on completion of exception event handler. - */ - pm_runtime_put_noidle(hba->dev); - pm_runtime_put(hba->dev); return; } @@ -7207,11 +7243,12 @@ static void ufshcd_set_active_icc_lvl(struct ufs_hba *hba) static inline void ufshcd_blk_pm_runtime_init(struct scsi_device *sdev) { + int dly = is_device_wlun(sdev) ? 0:RPM_AUTOSUSPEND_DELAY_MS; + scsi_autopm_get_device(sdev); blk_pm_runtime_init(sdev->request_queue, &sdev->sdev_gendev); if (sdev->rpm_autosuspend) - pm_runtime_set_autosuspend_delay(&sdev->sdev_gendev, - RPM_AUTOSUSPEND_DELAY_MS); + pm_runtime_set_autosuspend_delay(&sdev->sdev_gendev, dly); scsi_autopm_put_device(sdev); } @@ -7254,6 +7291,15 @@ static int ufshcd_scsi_add_wlus(struct ufs_hba *hba) goto out; } ufshcd_blk_pm_runtime_init(hba->sdev_ufs_device); + /* + * A pm_runtime_put_sync is invoked when this device enables blk_pm_runtime + * & would suspend the device-wlun upon timer expiry. + * But suspending device wlun _may_ put the ufs device in the pre-defined + * low power mode (SSU ). Probing of other luns may fail then. + * Don't allow this suspend until all the luns have been probed. 
+ * See pm_runtime_mark_last_busy in ufshcd_setup_links. + */ + pm_runtime_get_noresume(&hba->sdev_ufs_device->sdev_gendev); scsi_device_put(hba->sdev_ufs_device); hba->sdev_rpmb = __scsi_add_device(hba->host, 0, 0, @@ -7417,6 +7463,9 @@ static int ufs_get_device_desc(struct ufs_hba *hba) goto out; } + hba->luns_avail = desc_buf[DEVICE_DESC_PARAM_NUM_LU] + + desc_buf[DEVICE_DESC_PARAM_NUM_WLU]; + ufs_fixup_device_setup(hba); ufshcd_wb_probe(hba, desc_buf); @@ -7892,6 +7941,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool async) ufshcd_set_ufs_dev_active(hba); ufshcd_force_reset_auto_bkops(hba); hba->wlun_dev_clr_ua = true; + hba->wlun_rpmb_clr_ua = true; /* Gear up to HS gear if supported */ if (hba->max_pwr_info.is_valid) { @@ -7964,6 +8014,7 @@ static void ufshcd_async_scan(void *data, async_cookie_t cookie) pm_runtime_put_sync(hba->dev); ufshcd_hba_exit(hba); } + hba->init_done = true; } static const struct attribute_group *ufshcd_driver_groups[] = { @@ -8475,7 +8526,8 @@ static int ufshcd_set_dev_pwr_mode(struct ufs_hba *hba, * handling context. */ hba->host->eh_noresume = 1; - ufshcd_clear_ua_wluns(hba); + if (hba->wlun_dev_clr_ua) + ufshcd_clear_ua_wlun(hba, UFS_UPIU_UFS_DEVICE_WLUN); cmd[4] = pwr_mode << 4; @@ -8650,31 +8702,21 @@ static void ufshcd_hba_vreg_set_hpm(struct ufs_hba *hba) ufshcd_setup_hba_vreg(hba, true); } -/** - * ufshcd_suspend - helper function for suspend operations - * @hba: per adapter instance - * @pm_op: desired low power operation type - * - * This function will try to put the UFS device and link into low power - * mode based on the "rpm_lvl" (Runtime PM level) or "spm_lvl" - * (System PM level). - * - * If this function is called during shutdown, it will make sure that - * both UFS device and UFS link is powered off. - * - * NOTE: UFS device & link must be active before we enter in this function. - * - * Returns 0 for success and non-zero for failure - */ -static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) +static int __ufshcd_wl_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) { int ret = 0; - int check_for_bkops; enum ufs_pm_level pm_lvl; enum ufs_dev_pwr_mode req_dev_pwr_mode; enum uic_link_state req_link_state; + int check_for_bkops; - hba->pm_op_in_progress = 1; + /* + * Is invoked when the device wlun is added to sysfs + * But by then hba->sdev_ufs_device may not be initialized. + */ + if (!hba->init_done) + return 0; + hba->pm_op_in_progress = true; if (!ufshcd_is_shutdown_pm(pm_op)) { pm_lvl = ufshcd_is_runtime_pm(pm_op) ? 
hba->rpm_lvl : hba->spm_lvl; @@ -8691,23 +8733,22 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) */ ufshcd_hold(hba, false); hba->clk_gating.is_suspended = true; - if (ufshcd_is_clkscaling_supported(hba)) ufshcd_clk_scaling_suspend(hba, true); if (req_dev_pwr_mode == UFS_ACTIVE_PWR_MODE && req_link_state == UIC_LINK_ACTIVE_STATE) { - goto disable_clks; + goto enable_scaling; } if ((req_dev_pwr_mode == hba->curr_dev_pwr_mode) && (req_link_state == hba->uic_link_state)) - goto enable_gating; + goto enable_scaling; /* UFS device & link must be active before we enter in this function */ if (!ufshcd_is_ufs_dev_active(hba) || !ufshcd_is_link_active(hba)) { ret = -EINVAL; - goto enable_gating; + goto enable_scaling; } if (ufshcd_is_runtime_pm(pm_op)) { @@ -8719,7 +8760,7 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) */ ret = ufshcd_urgent_bkops(hba); if (ret) - goto enable_gating; + goto enable_scaling; } else { /* make sure that auto bkops is disabled */ ufshcd_disable_auto_bkops(hba); @@ -8747,20 +8788,231 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) if (!hba->dev_info.b_rpm_dev_flush_capable) { ret = ufshcd_set_dev_pwr_mode(hba, req_dev_pwr_mode); if (ret) - goto enable_gating; + goto enable_scaling; } } - /* * In the case of DeepSleep, the device is expected to remain powered * with the link off, so do not check for bkops. */ check_for_bkops = !ufshcd_is_ufs_dev_deepsleep(hba); ret = ufshcd_link_state_transition(hba, req_link_state, check_for_bkops); + if (!ret) + goto out; + + if (!ufshcd_set_dev_pwr_mode(hba, UFS_ACTIVE_PWR_MODE)) + ufshcd_disable_auto_bkops(hba); +enable_scaling: + if (ufshcd_is_clkscaling_supported(hba)) + ufshcd_clk_scaling_suspend(hba, false); + + hba->dev_info.b_rpm_dev_flush_capable = false; +out: + if (hba->dev_info.b_rpm_dev_flush_capable) { + schedule_delayed_work(&hba->rpm_dev_flush_recheck_work, + msecs_to_jiffies(RPM_DEV_FLUSH_RECHECK_WORK_DELAY_MS)); + } + if (ret) { + ufshcd_update_evt_hist(hba, UFS_EVT_WL_SUSP_ERR, (u32)ret); + hba->clk_gating.is_suspended = false; + ufshcd_release(hba); + } + hba->pm_op_in_progress = false; + return ret; +} + +static int __ufshcd_wl_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) +{ + int ret = 0; + + if (!hba->init_done) + return 0; + + hba->pm_op_in_progress = true; + + /* For DeepSleep, the only supported option is to have the link off */ + WARN_ON(ufshcd_is_ufs_dev_deepsleep(hba) && !ufshcd_is_link_off(hba)); + + if (ufshcd_is_link_hibern8(hba)) { + ret = ufshcd_uic_hibern8_exit(hba); + if (!ret) { + ufshcd_set_link_active(hba); + } else { + dev_err(hba->dev, "%s: hibern8 exit failed %d\n", + __func__, ret); + goto out; + } + } else if (ufshcd_is_link_off(hba)) { + /* + * A full initialization of the host and the device is + * required since the link was put to off during suspend. + * Note, in the case of DeepSleep, the device will exit + * DeepSleep due to device reset. + */ + ret = ufshcd_reset_and_restore(hba); + /* + * ufshcd_reset_and_restore() should have already + * set the link state as active + */ + if (ret || !ufshcd_is_link_active(hba)) + goto out; + } + + if (!ufshcd_is_ufs_dev_active(hba)) { + ret = ufshcd_set_dev_pwr_mode(hba, UFS_ACTIVE_PWR_MODE); + if (ret) + goto out; + } + + if (ufshcd_keep_autobkops_enabled_except_suspend(hba)) + ufshcd_enable_auto_bkops(hba); + else + /* + * If BKOPs operations are urgently needed at this moment then + * keep auto-bkops enabled or else disable it. 
+ */ + ufshcd_urgent_bkops(hba); + + if (hba->clk_scaling.is_allowed) + ufshcd_resume_clkscaling(hba); + + if (hba->dev_info.b_rpm_dev_flush_capable) { + hba->dev_info.b_rpm_dev_flush_capable = false; + cancel_delayed_work(&hba->rpm_dev_flush_recheck_work); + } + + /* Enable Auto-Hibernate if configured */ + ufshcd_auto_hibern8_enable(hba); + +out: + if (ret) + ufshcd_update_evt_hist(hba, UFS_EVT_WL_RES_ERR, (u32)ret); + hba->clk_gating.is_suspended = false; + ufshcd_release(hba); + hba->pm_op_in_progress = false; + return ret; +} + +static int ufshcd_wl_runtime_suspend(struct device *dev) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufs_hba *hba; + int ret; + ktime_t start = ktime_get(); + + hba = shost_priv(sdev->host); + + ret = __ufshcd_wl_suspend(hba, UFS_RUNTIME_PM); if (ret) - goto set_dev_active; + dev_err(&sdev->sdev_gendev, "%s failed: %d\n", __func__, ret); + + trace_ufshcd_wl_runtime_suspend(dev_name(dev), ret, + ktime_to_us(ktime_sub(ktime_get(), start)), + hba->curr_dev_pwr_mode, hba->uic_link_state); + + return ret; +} + +static int ufshcd_wl_runtime_resume(struct device *dev) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufs_hba *hba; + int ret = 0; + ktime_t start = ktime_get(); + + hba = shost_priv(sdev->host); + + ret = __ufshcd_wl_resume(hba, UFS_RUNTIME_PM); + if (ret) + dev_err(&sdev->sdev_gendev, "%s failed: %d\n", __func__, ret); + + trace_ufshcd_wl_runtime_resume(dev_name(dev), ret, + ktime_to_us(ktime_sub(ktime_get(), start)), + hba->curr_dev_pwr_mode, hba->uic_link_state); + + return ret; +} + +#ifdef CONFIG_PM_SLEEP +static int ufshcd_wl_suspend(struct device *dev) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufs_hba *hba; + int ret; + ktime_t start = ktime_get(); + + hba = shost_priv(sdev->host); + ret = __ufshcd_wl_suspend(hba, UFS_SYSTEM_PM); + if (ret) + dev_err(&sdev->sdev_gendev, "%s failed: %d\n", __func__, ret); + + trace_ufshcd_wl_suspend(dev_name(dev), ret, + ktime_to_us(ktime_sub(ktime_get(), start)), + hba->curr_dev_pwr_mode, hba->uic_link_state); + + return ret; +} + +static int ufshcd_wl_resume(struct device *dev) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufs_hba *hba; + int ret = 0; + ktime_t start = ktime_get(); + + if (pm_runtime_suspended(dev)) + return 0; + hba = shost_priv(sdev->host); + + ret = __ufshcd_wl_resume(hba, UFS_SYSTEM_PM); + if (ret) + dev_err(&sdev->sdev_gendev, "%s failed: %d\n", __func__, ret); + + trace_ufshcd_wl_resume(dev_name(dev), ret, + ktime_to_us(ktime_sub(ktime_get(), start)), + hba->curr_dev_pwr_mode, hba->uic_link_state); + + return ret; +} +#endif + +static void ufshcd_wl_shutdown(struct device *dev) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufs_hba *hba; + + hba = shost_priv(sdev->host); + /* Turn on everything while shutting down */ + scsi_autopm_get_device(sdev); + scsi_device_quiesce(sdev); + shost_for_each_device(sdev, hba->host) { + if (sdev == hba->sdev_ufs_device) + continue; + scsi_device_quiesce(sdev); + } + __ufshcd_wl_suspend(hba, UFS_SHUTDOWN_PM); +} + +/** + * ufshcd_suspend - helper function for suspend operations + * @hba: per adapter instance + * @pm_op: desired low power operation type + * + * This function will try to put the UFS link into low power + * mode based on the "rpm_lvl" (Runtime PM level) or "spm_lvl" + * (System PM level). + * + * If this function is called during shutdown, it will make sure that + * link is powered off. 
+ * + * Returns 0 for success and non-zero for failure + */ +static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) +{ + int ret = 0; + + hba->pm_op_in_progress = 1; -disable_clks: /* * Call vendor specific suspend callback. As these callbacks may access * vendor specific host controller register space call them before the @@ -8768,7 +9020,7 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) */ ret = ufshcd_vops_suspend(hba, pm_op); if (ret) - goto set_link_active; + goto out; /* * Disable the host irq as host controller as there won't be any * host controller transaction expected till resume. @@ -8787,45 +9039,8 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) /* Put the host controller in low power mode if possible */ ufshcd_hba_vreg_set_lpm(hba); - goto out; - -set_link_active: - ufshcd_vreg_set_hpm(hba); - /* - * Device hardware reset is required to exit DeepSleep. Also, for - * DeepSleep, the link is off so host reset and restore will be done - * further below. - */ - if (ufshcd_is_ufs_dev_deepsleep(hba)) { - ufshcd_device_reset(hba); - WARN_ON(!ufshcd_is_link_off(hba)); - } - if (ufshcd_is_link_hibern8(hba) && !ufshcd_uic_hibern8_exit(hba)) - ufshcd_set_link_active(hba); - else if (ufshcd_is_link_off(hba)) - ufshcd_host_reset_and_restore(hba); -set_dev_active: - /* Can also get here needing to exit DeepSleep */ - if (ufshcd_is_ufs_dev_deepsleep(hba)) { - ufshcd_device_reset(hba); - ufshcd_host_reset_and_restore(hba); - } - if (!ufshcd_set_dev_pwr_mode(hba, UFS_ACTIVE_PWR_MODE)) - ufshcd_disable_auto_bkops(hba); -enable_gating: - if (ufshcd_is_clkscaling_supported(hba)) - ufshcd_clk_scaling_suspend(hba, false); - hba->clk_gating.is_suspended = false; - hba->dev_info.b_rpm_dev_flush_capable = false; - ufshcd_clear_ua_wluns(hba); - ufshcd_release(hba); out: - if (hba->dev_info.b_rpm_dev_flush_capable) { - schedule_delayed_work(&hba->rpm_dev_flush_recheck_work, - msecs_to_jiffies(RPM_DEV_FLUSH_RECHECK_WORK_DELAY_MS)); - } - hba->pm_op_in_progress = 0; if (ret) @@ -8846,10 +9061,8 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) { int ret; - enum uic_link_state old_link_state; hba->pm_op_in_progress = 1; - old_link_state = hba->uic_link_state; ufshcd_hba_vreg_set_hpm(hba); ret = ufshcd_vreg_set_hpm(hba); @@ -8873,73 +9086,8 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) if (ret) goto disable_irq_and_vops_clks; - /* For DeepSleep, the only supported option is to have the link off */ - WARN_ON(ufshcd_is_ufs_dev_deepsleep(hba) && !ufshcd_is_link_off(hba)); - - if (ufshcd_is_link_hibern8(hba)) { - ret = ufshcd_uic_hibern8_exit(hba); - if (!ret) { - ufshcd_set_link_active(hba); - } else { - dev_err(hba->dev, "%s: hibern8 exit failed %d\n", - __func__, ret); - goto vendor_suspend; - } - } else if (ufshcd_is_link_off(hba)) { - /* - * A full initialization of the host and the device is - * required since the link was put to off during suspend. - * Note, in the case of DeepSleep, the device will exit - * DeepSleep due to device reset. 
- */ - ret = ufshcd_reset_and_restore(hba); - /* - * ufshcd_reset_and_restore() should have already - * set the link state as active - */ - if (ret || !ufshcd_is_link_active(hba)) - goto vendor_suspend; - } - - if (!ufshcd_is_ufs_dev_active(hba)) { - ret = ufshcd_set_dev_pwr_mode(hba, UFS_ACTIVE_PWR_MODE); - if (ret) - goto set_old_link_state; - } - - if (ufshcd_keep_autobkops_enabled_except_suspend(hba)) - ufshcd_enable_auto_bkops(hba); - else - /* - * If BKOPs operations are urgently needed at this moment then - * keep auto-bkops enabled or else disable it. - */ - ufshcd_urgent_bkops(hba); - - hba->clk_gating.is_suspended = false; - - if (ufshcd_is_clkscaling_supported(hba)) - ufshcd_clk_scaling_suspend(hba, false); - - /* Enable Auto-Hibernate if configured */ - ufshcd_auto_hibern8_enable(hba); - - if (hba->dev_info.b_rpm_dev_flush_capable) { - hba->dev_info.b_rpm_dev_flush_capable = false; - cancel_delayed_work(&hba->rpm_dev_flush_recheck_work); - } - - ufshcd_clear_ua_wluns(hba); - - /* Schedule clock gating in case of no access to UFS device yet */ - ufshcd_release(hba); - goto out; -set_old_link_state: - ufshcd_link_state_transition(hba, old_link_state, 0); -vendor_suspend: - ufshcd_vops_suspend(hba, pm_op); disable_irq_and_vops_clks: ufshcd_disable_irq(hba); ufshcd_setup_clocks(hba, false); @@ -9477,15 +9625,160 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq) } EXPORT_SYMBOL_GPL(ufshcd_init); +void ufshcd_resume_complete(struct device *dev) +{ + struct ufs_hba *hba = dev_get_drvdata(dev); + + pm_runtime_put_noidle(&hba->sdev_ufs_device->sdev_gendev); +} +EXPORT_SYMBOL_GPL(ufshcd_resume_complete); + +int ufshcd_suspend_prepare(struct device *dev) +{ + struct ufs_hba *hba = dev_get_drvdata(dev); + + /* + * SCSI assumes that runtime-pm and system-pm for scsi drivers + * are same. And it doesn't wake up the device for system-suspend + * if it's runtime suspended. But ufs doesn't follow that. + * The rpm-lvl and spm-lvl can be different in ufs. + * Force it to honor system-suspend. + */ + scsi_autopm_get_device(hba->sdev_ufs_device); + /* Refer ufshcd_resume_complete() */ + pm_runtime_get_noresume(&hba->sdev_ufs_device->sdev_gendev); + scsi_autopm_put_device(hba->sdev_ufs_device); + return 0; +} +EXPORT_SYMBOL_GPL(ufshcd_suspend_prepare); + +#ifdef CONFIG_PM_SLEEP +static int ufshcd_wl_poweroff(struct device *dev) +{ + ufshcd_wl_shutdown(dev); + return 0; +} +#endif + +static int ufshcd_wl_probe(struct device *dev) +{ + return is_device_wlun(to_scsi_device(dev)) ? 0 : -ENODEV; +} + +static int ufshcd_wl_remove(struct device *dev) +{ + return 0; +} + +static const struct dev_pm_ops ufshcd_wl_pm_ops = { +#ifdef CONFIG_PM_SLEEP + .suspend = ufshcd_wl_suspend, + .resume = ufshcd_wl_resume, + .freeze = ufshcd_wl_suspend, + .thaw = ufshcd_wl_resume, + .poweroff = ufshcd_wl_poweroff, + .restore = ufshcd_wl_resume, +#endif + SET_RUNTIME_PM_OPS(ufshcd_wl_runtime_suspend, ufshcd_wl_runtime_resume, NULL) +}; + +/** + * ufs_dev_wlun_template - describes ufs device wlun + * ufs-device wlun - used to send pm commands + * All luns are consumers of ufs-device wlun. + * + * Currently, no sd driver is present for wluns. + * Hence the no specific pm operations are performed. + * With ufs design, SSU should be sent to ufs-device wlun. + * Hence register a scsi driver for ufs wluns only. 
+ */ +static struct scsi_driver ufs_dev_wlun_template = { + .gendrv = { + .name = "ufs_device_wlun", + .owner = THIS_MODULE, + .probe = ufshcd_wl_probe, + .remove = ufshcd_wl_remove, + .pm = &ufshcd_wl_pm_ops, + .shutdown = ufshcd_wl_shutdown, + }, +}; + +static int ufshcd_rpmb_probe(struct device *dev) +{ + return is_rpmb_wlun(to_scsi_device(dev)) ? 0 : -ENODEV; +} + +static inline int ufshcd_clear_rpmb_uac(struct ufs_hba *hba) +{ + int ret = 0; + + if (!hba->wlun_rpmb_clr_ua) + return 0; + ret = ufshcd_clear_ua_wlun(hba, UFS_UPIU_RPMB_WLUN); + if (!ret) + hba->wlun_rpmb_clr_ua = 0; + return ret; +} + +static int ufshcd_rpmb_runtime_resume(struct device *dev) +{ + struct ufs_hba *hba = wlun_dev_to_hba(dev); + + if (hba->sdev_rpmb) + return ufshcd_clear_rpmb_uac(hba); + return 0; +} + +static int ufshcd_rpmb_resume(struct device *dev) +{ + struct ufs_hba *hba = wlun_dev_to_hba(dev); + + if (hba->sdev_rpmb && !pm_runtime_suspended(dev)) + return ufshcd_clear_rpmb_uac(hba); + return 0; +} + +static const struct dev_pm_ops ufs_rpmb_pm_ops = { + SET_RUNTIME_PM_OPS(NULL, ufshcd_rpmb_runtime_resume, NULL) + SET_SYSTEM_SLEEP_PM_OPS(NULL, ufshcd_rpmb_resume) +}; + +/** + * Describes the ufs rpmb wlun. + * Used only to send uac. + */ +static struct scsi_driver ufs_rpmb_wlun_template = { + .gendrv = { + .name = "ufs_rpmb_wlun", + .owner = THIS_MODULE, + .probe = ufshcd_rpmb_probe, + .pm = &ufs_rpmb_pm_ops, + }, +}; + static int __init ufshcd_core_init(void) { + int ret; + ufs_debugfs_init(); + + ret = scsi_register_driver(&ufs_dev_wlun_template.gendrv); + if (ret) + return ret; + + ret = scsi_register_driver(&ufs_rpmb_wlun_template.gendrv); + if (ret) { + scsi_unregister_driver(&ufs_dev_wlun_template.gendrv); + return ret; + } return 0; } static void __exit ufshcd_core_exit(void) { ufs_debugfs_exit(); + scsi_unregister_driver(&ufs_dev_wlun_template.gendrv); + scsi_unregister_driver(&ufs_rpmb_wlun_template.gendrv); } module_init(ufshcd_core_init); diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index ee61f82..e246ec3 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -72,6 +72,8 @@ enum ufs_event_type { UFS_EVT_LINK_STARTUP_FAIL, UFS_EVT_RESUME_ERR, UFS_EVT_SUSPEND_ERR, + UFS_EVT_WL_SUSP_ERR, + UFS_EVT_WL_RES_ERR, /* abnormal events */ UFS_EVT_DEV_RESET, @@ -804,6 +806,7 @@ struct ufs_hba { struct list_head clk_list_head; bool wlun_dev_clr_ua; + bool wlun_rpmb_clr_ua; /* Number of requests aborts */ int req_abort_count; @@ -841,6 +844,8 @@ struct ufs_hba { #ifdef CONFIG_DEBUG_FS struct dentry *debugfs_root; #endif + bool init_done; + u32 luns_avail; }; /* Returns true if clocks can be gated. 
Otherwise false */ @@ -1100,6 +1105,8 @@ int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba, enum query_opcode desc_op); int ufshcd_wb_ctrl(struct ufs_hba *hba, bool enable); +int ufshcd_suspend_prepare(struct device *dev); +void ufshcd_resume_complete(struct device *dev); /* Wrapper functions for safely calling variant operations */ static inline const char *ufshcd_get_var_name(struct ufs_hba *hba) diff --git a/include/trace/events/ufs.h b/include/trace/events/ufs.h index e151477..d9d233b 100644 --- a/include/trace/events/ufs.h +++ b/include/trace/events/ufs.h @@ -246,6 +246,26 @@ DEFINE_EVENT(ufshcd_template, ufshcd_init, int dev_state, int link_state), TP_ARGS(dev_name, err, usecs, dev_state, link_state)); +DEFINE_EVENT(ufshcd_template, ufshcd_wl_suspend, + TP_PROTO(const char *dev_name, int err, s64 usecs, + int dev_state, int link_state), + TP_ARGS(dev_name, err, usecs, dev_state, link_state)); + +DEFINE_EVENT(ufshcd_template, ufshcd_wl_resume, + TP_PROTO(const char *dev_name, int err, s64 usecs, + int dev_state, int link_state), + TP_ARGS(dev_name, err, usecs, dev_state, link_state)); + +DEFINE_EVENT(ufshcd_template, ufshcd_wl_runtime_suspend, + TP_PROTO(const char *dev_name, int err, s64 usecs, + int dev_state, int link_state), + TP_ARGS(dev_name, err, usecs, dev_state, link_state)); + +DEFINE_EVENT(ufshcd_template, ufshcd_wl_runtime_resume, + TP_PROTO(const char *dev_name, int err, s64 usecs, + int dev_state, int link_state), + TP_ARGS(dev_name, err, usecs, dev_state, link_state)); + TRACE_EVENT(ufshcd_command, TP_PROTO(const char *dev_name, enum ufs_trace_str_t str_t, unsigned int tag, u32 doorbell, int transfer_len, u32 intr, From patchwork Tue Mar 2 03:21:31 2021 X-Patchwork-Submitter: "Asutosh Das \(asd\)" X-Patchwork-Id: 390704
From: Asutosh Das To: cang@codeaurora.org, martin.petersen@oracle.com, adrian.hunter@intel.com, linux-scsi@vger.kernel.org Cc: Asutosh Das , linux-arm-msm@vger.kernel.org, Alim Akhtar , Avri Altman , "James E.J. Bottomley" , Stanley Chu , Bean Huo , "Bao D. Nguyen" , Nitin Rawat , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v9 2/2] ufs: sysfs: Resume the proper scsi device Date: Mon, 1 Mar 2021 19:21:31 -0800 Message-Id: <64ed829d1e18cc1721a06a46b015dfc0bc68abd7.1614655058.git.asutoshd@codeaurora.org> Resume the actual SCSI device whose unit descriptor is being accessed, instead of resuming only the hba. Signed-off-by: Asutosh Das --- drivers/scsi/ufs/ufs-sysfs.c | 26 +++++++++++++++----------- 1 file changed, 15 insertions(+), 11 deletions(-) diff --git a/drivers/scsi/ufs/ufs-sysfs.c b/drivers/scsi/ufs/ufs-sysfs.c index acc54f5..34481e3 100644 --- a/drivers/scsi/ufs/ufs-sysfs.c +++ b/drivers/scsi/ufs/ufs-sysfs.c @@ -297,10 +297,10 @@ static ssize_t ufs_sysfs_read_desc_param(struct ufs_hba *hba, goto out; } - pm_runtime_get_sync(hba->dev); + scsi_autopm_get_device(hba->sdev_ufs_device); ret = ufshcd_read_desc_param(hba, desc_id, desc_index, param_offset, desc_buf, param_size); - pm_runtime_put_sync(hba->dev); + scsi_autopm_put_device(hba->sdev_ufs_device); if (ret) { ret = -EINVAL; goto out; @@ -678,7 +678,7 @@ static ssize_t _name##_show(struct device *dev, \ up(&hba->host_sem); \ return -ENOMEM; \ } \ - pm_runtime_get_sync(hba->dev); \ + scsi_autopm_get_device(hba->sdev_ufs_device); \ ret = ufshcd_query_descriptor_retry(hba, \ UPIU_QUERY_OPCODE_READ_DESC, QUERY_DESC_IDN_DEVICE, \ 0, 0, desc_buf, &desc_len); \ @@ -695,7 +695,7 @@ static ssize_t _name##_show(struct device *dev, \ goto out; \ ret = sysfs_emit(buf, "%s\n", desc_buf); \ out: \ - pm_runtime_put_sync(hba->dev); \ + scsi_autopm_put_device(hba->sdev_ufs_device); \ kfree(desc_buf); \ up(&hba->host_sem); \ return ret; \ @@ -744,10 +744,10 @@ static ssize_t _name##_show(struct device *dev, \ } \ if (ufshcd_is_wb_flags(QUERY_FLAG_IDN##_uname)) \ index = ufshcd_wb_get_query_index(hba); \ - pm_runtime_get_sync(hba->dev); \ + scsi_autopm_get_device(hba->sdev_ufs_device); \ ret = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_READ_FLAG, \ QUERY_FLAG_IDN##_uname, index, &flag); \ - pm_runtime_put_sync(hba->dev); \ + scsi_autopm_put_device(hba->sdev_ufs_device); \ if (ret) { \ ret = -EINVAL; \ goto out; \ @@ -813,10 +813,10 @@ static ssize_t _name##_show(struct device *dev, \ } \ if (ufshcd_is_wb_attrs(QUERY_ATTR_IDN##_uname)) \ index = ufshcd_wb_get_query_index(hba); \ - pm_runtime_get_sync(hba->dev); \ + scsi_autopm_get_device(hba->sdev_ufs_device); \ ret = ufshcd_query_attr(hba, UPIU_QUERY_OPCODE_READ_ATTR, \ QUERY_ATTR_IDN##_uname, index, 0, &value); \ - pm_runtime_put_sync(hba->dev); \ + scsi_autopm_put_device(hba->sdev_ufs_device); \ if (ret) { \ ret = -EINVAL; \ goto out; \ @@ -899,11 +899,15 @@ static ssize_t _pname##_show(struct device *dev, \ struct scsi_device *sdev = to_scsi_device(dev); \ struct ufs_hba *hba = shost_priv(sdev->host); \ u8 lun = ufshcd_scsi_to_upiu_lun(sdev->lun); \ + int ret; \ if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun, \ _duname##_DESC_PARAM##_puname)) \ return
-EINVAL; \ - return ufs_sysfs_read_desc_param(hba, QUERY_DESC_IDN_##_duname, \ + scsi_autopm_get_device(sdev); \ + ret = ufs_sysfs_read_desc_param(hba, QUERY_DESC_IDN_##_duname, \ lun, _duname##_DESC_PARAM##_puname, buf, _size); \ + scsi_autopm_put_device(sdev); \ + return ret; \ } \ static DEVICE_ATTR_RO(_pname) @@ -964,10 +968,10 @@ static ssize_t dyn_cap_needed_attribute_show(struct device *dev, goto out; } - pm_runtime_get_sync(hba->dev); + scsi_autopm_get_device(hba->sdev_ufs_device); ret = ufshcd_query_attr(hba, UPIU_QUERY_OPCODE_READ_ATTR, QUERY_ATTR_IDN_DYN_CAP_NEEDED, lun, 0, &value); - pm_runtime_put_sync(hba->dev); + scsi_autopm_put_device(hba->sdev_ufs_device); if (ret) { ret = -EINVAL; goto out;