From patchwork Wed Feb 17 09:12:46 2021
X-Patchwork-Submitter: Daejun Park
X-Patchwork-Id: 384217
Subject: [PATCH v20 1/4] scsi: ufs: Introduce HPB feature
From: Daejun Park <daejun7.park@samsung.com>
Reply-To: daejun7.park@samsung.com
To: Daejun Park, Greg KH, avri.altman@wdc.com, jejb@linux.ibm.com,
 martin.petersen@oracle.com, asutoshd@codeaurora.org, stanley.chu@mediatek.com,
 cang@codeaurora.org, bvanassche@acm.org, huobean@gmail.com, ALIM AKHTAR,
 Javier Gonzalez
CC: linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, JinHwan Park,
 SEUNGUK SHIN, Sung-Jun Park, yongmyung lee, Jinyoung CHOI, BoRam Shin
In-Reply-To: <20210217090853epcms2p17db2903a3a0c1a13e4ee071b9a39dbc8@epcms2p1>
Message-ID: <20210217091246epcms2p8f64bf2ff9bffb4eb4d3e86695515201c@epcms2p8>
Date: Wed, 17 Feb 2021 18:12:46 +0900
zv/J8/zPyUNgYX28aCLXYKIZgzqP4gbj9sEEhXDzdFK2eHEoAs402Lmw95MhHrzh+ZULB92L PFi75MHgHes5DrwxkABbZt6ER5qtXFg/YkZhZbWNC69NLfNgk9OOwuq1UhyOdddz4bEJBxee /24Nhe7OYHjWNonAT0+24bDp8x58T6Ry7EqGcqyqElVesPzGU9Y09SPKvtNtPGXJcB+uvD3n wpVVnS2IcrnjSWVp/zFUFZxpRnapGY0u9wDNpw2a/JxcgzaNen3/i0JI8XX5rCmNelsCpSJJ qlwkSxVJd777rEQslsopvkGtp9Oog8JANcVnNEav20SzJobW0F7E7GFNai0tYtV6ttCgFWny 9RT/gDqv0FtHJe/epaPVOTTDz76O6DxXXByj+1v0oLPrBM+MlFei5UgQAUgZ+KpklevTYaQD AYs/asoRggghQ8F9R7gPh5OpoGLNzfFbKGD9xcLzcxFwzbYhPs0lk8DJoateHkxEkB04mD5T xvEdMPIeCoauLSH+ZiGgrnQO9+sY0HXets6DyFfAbbsn4IkHq+cqMb+OBJOtC7wNfetSY8AT AY5OjwQ8oWDG0xPgW8GlnqVAsEPANnUP8Q0ByAoEDF5wcfwXyWC8rH19iBByH6j9vnFd4+TT YOL3VY4vPSDTgXsmy4cxcjvoWqjHfBgjE4C1O9nvEIBvXPhGKnP7X7z/aox8HJQN3v+XOxqu ByaLA196rGgNIrA8fGnLI70sD3udQbAWJIo2snotzUqNskd/ugNZX4JEpQM5tbAkGkBQAhlA AIFRESG85cTssJAc9YdFNJOfxRTm0ewAUuQNeRyLjtTke7fIYMqSKMRShTxFlpIil8n/N5ZL FQpxqhzKFVJIbQlhxDNZYaRWbaLfp2kjzWw0R4mgaDPaMJqBSN/YzTTO3hSYO0e1rJ4W7Xyg invuA9UOQYGmRid0Fh+2PdM0fEK0rfvqSxGdO+Y1PwkLPnOMb9b37rHg28abbwqqzCvh8Y7i P2rjZJ5MdwX8UzL+clCGxxl1d6rAvmLGi9emaA6ybH5rS2Xy8Mc2q5sZODxya9OC6gHzWHj/ yjsGdZbAKZ9snyO6jscM911mIl3CUXPeR3uPYK/+/d7sSO0LEhY1Xvyh5Ozpiure7S2hGTXN 5LwxoT4p/WIq2MqNVZVFkfCpCWnZpvj9qta9h2xHiViOXhxzSkvzK+68dveJ+cXMr38m6va1 Nn7R/7yziFunr9Klx9qTLlM4q1NLEjGGVf8DaVipl9IEAAA= DLP-Filter: Pass X-CFilter-Loop: Reflected X-CMS-RootMailID: 20210217090853epcms2p17db2903a3a0c1a13e4ee071b9a39dbc8 References: <20210217090853epcms2p17db2903a3a0c1a13e4ee071b9a39dbc8@epcms2p1> Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This is a patch for the HPB initialization and adds HPB function calls to UFS core driver. NAND flash-based storage devices, including UFS, have mechanisms to translate logical addresses of IO requests to the corresponding physical addresses of the flash storage. 
In UFS, Logical-address-to-Physical-address (L2P) map data, which is required
to identify the physical address for the requested IOs, can only be partially
loaded into SRAM from NAND flash. Because of this partial loading, accessing a
flash address whose L2P information is not currently in SRAM can cause serious
performance degradation.

The basic concept of HPB is to cache L2P mapping entries in host system memory
so that both the physical block address (PBA) and the logical block address
(LBA) can be delivered in an HPB READ command. The HPB READ command allows
data to be read faster than a regular UFS READ command because it provides the
physical address (HPB Entry) of the desired logical block in addition to its
logical address. The UFS device can then access the physical block in NAND
directly, without first searching and uploading the L2P mapping table; removing
that extra NAND read operation is what improves read performance.

During HPB initialization, the host checks whether the UFS device supports the
HPB feature and retrieves the related device capabilities. Some HPB parameters
are then configured in the device.

We measured the total start-up time of popular applications and observed the
difference made by enabling HPB. The applications were 12 game apps and 24
non-game apps, launched in order; one cycle consists of running all 36
applications in sequence. We repeated the cycle to observe the performance
improvement from L2P mapping cache hits in HPB.

The experiment environment was:
 - kernel version: 4.4.0
 - RAM: 8GB
 - UFS 2.1 (64GB)

Result:
+-------+----------+----------+-------+
| cycle | baseline | with HPB | diff  |
+-------+----------+----------+-------+
|   1   |  272.4   |  264.9   |  -7.5 |
|   2   |  250.4   |  248.2   |  -2.2 |
|   3   |  226.2   |  215.6   | -10.6 |
|   4   |  230.6   |  214.8   | -15.8 |
|   5   |  232.0   |  218.1   | -13.9 |
|   6   |  231.9   |  212.6   | -19.3 |
+-------+----------+----------+-------+

We also measured HPB performance using iozone.
Here is my iozone script:

iozone -r 4k -+n -i2 -ecI -t 16 -l 16 -u 16 -s $IO_RANGE/16 -F \
	mnt/tmp_1 mnt/tmp_2 mnt/tmp_3 mnt/tmp_4 mnt/tmp_5 mnt/tmp_6 \
	mnt/tmp_7 mnt/tmp_8 mnt/tmp_9 mnt/tmp_10 mnt/tmp_11 mnt/tmp_12 \
	mnt/tmp_13 mnt/tmp_14 mnt/tmp_15 mnt/tmp_16

Result:
+----------+--------+---------+
| IO range | HPB on | HPB off |
+----------+--------+---------+
|   1 GB   | 294.8  | 300.87  |
|   4 GB   | 293.51 | 179.35  |
|   8 GB   | 294.85 | 162.52  |
|  16 GB   | 293.45 | 156.26  |
|  32 GB   | 277.4  | 153.25  |
+----------+--------+---------+

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Can Guo <cang@codeaurora.org>
Acked-by: Avri Altman <avri.altman@wdc.com>
Tested-by: Bean Huo <huobean@gmail.com>
Reported-by: kernel test robot
Signed-off-by: Daejun Park <daejun7.park@samsung.com>
---
 Documentation/ABI/testing/sysfs-driver-ufs | 127 +++++
 drivers/scsi/ufs/Kconfig                   |   9 +
 drivers/scsi/ufs/Makefile                  |   1 +
 drivers/scsi/ufs/ufs-sysfs.c               |  18 +
 drivers/scsi/ufs/ufs.h                     |  15 +
 drivers/scsi/ufs/ufshcd.c                  |  49 ++
 drivers/scsi/ufs/ufshcd.h                  |  22 +
 drivers/scsi/ufs/ufshpb.c                  | 570 +++++++++++++++++++++
 drivers/scsi/ufs/ufshpb.h                  | 166 ++++++
 9 files changed, 977 insertions(+)
 create mode 100644 drivers/scsi/ufs/ufshpb.c
 create mode 100644 drivers/scsi/ufs/ufshpb.h

diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs
index d1bc23cb6a9d..bf5cb8846de1 100644
--- a/Documentation/ABI/testing/sysfs-driver-ufs
+++ b/Documentation/ABI/testing/sysfs-driver-ufs
@@ -1172,3 +1172,130 @@ Description:	This node is used to set or display whether UFS WriteBooster is
 		(if the platform supports UFSHCD_CAP_CLK_SCALING). For a
 		platform that doesn't support UFSHCD_CAP_CLK_SCALING, we can
 		disable/enable WriteBooster through this sysfs node.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/hpb_version
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the HPB specification version.
+		The full information about the descriptor could be found at UFS
+		HPB (Host Performance Booster) Extension specifications.
+		Example: version 1.2.3 = 0123h
+
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/hpb_control
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows an indication of the HPB control mode.
+		00h: Host control mode
+		01h: Device control mode
+
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_region_size
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the bHPBRegionSize which can be calculated
+		as in the following (in bytes):
+		HPB Region size = 512B * 2^bHPBRegionSize
+
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_number_lu
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the maximum number of HPB LU supported by
+		the device.
+		00h: HPB is not supported by the device.
+		01h ~ 20h: Maximum number of HPB LU supported by the device
+
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_subregion_size
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the bHPBSubRegionSize, which can be
+		calculated as in the following (in bytes) and shall be a multiple of
+		logical block size:
+		HPB Sub-Region size = 512B x 2^bHPBSubRegionSize
+		bHPBSubRegionSize shall not exceed bHPBRegionSize.
+
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_max_active_regions
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the maximum number of active HPB regions that
+		is supported by the device.
+
+		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/unit_descriptor/hpb_lu_max_active_regions
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the maximum number of HPB regions assigned to
+		the HPB logical unit.
+
+		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/unit_descriptor/hpb_pinned_region_start_offset
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the start offset of HPB pinned region.
+
+		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/unit_descriptor/hpb_number_pinned_regions
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the number of HPB pinned regions assigned to
+		the HPB logical unit.
+
+		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/hpb_sysfs/hit_cnt
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the number of reads that changed to HPB read.
+
+		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/hpb_sysfs/miss_cnt
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the number of reads that cannot be changed to
+		HPB read.
+
+		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/hpb_sysfs/rb_noti_cnt
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the number of response UPIUs that has
+		recommendations for activating sub-regions and/or inactivating region.
+
+		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/hpb_sysfs/rb_active_cnt
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the number of active sub-regions recommended by
+		response UPIUs.
+
+		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/hpb_sysfs/rb_inactive_cnt
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the number of inactive regions recommended by
+		response UPIUs.
+
+		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/hpb_sysfs/map_req_cnt
+Date:		February 2021
+Contact:	Daejun Park <daejun7.park@samsung.com>
+Description:	This entry shows the number of read buffer commands for
+		activating sub-regions recommended by response UPIUs.
+
+		The file is read only.
diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
index 07cf415367b4..29ec6e4a87bd 100644
--- a/drivers/scsi/ufs/Kconfig
+++ b/drivers/scsi/ufs/Kconfig
@@ -182,3 +182,12 @@ config SCSI_UFS_CRYPTO
 	  Enabling this makes it possible for the kernel to use the crypto
 	  capabilities of the UFS device (if present) to perform crypto
 	  operations on data being transferred to/from the device.
+
+config SCSI_UFS_HPB
+	bool "Support UFS Host Performance Booster"
+	depends on SCSI_UFSHCD
+	help
+	  The UFS HPB feature improves random read performance. It caches
+	  L2P (logical to physical) map of UFS to host DRAM. The driver uses HPB
+	  read command by piggybacking physical page number for bypassing FTL (flash
+	  translation layer)'s L2P address translation.
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index 06f3a3fe4a44..cce9b3916f5b 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -8,6 +8,7 @@ ufshcd-core-y += ufshcd.o ufs-sysfs.o
 ufshcd-core-$(CONFIG_DEBUG_FS) += ufs-debugfs.o
 ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o
 ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
+ufshcd-core-$(CONFIG_SCSI_UFS_HPB) += ufshpb.o
 
 obj-$(CONFIG_SCSI_UFS_DWC_TC_PCI) += tc-dwc-g210-pci.o ufshcd-dwc.o tc-dwc-g210.o
 obj-$(CONFIG_SCSI_UFS_DWC_TC_PLATFORM) += tc-dwc-g210-pltfrm.o ufshcd-dwc.o tc-dwc-g210.o
diff --git a/drivers/scsi/ufs/ufs-sysfs.c b/drivers/scsi/ufs/ufs-sysfs.c
index acc54f530f2d..2546e7a1ac4f 100644
--- a/drivers/scsi/ufs/ufs-sysfs.c
+++ b/drivers/scsi/ufs/ufs-sysfs.c
@@ -368,6 +368,8 @@ UFS_DEVICE_DESC_PARAM(device_version, _DEV_VER, 2);
 UFS_DEVICE_DESC_PARAM(number_of_secure_wpa, _NUM_SEC_WPA, 1);
 UFS_DEVICE_DESC_PARAM(psa_max_data_size, _PSA_MAX_DATA, 4);
 UFS_DEVICE_DESC_PARAM(psa_state_timeout, _PSA_TMT, 1);
+UFS_DEVICE_DESC_PARAM(hpb_version, _HPB_VER, 2);
+UFS_DEVICE_DESC_PARAM(hpb_control, _HPB_CONTROL, 1);
 UFS_DEVICE_DESC_PARAM(ext_feature_sup, _EXT_UFS_FEATURE_SUP, 4);
 UFS_DEVICE_DESC_PARAM(wb_presv_us_en, _WB_PRESRV_USRSPC_EN, 1);
 UFS_DEVICE_DESC_PARAM(wb_type, _WB_TYPE, 1);
@@ -400,6 +402,8 @@ static struct attribute *ufs_sysfs_device_descriptor[] = {
 	&dev_attr_number_of_secure_wpa.attr,
 	&dev_attr_psa_max_data_size.attr,
 	&dev_attr_psa_state_timeout.attr,
+	&dev_attr_hpb_version.attr,
+	&dev_attr_hpb_control.attr,
 	&dev_attr_ext_feature_sup.attr,
 	&dev_attr_wb_presv_us_en.attr,
 	&dev_attr_wb_type.attr,
@@ -473,6 +477,10 @@ UFS_GEOMETRY_DESC_PARAM(enh4_memory_max_alloc_units,
 	_ENM4_MAX_NUM_UNITS, 4);
 UFS_GEOMETRY_DESC_PARAM(enh4_memory_capacity_adjustment_factor,
 	_ENM4_CAP_ADJ_FCTR, 2);
+UFS_GEOMETRY_DESC_PARAM(hpb_region_size, _HPB_REGION_SIZE, 1);
+UFS_GEOMETRY_DESC_PARAM(hpb_number_lu, _HPB_NUMBER_LU, 1);
+UFS_GEOMETRY_DESC_PARAM(hpb_subregion_size, _HPB_SUBREGION_SIZE, 1);
+UFS_GEOMETRY_DESC_PARAM(hpb_max_active_regions, _HPB_MAX_ACTIVE_REGS, 2);
 UFS_GEOMETRY_DESC_PARAM(wb_max_alloc_units, _WB_MAX_ALLOC_UNITS, 4);
 UFS_GEOMETRY_DESC_PARAM(wb_max_wb_luns, _WB_MAX_WB_LUNS, 1);
 UFS_GEOMETRY_DESC_PARAM(wb_buff_cap_adj, _WB_BUFF_CAP_ADJ, 1);
@@ -510,6 +518,10 @@ static struct attribute *ufs_sysfs_geometry_descriptor[] = {
 	&dev_attr_enh3_memory_capacity_adjustment_factor.attr,
 	&dev_attr_enh4_memory_max_alloc_units.attr,
 	&dev_attr_enh4_memory_capacity_adjustment_factor.attr,
+	&dev_attr_hpb_region_size.attr,
+	&dev_attr_hpb_number_lu.attr,
+	&dev_attr_hpb_subregion_size.attr,
+	&dev_attr_hpb_max_active_regions.attr,
 	&dev_attr_wb_max_alloc_units.attr,
 	&dev_attr_wb_max_wb_luns.attr,
 	&dev_attr_wb_buff_cap_adj.attr,
@@ -923,6 +935,9 @@ UFS_UNIT_DESC_PARAM(provisioning_type, _PROVISIONING_TYPE, 1);
 UFS_UNIT_DESC_PARAM(physical_memory_resourse_count, _PHY_MEM_RSRC_CNT, 8);
 UFS_UNIT_DESC_PARAM(context_capabilities, _CTX_CAPABILITIES, 2);
 UFS_UNIT_DESC_PARAM(large_unit_granularity, _LARGE_UNIT_SIZE_M1, 1);
+UFS_UNIT_DESC_PARAM(hpb_lu_max_active_regions, _HPB_LU_MAX_ACTIVE_RGNS, 2);
+UFS_UNIT_DESC_PARAM(hpb_pinned_region_start_offset, _HPB_PIN_RGN_START_OFF, 2);
+UFS_UNIT_DESC_PARAM(hpb_number_pinned_regions, _HPB_NUM_PIN_RGNS, 2);
 UFS_UNIT_DESC_PARAM(wb_buf_alloc_units, _WB_BUF_ALLOC_UNITS, 4);
@@ -940,6 +955,9 @@ static struct attribute *ufs_sysfs_unit_descriptor[] = {
 	&dev_attr_physical_memory_resourse_count.attr,
 	&dev_attr_context_capabilities.attr,
 	&dev_attr_large_unit_granularity.attr,
+	&dev_attr_hpb_lu_max_active_regions.attr,
+	&dev_attr_hpb_pinned_region_start_offset.attr,
+	&dev_attr_hpb_number_pinned_regions.attr,
 	&dev_attr_wb_buf_alloc_units.attr,
 	NULL,
 };
diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
index bf1897a72532..65563635e20e 100644
--- a/drivers/scsi/ufs/ufs.h
+++ b/drivers/scsi/ufs/ufs.h
@@ -122,6 +122,7 @@ enum flag_idn {
 	QUERY_FLAG_IDN_WB_EN = 0x0E,
 	QUERY_FLAG_IDN_WB_BUFF_FLUSH_EN = 0x0F,
 	QUERY_FLAG_IDN_WB_BUFF_FLUSH_DURING_HIBERN8 = 0x10,
+	QUERY_FLAG_IDN_HPB_RESET = 0x11,
 };
 
 /* Attribute idn for Query requests */
@@ -195,6 +196,9 @@ enum unit_desc_param {
 	UNIT_DESC_PARAM_PHY_MEM_RSRC_CNT = 0x18,
 	UNIT_DESC_PARAM_CTX_CAPABILITIES = 0x20,
 	UNIT_DESC_PARAM_LARGE_UNIT_SIZE_M1 = 0x22,
+	UNIT_DESC_PARAM_HPB_LU_MAX_ACTIVE_RGNS = 0x23,
+	UNIT_DESC_PARAM_HPB_PIN_RGN_START_OFF = 0x25,
+	UNIT_DESC_PARAM_HPB_NUM_PIN_RGNS = 0x27,
 	UNIT_DESC_PARAM_WB_BUF_ALLOC_UNITS = 0x29,
 };
 
@@ -235,6 +239,8 @@ enum device_desc_param {
 	DEVICE_DESC_PARAM_PSA_MAX_DATA = 0x25,
 	DEVICE_DESC_PARAM_PSA_TMT = 0x29,
 	DEVICE_DESC_PARAM_PRDCT_REV = 0x2A,
+	DEVICE_DESC_PARAM_HPB_VER = 0x40,
+	DEVICE_DESC_PARAM_HPB_CONTROL = 0x42,
 	DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP = 0x4F,
 	DEVICE_DESC_PARAM_WB_PRESRV_USRSPC_EN = 0x53,
 	DEVICE_DESC_PARAM_WB_TYPE = 0x54,
@@ -283,6 +289,10 @@ enum geometry_desc_param {
 	GEOMETRY_DESC_PARAM_ENM4_MAX_NUM_UNITS = 0x3E,
 	GEOMETRY_DESC_PARAM_ENM4_CAP_ADJ_FCTR = 0x42,
 	GEOMETRY_DESC_PARAM_OPT_LOG_BLK_SIZE = 0x44,
+	GEOMETRY_DESC_PARAM_HPB_REGION_SIZE = 0x48,
+	GEOMETRY_DESC_PARAM_HPB_NUMBER_LU = 0x49,
+	GEOMETRY_DESC_PARAM_HPB_SUBREGION_SIZE = 0x4A,
+	GEOMETRY_DESC_PARAM_HPB_MAX_ACTIVE_REGS = 0x4B,
 	GEOMETRY_DESC_PARAM_WB_MAX_ALLOC_UNITS = 0x4F,
 	GEOMETRY_DESC_PARAM_WB_MAX_WB_LUNS = 0x53,
 	GEOMETRY_DESC_PARAM_WB_BUFF_CAP_ADJ = 0x54,
@@ -327,8 +337,10 @@ enum {
 
 /* Possible values for dExtendedUFSFeaturesSupport */
 enum {
+	UFS_DEV_HPB_SUPPORT = BIT(7),
 	UFS_DEV_WRITE_BOOSTER_SUP = BIT(8),
 };
+#define UFS_DEV_HPB_SUPPORT_VERSION 0x310
 
 #define POWER_DESC_MAX_ACTV_ICC_LVLS 16
@@ -538,6 +550,9 @@ struct ufs_dev_info {
 	u16 wspecversion;
 	u32 clk_gating_wait_us;
 
+	/* UFS HPB related flag */
+	bool hpb_enabled;
+
 	/* UFS WB related flags */
 	bool wb_enabled;
 	bool wb_buf_flush_enabled;
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 80620c866192..49b3d5d24fa6 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -23,6 +23,7 @@
 #include "ufs-debugfs.h"
 #include "ufs_bsg.h"
 #include "ufshcd-crypto.h"
+#include "ufshpb.h"
 #include 
 #include 
 
@@ -4859,6 +4860,25 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth)
 	return scsi_change_queue_depth(sdev, depth);
 }
 
+static void ufshcd_hpb_destroy(struct ufs_hba *hba, struct scsi_device *sdev)
+{
+	/* skip well-known LU */
+	if ((sdev->lun >= UFS_UPIU_MAX_UNIT_NUM_ID) || !ufshpb_is_allowed(hba))
+		return;
+
+	ufshpb_destroy_lu(hba, sdev);
+}
+
+static void ufshcd_hpb_configure(struct ufs_hba *hba, struct scsi_device *sdev)
+{
+	/* skip well-known LU */
+	if ((sdev->lun >= UFS_UPIU_MAX_UNIT_NUM_ID) ||
+	    !(hba->dev_info.hpb_enabled) || !ufshpb_is_allowed(hba))
+		return;
+
+	ufshpb_init_hpb_lu(hba, sdev);
+}
+
 /**
  * ufshcd_slave_configure - adjust SCSI device configurations
  * @sdev: pointer to SCSI device
@@ -4868,6 +4888,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 	struct ufs_hba *hba = shost_priv(sdev->host);
 	struct request_queue *q = sdev->request_queue;
 
+	ufshcd_hpb_configure(hba, sdev);
+
 	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
 	if (hba->quirks & UFSHCD_QUIRK_ALIGN_SG_WITH_PAGE_SIZE)
 		blk_queue_update_dma_alignment(q, PAGE_SIZE -
 			1);
 
@@ -4889,6 +4911,9 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev)
 	struct ufs_hba *hba;
 
 	hba = shost_priv(sdev->host);
+
+	ufshcd_hpb_destroy(hba, sdev);
+
 	/* Drop the reference as it won't be needed anymore */
 	if (ufshcd_scsi_to_upiu_lun(sdev->lun) == UFS_UPIU_UFS_DEVICE_WLUN) {
 		unsigned long flags;
@@ -6979,6 +7004,8 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
 	 * Stop the host controller and complete the requests
 	 * cleared by h/w
 	 */
+	ufshpb_reset_host(hba);
+
 	ufshcd_hba_stop(hba);
 
 	spin_lock_irqsave(hba->host->host_lock, flags);
@@ -7381,6 +7408,7 @@ static int ufs_get_device_desc(struct ufs_hba *hba)
 {
 	int err;
 	u8 model_index;
+	u8 b_ufs_feature_sup;
 	u8 *desc_buf;
 	struct ufs_dev_info *dev_info = &hba->dev_info;
@@ -7408,9 +7436,16 @@ static int ufs_get_device_desc(struct ufs_hba *hba)
 	/* getting Specification Version in big endian format */
 	dev_info->wspecversion = desc_buf[DEVICE_DESC_PARAM_SPEC_VER] << 8 |
 				      desc_buf[DEVICE_DESC_PARAM_SPEC_VER + 1];
+	b_ufs_feature_sup = desc_buf[DEVICE_DESC_PARAM_UFS_FEAT];
 
 	model_index = desc_buf[DEVICE_DESC_PARAM_PRDCT_NAME];
+
+	if (dev_info->wspecversion >= UFS_DEV_HPB_SUPPORT_VERSION &&
+	    (b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT)) {
+		dev_info->hpb_enabled = true;
+		ufshpb_get_dev_info(hba, desc_buf);
+	}
+
 	err = ufshcd_read_string_desc(hba, model_index,
 				      &dev_info->model, SD_ASCII_STD);
 	if (err < 0) {
@@ -7639,6 +7674,10 @@ static int ufshcd_device_geo_params_init(struct ufs_hba *hba)
 	else if (desc_buf[GEOMETRY_DESC_PARAM_MAX_NUM_LUN] == 0)
 		hba->dev_info.max_lu_supported = 8;
 
+	if (hba->desc_size[QUERY_DESC_IDN_GEOMETRY] >=
+	    GEOMETRY_DESC_PARAM_HPB_MAX_ACTIVE_REGS)
+		ufshpb_get_geo_info(hba, desc_buf);
+
 out:
 	kfree(desc_buf);
 	return err;
@@ -7781,6 +7820,7 @@ static int ufshcd_add_lus(struct ufs_hba *hba)
 	}
 
 	ufs_bsg_probe(hba);
+	ufshpb_init(hba);
 	scsi_scan_host(hba->host);
 	pm_runtime_put_sync(hba->dev);
@@ -7924,6 +7964,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool async)
 	/* Enable Auto-Hibernate if configured */
 	ufshcd_auto_hibern8_enable(hba);
 
+	ufshpb_reset(hba);
 out:
 	spin_lock_irqsave(hba->host->host_lock, flags);
 	if (ret)
@@ -7971,6 +8012,9 @@ static void ufshcd_async_scan(void *data, async_cookie_t cookie)
 static const struct attribute_group *ufshcd_driver_groups[] = {
 	&ufs_sysfs_unit_descriptor_group,
 	&ufs_sysfs_lun_attributes_group,
+#ifdef CONFIG_SCSI_UFS_HPB
+	&ufs_sysfs_hpb_stat_group,
+#endif
 	NULL,
 };
@@ -8687,6 +8731,8 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 		req_link_state = UIC_LINK_OFF_STATE;
 	}
 
+	ufshpb_suspend(hba);
+
 	/*
 	 * If we can't transition into any of the low power modes
 	 * just gate the clocks.
@@ -8822,6 +8868,7 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 	hba->dev_info.b_rpm_dev_flush_capable = false;
 	ufshcd_clear_ua_wluns(hba);
 	ufshcd_release(hba);
+	ufshpb_resume(hba);
 out:
 	if (hba->dev_info.b_rpm_dev_flush_capable) {
 		schedule_delayed_work(&hba->rpm_dev_flush_recheck_work,
@@ -8926,6 +8973,8 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 	/* Enable Auto-Hibernate if configured */
 	ufshcd_auto_hibern8_enable(hba);
 
+	ufshpb_resume(hba);
+
 	if (hba->dev_info.b_rpm_dev_flush_capable) {
 		hba->dev_info.b_rpm_dev_flush_capable = false;
 		cancel_delayed_work(&hba->rpm_dev_flush_recheck_work);
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index ee61f821f75d..961fc5b77943 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -645,6 +645,25 @@ struct ufs_hba_variant_params {
 	u32 wb_flush_threshold;
 };
 
+#ifdef CONFIG_SCSI_UFS_HPB
+/**
+ * struct ufshpb_dev_info - UFSHPB device related info
+ * @num_lu: the number of user logical unit to check whether all lu finished
+ *	initialization
+ * @rgn_size: device reported HPB region size
+ * @srgn_size: device reported HPB sub-region size
+ * @slave_conf_cnt: counter to check all lu finished initialization
+ * @hpb_disabled: flag to check if HPB is disabled
+ */
+struct ufshpb_dev_info {
+	int num_lu;
+	int rgn_size;
+	int srgn_size;
+	atomic_t slave_conf_cnt;
+	bool hpb_disabled;
+};
+#endif
+
 /**
  * struct ufs_hba - per adapter private structure
  * @mmio_base: UFSHCI base register address
@@ -832,6 +851,9 @@ struct ufs_hba {
 	struct request_queue *bsg_queue;
 	struct delayed_work rpm_dev_flush_recheck_work;
+#ifdef CONFIG_SCSI_UFS_HPB
+	struct ufshpb_dev_info ufshpb_dev;
+#endif
 #ifdef CONFIG_SCSI_UFS_CRYPTO
 	union ufs_crypto_capabilities crypto_capabilities;
 	union ufs_crypto_cap_entry *crypto_cap_array;
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
new file mode 100644
index 000000000000..0de96cb5f220
--- /dev/null
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -0,0 +1,570 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Universal Flash Storage Host Performance Booster
+ *
+ * Copyright (C) 2017-2021 Samsung Electronics Co., Ltd.
+ *
+ * Authors:
+ *	Yongmyung Lee
+ *	Jinyoung Choi
+ */
+
+#include 
+#include 
+
+#include "ufshcd.h"
+#include "ufshpb.h"
+#include "../sd.h"
+
+bool ufshpb_is_allowed(struct ufs_hba *hba)
+{
+	return !(hba->ufshpb_dev.hpb_disabled);
+}
+
+static struct ufshpb_lu *ufshpb_get_hpb_data(struct scsi_device *sdev)
+{
+	return sdev->hostdata;
+}
+
+static int ufshpb_get_state(struct ufshpb_lu *hpb)
+{
+	return atomic_read(&hpb->hpb_state);
+}
+
+static void ufshpb_set_state(struct ufshpb_lu *hpb, int state)
+{
+	atomic_set(&hpb->hpb_state, state);
+}
+
+static void ufshpb_init_subregion_tbl(struct ufshpb_lu *hpb,
+				      struct ufshpb_region *rgn, bool last)
+{
+	int srgn_idx;
+	struct ufshpb_subregion *srgn;
+
+	for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+		srgn = rgn->srgn_tbl + srgn_idx;
+
+		srgn->rgn_idx = rgn->rgn_idx;
+		srgn->srgn_idx = srgn_idx;
+		srgn->srgn_state = HPB_SRGN_UNUSED;
+	}
+
+	if (unlikely(last && hpb->last_srgn_entries))
+		srgn->is_last = true;
+}
+
+static int ufshpb_alloc_subregion_tbl(struct ufshpb_lu *hpb,
+				      struct ufshpb_region *rgn,
+				      int srgn_cnt)
+{
+	rgn->srgn_tbl = kvcalloc(srgn_cnt, sizeof(struct ufshpb_subregion),
+				 GFP_KERNEL);
+	if (!rgn->srgn_tbl)
+		return -ENOMEM;
+
+	rgn->srgn_cnt = srgn_cnt;
+	return 0;
+}
+
+static void ufshpb_lu_parameter_init(struct ufs_hba *hba,
+				     struct ufshpb_lu *hpb,
+				     struct ufshpb_dev_info *hpb_dev_info,
+				     struct ufshpb_lu_info *hpb_lu_info)
+{
+	u32 entries_per_rgn;
+	u64 rgn_mem_size, tmp;
+
+	hpb->lu_pinned_start = hpb_lu_info->pinned_start;
+	hpb->lu_pinned_end = hpb_lu_info->num_pinned ?
+		(hpb_lu_info->pinned_start + hpb_lu_info->num_pinned - 1)
+		: PINNED_NOT_SET;
+
+	rgn_mem_size = (1ULL << hpb_dev_info->rgn_size) * HPB_RGN_SIZE_UNIT
+			* HPB_ENTRY_SIZE;
+	do_div(rgn_mem_size, HPB_ENTRY_BLOCK_SIZE);
+	hpb->srgn_mem_size = (1ULL << hpb_dev_info->srgn_size)
+		* HPB_RGN_SIZE_UNIT / HPB_ENTRY_BLOCK_SIZE * HPB_ENTRY_SIZE;
+
+	tmp = rgn_mem_size;
+	do_div(tmp, HPB_ENTRY_SIZE);
+	entries_per_rgn = (u32)tmp;
+	hpb->entries_per_rgn_shift = ilog2(entries_per_rgn);
+	hpb->entries_per_rgn_mask = entries_per_rgn - 1;
+
+	hpb->entries_per_srgn = hpb->srgn_mem_size / HPB_ENTRY_SIZE;
+	hpb->entries_per_srgn_shift = ilog2(hpb->entries_per_srgn);
+	hpb->entries_per_srgn_mask = hpb->entries_per_srgn - 1;
+
+	tmp = rgn_mem_size;
+	do_div(tmp, hpb->srgn_mem_size);
+	hpb->srgns_per_rgn = (int)tmp;
+
+	hpb->rgns_per_lu = DIV_ROUND_UP(hpb_lu_info->num_blocks,
+					entries_per_rgn);
+	hpb->srgns_per_lu = DIV_ROUND_UP(hpb_lu_info->num_blocks,
+					 (hpb->srgn_mem_size / HPB_ENTRY_SIZE));
+	hpb->last_srgn_entries = hpb_lu_info->num_blocks
+				 % (hpb->srgn_mem_size / HPB_ENTRY_SIZE);
+
+	hpb->pages_per_srgn = DIV_ROUND_UP(hpb->srgn_mem_size, PAGE_SIZE);
+}
+
+static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
+{
+	struct ufshpb_region *rgn_table, *rgn;
+	int rgn_idx, i;
+	int ret = 0;
+
+	rgn_table = kvcalloc(hpb->rgns_per_lu, sizeof(struct ufshpb_region),
+			     GFP_KERNEL);
+	if (!rgn_table)
+		return -ENOMEM;
+
+	hpb->rgn_tbl = rgn_table;
+
+	for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+		int srgn_cnt = hpb->srgns_per_rgn;
+		bool last_srgn = false;
+
+		rgn = rgn_table + rgn_idx;
+		rgn->rgn_idx = rgn_idx;
+
+		if (rgn_idx == hpb->rgns_per_lu - 1) {
+			srgn_cnt = ((hpb->srgns_per_lu - 1) %
+				    hpb->srgns_per_rgn) + 1;
+			last_srgn = true;
+		}
+
+		ret = ufshpb_alloc_subregion_tbl(hpb, rgn, srgn_cnt);
+		if (ret)
+			goto release_srgn_table;
+		ufshpb_init_subregion_tbl(hpb, rgn, last_srgn);
+
+		rgn->rgn_state = HPB_RGN_INACTIVE;
+	}
+
+	return 0;
+
+release_srgn_table:
+	for (i = 0; i < rgn_idx; i++) {
+		rgn = rgn_table + i;
+		if (rgn->srgn_tbl)
+			kvfree(rgn->srgn_tbl);
+	}
+	kvfree(rgn_table);
+	return ret;
+}
+
+static void ufshpb_destroy_subregion_tbl(struct ufshpb_lu *hpb,
+					 struct ufshpb_region *rgn)
+{
+	int srgn_idx;
+
+	for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+		struct ufshpb_subregion *srgn;
+
+		srgn = rgn->srgn_tbl + srgn_idx;
+		srgn->srgn_state = HPB_SRGN_UNUSED;
+	}
+}
+
+static void ufshpb_destroy_region_tbl(struct ufshpb_lu *hpb)
+{
+	int rgn_idx;
+
+	for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+		struct ufshpb_region *rgn;
+
+		rgn = hpb->rgn_tbl + rgn_idx;
+		if (rgn->rgn_state != HPB_RGN_INACTIVE) {
+			rgn->rgn_state = HPB_RGN_INACTIVE;
+
+			ufshpb_destroy_subregion_tbl(hpb, rgn);
+		}
+
+		kvfree(rgn->srgn_tbl);
+	}
+
+	kvfree(hpb->rgn_tbl);
+}
+
+/* SYSFS functions */
+#define ufshpb_sysfs_attr_show_func(__name)				\
+static ssize_t __name##_show(struct device *dev,			\
+			     struct device_attribute *attr, char *buf)	\
+{									\
+	struct scsi_device *sdev = to_scsi_device(dev);			\
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);		\
+	if (!hpb)							\
+		return -ENODEV;						\
+									\
+	return sysfs_emit(buf, "%llu\n", hpb->stats.__name);		\
+}									\
+\
+static DEVICE_ATTR_RO(__name)
+
+ufshpb_sysfs_attr_show_func(hit_cnt);
+ufshpb_sysfs_attr_show_func(miss_cnt);
+ufshpb_sysfs_attr_show_func(rb_noti_cnt);
+ufshpb_sysfs_attr_show_func(rb_active_cnt);
+ufshpb_sysfs_attr_show_func(rb_inactive_cnt);
+ufshpb_sysfs_attr_show_func(map_req_cnt);
+
+static struct attribute *hpb_dev_attrs[] = {
+	&dev_attr_hit_cnt.attr,
+	&dev_attr_miss_cnt.attr,
+	&dev_attr_rb_noti_cnt.attr,
+	&dev_attr_rb_active_cnt.attr,
+	&dev_attr_rb_inactive_cnt.attr,
+	&dev_attr_map_req_cnt.attr,
+	NULL,
+};
+
+struct attribute_group ufs_sysfs_hpb_stat_group = {
+	.name = "hpb_sysfs",
+	.attrs = hpb_dev_attrs,
+};
+
+static void ufshpb_stat_init(struct ufshpb_lu *hpb)
+{
+	hpb->stats.hit_cnt = 0;
+	hpb->stats.miss_cnt = 0;
+	hpb->stats.rb_noti_cnt = 0;
+	hpb->stats.rb_active_cnt = 0;
+	hpb->stats.rb_inactive_cnt = 0;
+	hpb->stats.map_req_cnt = 0;
+}
+
+static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
+{
+	int ret;
+
+	ret = ufshpb_alloc_region_tbl(hba, hpb);
+	if (ret)
+		return ret;
+
+	ufshpb_stat_init(hpb);
+
+	return 0;
+}
+
+static struct ufshpb_lu *ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
+					     struct ufshpb_dev_info *hpb_dev_info,
+					     struct ufshpb_lu_info *hpb_lu_info)
+{
+	struct ufshpb_lu *hpb;
+	int ret;
+
+	hpb = kzalloc(sizeof(struct ufshpb_lu), GFP_KERNEL);
+	if (!hpb)
+		return NULL;
+
+	hpb->lun = lun;
+
+	ufshpb_lu_parameter_init(hba, hpb, hpb_dev_info, hpb_lu_info);
+
+	ret = ufshpb_lu_hpb_init(hba, hpb);
+	if (ret) {
+		dev_err(hba->dev, "hpb lu init failed. ret %d", ret);
+		goto release_hpb;
+	}
+
+	return hpb;
+
+release_hpb:
+	kfree(hpb);
+	return NULL;
+}
+
+static bool ufshpb_check_hpb_reset_query(struct ufs_hba *hba)
+{
+	int err = 0;
+	bool flag_res = true;
+	int try;
+
+	/* wait for the device to complete HPB reset query */
+	for (try = 0; try < HPB_RESET_REQ_RETRIES; try++) {
+		dev_dbg(hba->dev,
+			"%s start flag reset polling %d times\n",
+			__func__, try);
+
+		/* Poll fHpbReset flag to be cleared */
+		err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_READ_FLAG,
+				QUERY_FLAG_IDN_HPB_RESET, 0, &flag_res);
+
+		if (err) {
+			dev_err(hba->dev,
+				"%s reading fHpbReset flag failed with error %d\n",
+				__func__, err);
+			return flag_res;
+		}
+
+		if (!flag_res)
+			goto out;
+
+		usleep_range(1000, 1100);
+	}
+	if (flag_res) {
+		dev_err(hba->dev,
+			"%s fHpbReset was not cleared by the device\n",
+			__func__);
+	}
+out:
+	return flag_res;
+}
+
+void ufshpb_reset(struct ufs_hba *hba)
+{
+	struct ufshpb_lu *hpb;
+	struct scsi_device *sdev;
+
+	shost_for_each_device(sdev, hba->host) {
+		hpb = sdev->hostdata;
+		if (!hpb)
+			continue;
+
+		if (ufshpb_get_state(hpb) != HPB_RESET)
+			continue;
+
+		ufshpb_set_state(hpb, HPB_PRESENT);
+	}
+}
+
+void ufshpb_reset_host(struct ufs_hba *hba)
+{
+	struct ufshpb_lu *hpb;
+	struct scsi_device *sdev;
+
+	shost_for_each_device(sdev, hba->host) {
+		hpb = sdev->hostdata;
+		if (!hpb)
+			continue;
+
+		if (ufshpb_get_state(hpb) != HPB_PRESENT)
+			continue;
+		ufshpb_set_state(hpb, HPB_RESET);
+	}
+}
+
+void ufshpb_suspend(struct ufs_hba *hba)
+{
+	struct ufshpb_lu *hpb;
+	struct scsi_device *sdev;
+
+	shost_for_each_device(sdev, hba->host) {
+		hpb = sdev->hostdata;
+		if (!hpb)
+			continue;
+
+		if (ufshpb_get_state(hpb) != HPB_PRESENT)
+			continue;
+		ufshpb_set_state(hpb, HPB_SUSPEND);
+	}
+}
+
+void ufshpb_resume(struct ufs_hba *hba)
+{
+	struct ufshpb_lu *hpb;
+	struct scsi_device *sdev;
+
+	shost_for_each_device(sdev, hba->host) {
+		hpb = sdev->hostdata;
+		if (!hpb)
+			continue;
+
+		if ((ufshpb_get_state(hpb)
!= HPB_PRESENT) && + (ufshpb_get_state(hpb) != HPB_SUSPEND)) + continue; + ufshpb_set_state(hpb, HPB_PRESENT); + } +} + +static int ufshpb_get_lu_info(struct ufs_hba *hba, int lun, + struct ufshpb_lu_info *hpb_lu_info) +{ + u16 max_active_rgns; + u8 lu_enable; + int size; + int ret; + char desc_buf[QUERY_DESC_MAX_SIZE]; + + ufshcd_map_desc_id_to_length(hba, QUERY_DESC_IDN_UNIT, &size); + + pm_runtime_get_sync(hba->dev); + ret = ufshcd_query_descriptor_retry(hba, UPIU_QUERY_OPCODE_READ_DESC, + QUERY_DESC_IDN_UNIT, lun, 0, + desc_buf, &size); + pm_runtime_put_sync(hba->dev); + + if (ret) { + dev_err(hba->dev, + "%s: idn: %d lun: %d query request failed", + __func__, QUERY_DESC_IDN_UNIT, lun); + return ret; + } + + lu_enable = desc_buf[UNIT_DESC_PARAM_LU_ENABLE]; + if (lu_enable != LU_ENABLED_HPB_FUNC) + return -ENODEV; + + max_active_rgns = get_unaligned_be16( + desc_buf + UNIT_DESC_PARAM_HPB_LU_MAX_ACTIVE_RGNS); + if (!max_active_rgns) { + dev_err(hba->dev, + "lun %d wrong number of max active regions\n", lun); + return -ENODEV; + } + + hpb_lu_info->num_blocks = get_unaligned_be64( + desc_buf + UNIT_DESC_PARAM_LOGICAL_BLK_COUNT); + hpb_lu_info->pinned_start = get_unaligned_be16( + desc_buf + UNIT_DESC_PARAM_HPB_PIN_RGN_START_OFF); + hpb_lu_info->num_pinned = get_unaligned_be16( + desc_buf + UNIT_DESC_PARAM_HPB_NUM_PIN_RGNS); + hpb_lu_info->max_active_rgns = max_active_rgns; + + return 0; +} + +void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev) +{ + struct ufshpb_lu *hpb = sdev->hostdata; + + if (!hpb) + return; + + ufshpb_set_state(hpb, HPB_FAILED); + + sdev = hpb->sdev_ufs_lu; + sdev->hostdata = NULL; + + ufshpb_destroy_region_tbl(hpb); + + list_del_init(&hpb->list_hpb_lu); + + kfree(hpb); +} + +static void ufshpb_hpb_lu_prepared(struct ufs_hba *hba) +{ + struct ufshpb_lu *hpb; + struct scsi_device *sdev; + bool init_success; + + init_success = !ufshpb_check_hpb_reset_query(hba); + + shost_for_each_device(sdev, hba->host) { + hpb = 
sdev->hostdata; + if (!hpb) + continue; + + if (init_success) { + ufshpb_set_state(hpb, HPB_PRESENT); + } else { + dev_err(hba->dev, "destroy HPB lu %d\n", hpb->lun); + ufshpb_destroy_lu(hba, sdev); + } + } +} + +void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev) +{ + struct ufshpb_lu *hpb; + int ret; + struct ufshpb_lu_info hpb_lu_info = { 0 }; + int lun = sdev->lun; + + if (lun >= hba->dev_info.max_lu_supported) + goto out; + + ret = ufshpb_get_lu_info(hba, lun, &hpb_lu_info); + if (ret) + goto out; + + hpb = ufshpb_alloc_hpb_lu(hba, lun, &hba->ufshpb_dev, + &hpb_lu_info); + if (!hpb) + goto out; + + hpb->sdev_ufs_lu = sdev; + sdev->hostdata = hpb; + +out: + /* All LUs are initialized */ + if (atomic_dec_and_test(&hba->ufshpb_dev.slave_conf_cnt)) + ufshpb_hpb_lu_prepared(hba); +} + +void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf) +{ + struct ufshpb_dev_info *hpb_dev_info = &hba->ufshpb_dev; + int hpb_device_max_active_rgns = 0; + int hpb_num_lu; + + hpb_num_lu = geo_buf[GEOMETRY_DESC_PARAM_HPB_NUMBER_LU]; + if (hpb_num_lu == 0) { + dev_err(hba->dev, "No HPB LU supported\n"); + hpb_dev_info->hpb_disabled = true; + return; + } + + hpb_dev_info->rgn_size = geo_buf[GEOMETRY_DESC_PARAM_HPB_REGION_SIZE]; + hpb_dev_info->srgn_size = geo_buf[GEOMETRY_DESC_PARAM_HPB_SUBREGION_SIZE]; + hpb_device_max_active_rgns = + get_unaligned_be16(geo_buf + + GEOMETRY_DESC_PARAM_HPB_MAX_ACTIVE_REGS); + + if (hpb_dev_info->rgn_size == 0 || hpb_dev_info->srgn_size == 0 || + hpb_device_max_active_rgns == 0) { + dev_err(hba->dev, "No HPB supported device\n"); + hpb_dev_info->hpb_disabled = true; + return; + } +} + +void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) +{ + struct ufshpb_dev_info *hpb_dev_info = &hba->ufshpb_dev; + int version; + u8 hpb_mode; + + hpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; + if (hpb_mode == HPB_HOST_CONTROL) { + dev_err(hba->dev, "%s: host control mode is not supported.\n", + __func__); + 
hpb_dev_info->hpb_disabled = true; + return; + } + + version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER); + if (version != HPB_SUPPORT_VERSION) { + dev_err(hba->dev, "%s: HPB %x version is not supported.\n", + __func__, version); + hpb_dev_info->hpb_disabled = true; + return; + } + + /* + * Get the number of user logical unit to check whether all + * scsi_device finish initialization + */ + hpb_dev_info->num_lu = desc_buf[DEVICE_DESC_PARAM_NUM_LU]; +} + +void ufshpb_init(struct ufs_hba *hba) +{ + struct ufshpb_dev_info *hpb_dev_info = &hba->ufshpb_dev; + int try; + int ret; + + if (!ufshpb_is_allowed(hba)) + return; + + atomic_set(&hpb_dev_info->slave_conf_cnt, hpb_dev_info->num_lu); + /* issue HPB reset query */ + for (try = 0; try < HPB_RESET_REQ_RETRIES; try++) { + ret = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_SET_FLAG, + QUERY_FLAG_IDN_HPB_RESET, 0, NULL); + if (!ret) + break; + } +} diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h new file mode 100644 index 000000000000..11f5b018af51 --- /dev/null +++ b/drivers/scsi/ufs/ufshpb.h @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Universal Flash Storage Host Performance Booster + * + * Copyright (C) 2017-2021 Samsung Electronics Co., Ltd. 
+ * + * Authors: + * Yongmyung Lee + * Jinyoung Choi + */ + +#ifndef _UFSHPB_H_ +#define _UFSHPB_H_ + +/* hpb response UPIU macro */ +#define HPB_RSP_NONE 0x0 +#define HPB_RSP_REQ_REGION_UPDATE 0x1 +#define HPB_RSP_DEV_RESET 0x2 +#define MAX_ACTIVE_NUM 2 +#define MAX_INACTIVE_NUM 2 +#define DEV_DATA_SEG_LEN 0x14 +#define DEV_SENSE_SEG_LEN 0x12 +#define DEV_DES_TYPE 0x80 +#define DEV_ADDITIONAL_LEN 0x10 + +/* hpb map & entries macro */ +#define HPB_RGN_SIZE_UNIT 512 +#define HPB_ENTRY_BLOCK_SIZE 4096 +#define HPB_ENTRY_SIZE 0x8 +#define PINNED_NOT_SET U32_MAX + +/* hpb support chunk size */ +#define HPB_MULTI_CHUNK_HIGH 1 + +/* hpb vender defined opcode */ +#define UFSHPB_READ 0xF8 +#define UFSHPB_READ_BUFFER 0xF9 +#define UFSHPB_READ_BUFFER_ID 0x01 +#define HPB_READ_BUFFER_CMD_LENGTH 10 +#define LU_ENABLED_HPB_FUNC 0x02 + +#define HPB_RESET_REQ_RETRIES 10 + +#define HPB_SUPPORT_VERSION 0x100 + +enum UFSHPB_MODE { + HPB_HOST_CONTROL, + HPB_DEVICE_CONTROL, +}; + +enum UFSHPB_STATE { + HPB_PRESENT = 1, + HPB_SUSPEND, + HPB_FAILED, + HPB_RESET, +}; + +enum HPB_RGN_STATE { + HPB_RGN_INACTIVE, + HPB_RGN_ACTIVE, + /* pinned regions are always active */ + HPB_RGN_PINNED, +}; + +enum HPB_SRGN_STATE { + HPB_SRGN_UNUSED, + HPB_SRGN_INVALID, + HPB_SRGN_VALID, + HPB_SRGN_ISSUED, +}; + +/** + * struct ufshpb_lu_info - UFSHPB logical unit related info + * @num_blocks: the number of logical block + * @pinned_start: the start region number of pinned region + * @num_pinned: the number of pinned regions + * @max_active_rgns: maximum number of active regions + */ +struct ufshpb_lu_info { + int num_blocks; + int pinned_start; + int num_pinned; + int max_active_rgns; +}; + +struct ufshpb_subregion { + enum HPB_SRGN_STATE srgn_state; + int rgn_idx; + int srgn_idx; + bool is_last; +}; + +struct ufshpb_region { + struct ufshpb_subregion *srgn_tbl; + enum HPB_RGN_STATE rgn_state; + int rgn_idx; + int srgn_cnt; +}; + +struct ufshpb_stats { + u64 hit_cnt; + u64 miss_cnt; + u64 rb_noti_cnt; + 
u64 rb_active_cnt; + u64 rb_inactive_cnt; + u64 map_req_cnt; +}; + +struct ufshpb_lu { + int lun; + struct scsi_device *sdev_ufs_lu; + struct ufshpb_region *rgn_tbl; + + atomic_t hpb_state; + + /* pinned region information */ + u32 lu_pinned_start; + u32 lu_pinned_end; + + /* HPB related configuration */ + u32 rgns_per_lu; + u32 srgns_per_lu; + u32 last_srgn_entries; + int srgns_per_rgn; + u32 srgn_mem_size; + u32 entries_per_rgn_mask; + u32 entries_per_rgn_shift; + u32 entries_per_srgn; + u32 entries_per_srgn_mask; + u32 entries_per_srgn_shift; + u32 pages_per_srgn; + + struct ufshpb_stats stats; + + struct list_head list_hpb_lu; +}; + +struct ufs_hba; +struct ufshcd_lrb; + +#ifndef CONFIG_SCSI_UFS_HPB +static void ufshpb_resume(struct ufs_hba *hba) {} +static void ufshpb_suspend(struct ufs_hba *hba) {} +static void ufshpb_reset(struct ufs_hba *hba) {} +static void ufshpb_reset_host(struct ufs_hba *hba) {} +static void ufshpb_init(struct ufs_hba *hba) {} +static void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev) {} +static void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev) {} +static bool ufshpb_is_allowed(struct ufs_hba *hba) { return false; } +static void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf) {} +static void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) {} +#else +void ufshpb_resume(struct ufs_hba *hba); +void ufshpb_suspend(struct ufs_hba *hba); +void ufshpb_reset(struct ufs_hba *hba); +void ufshpb_reset_host(struct ufs_hba *hba); +void ufshpb_init(struct ufs_hba *hba); +void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev); +void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev); +bool ufshpb_is_allowed(struct ufs_hba *hba); +void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf); +void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf); +extern struct attribute_group ufs_sysfs_hpb_stat_group; +#endif + +#endif /* End of Header */ From patchwork Wed Feb 17 
09:15:17 2021
Subject: [PATCH v20 4/4] scsi: ufs: Add HPB 2.0 support
From: Daejun Park <daejun7.park@samsung.com>
Date: Wed, 17 Feb 2021 18:15:17 +0900
Message-ID: <20210217091517epcms2p1a5ff04e9c1fff2641e7914032c5fa5e7@epcms2p1>
j52z7rJ7TFh0gNFj/9w17B4tJ/ezeHx8eovFo2/LKkaPz5vkPNoPdDMFcEXl2GSkJqakFimk 5iXnp2TmpdsqeQfHO8ebmhkY6hpaWpgrKeQl5qbaKrn4BOi6ZeYAPaikUJaYUwoUCkgsLlbS t7Mpyi8tSVXIyC8usVVKLUjJKTA0LNArTswtLs1L10vOz7UyNDAwMgWqTMjJuDr/CEvBlBeM FTOW/mZuYFy8l7GLkYNDQsBEYutsuS5GTg4hgR2MEi2vVUDCvAKCEn93CIOEhQXMJb6evcsO UaIksf7iLHaIuJ7ErYdrGEFsNgEdiekn7gPFuThEBDaxSNxb0MEK4jAL/GKSOPH4A1iVhACv xIz2pywQtrTE9uVbweKcAn4SH7f9hKrRkPixrJcZwhaVuLn6LTuM/f7YfKgaEYnWe2ehagQl HvzcDRWXlDi2+wMThF0vsfXOL0aQIyQEehglDu+8xQqR0Je41rER7AheAV+JPc+OsIHYLAKq Eh+vHIA6zkXi3qLHYAuYBeQltr+dwwwKFWYBTYn1u/Qh4aYsceQWC8xbDRt/s6OzmQX4JDoO /4WL75j3BOo0NYl1P9czTWBUnoUI6llIds1C2LWAkXkVo1hqQXFuemqxUYExcuxuYgSndi33 HYwz3n7QO8TIxMF4iFGCg1lJhJf9s1aCEG9KYmVValF+fFFpTmrxIUZToC8nMkuJJucDs0te SbyhqZGZmYGlqYWpmZGFkjhvscGDeCGB9MSS1OzU1ILUIpg+Jg5OqQamtsVThCu2LF8nn8p7 SF7x0ETlZ8vr9af/Ufj5UE5+rzrv/EVPGzIn5lwwzjX4/Uuq/t9VuYLoiju1KpdTo0Venduh lLz7bs2GfSkXTp2ew3PFJs/2xUa1PgfRXva9B6oenIlZ82v/TtF4Gd3q+4IRa7Y7hVxwVvqt IpsvxHa/fPdUj698BznllnAUrvpVvkm/cnXdqg1rOpdOqtbhku2xvCh9XPq48tlJexI87vmK +9/dEeyVZnnt/5l9HUlfEz8a2GjpP1TNOvNZZlZMn5lR85z37hMYjl3P2yGxcdPunV+3Xr7K eaSiMH0xq4exEIt91PYI/eI1tZquG2bGc7rfXvM9wOka66e4xQotn1imKLEUZyQaajEXFScC AN4B7Kp2BAAA DLP-Filter: Pass X-CFilter-Loop: Reflected X-CMS-RootMailID: 20210217090853epcms2p17db2903a3a0c1a13e4ee071b9a39dbc8 References: <20210217090853epcms2p17db2903a3a0c1a13e4ee071b9a39dbc8@epcms2p1> Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch supports the HPB 2.0. The HPB 2.0 supports read of varying sizes from 4KB to 512KB. In the case of Read (<= 32KB) is supported as single HPB read. In the case of Read (36KB ~ 512KB) is supported by as a combination of write buffer command and HPB read command to deliver more PPN. The write buffer commands may not be issued immediately due to busy tags. To use HPB read more aggressively, the driver can requeue the write buffer command. 
The requeue threshold is implemented as timeout and can be modified with requeue_timeout_ms entry in sysfs. Signed-off-by: Daejun Park --- Documentation/ABI/testing/sysfs-driver-ufs | 19 +- drivers/scsi/ufs/ufshcd.c | 8 +- drivers/scsi/ufs/ufshpb.c | 499 +++++++++++++++++++-- drivers/scsi/ufs/ufshpb.h | 61 ++- 4 files changed, 534 insertions(+), 53 deletions(-) diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs index bf5cb8846de1..239fe741a7f6 100644 --- a/Documentation/ABI/testing/sysfs-driver-ufs +++ b/Documentation/ABI/testing/sysfs-driver-ufs @@ -1253,14 +1253,14 @@ Description: This entry shows the number of HPB pinned regions assigned to The file is read only. -What: /sys/class/scsi_device/*/device/hpb_sysfs/hit_cnt +What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/hit_cnt Date: February 2021 Contact: Daejun Park Description: This entry shows the number of reads that changed to HPB read. The file is read only. -What: /sys/class/scsi_device/*/device/hpb_sysfs/miss_cnt +What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/miss_cnt Date: February 2021 Contact: Daejun Park Description: This entry shows the number of reads that cannot be changed to @@ -1268,7 +1268,7 @@ Description: This entry shows the number of reads that cannot be changed to The file is read only. -What: /sys/class/scsi_device/*/device/hpb_sysfs/rb_noti_cnt +What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/rb_noti_cnt Date: February 2021 Contact: Daejun Park Description: This entry shows the number of response UPIUs that has @@ -1276,7 +1276,7 @@ Description: This entry shows the number of response UPIUs that has The file is read only. 
-What: /sys/class/scsi_device/*/device/hpb_sysfs/rb_active_cnt +What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/rb_active_cnt Date: February 2021 Contact: Daejun Park Description: This entry shows the number of active sub-regions recommended by @@ -1284,7 +1284,7 @@ Description: This entry shows the number of active sub-regions recommended by The file is read only. -What: /sys/class/scsi_device/*/device/hpb_sysfs/rb_inactive_cnt +What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/rb_inactive_cnt Date: February 2021 Contact: Daejun Park Description: This entry shows the number of inactive regions recommended by @@ -1292,10 +1292,17 @@ Description: This entry shows the number of inactive regions recommended by The file is read only. -What: /sys/class/scsi_device/*/device/hpb_sysfs/map_req_cnt +What: /sys/class/scsi_device/*/device/hpb_stat_sysfs/map_req_cnt Date: February 2021 Contact: Daejun Park Description: This entry shows the number of read buffer commands for activating sub-regions recommended by response UPIUs. The file is read only. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/requeue_timeout_ms +Date: February 2021 +Contact: Daejun Park +Description: This entry shows the requeue timeout threshold for write buffer + command in ms. This value can be changed by writing proper integer to + this entry. 
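The requeue_timeout_ms semantics documented above can be modeled in a few lines of userspace C. Milliseconds stand in for jiffies, the wraparound handling that the kernel's time_before() provides is omitted, and the function name is illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Model of the requeue decision in ufshpb_prep(): when the WRITE BUFFER
 * pre-request cannot be issued (e.g. no free tags), the command is
 * requeued only while it is still younger than requeue_timeout_ms;
 * after that it falls back to a normal read and is counted as a miss. */
static bool hpb_should_requeue(uint64_t now_ms, uint64_t alloc_ms,
			       unsigned int requeue_timeout_ms)
{
	/* analogue of time_before(jiffies, jiffies_at_alloc + timeout) */
	return now_ms < alloc_ms + (uint64_t)requeue_timeout_ms;
}
```

Raising the sysfs value widens the window in which the driver returns -EAGAIN to retry the WRITE BUFFER path instead of giving up on the HPB read.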
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 851c01a26207..9881267eebc1 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -2656,7 +2656,12 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) lrbp->req_abort_skip = false; - ufshpb_prep(hba, lrbp); + err = ufshpb_prep(hba, lrbp); + if (err == -EAGAIN) { + lrbp->cmd = NULL; + ufshcd_release(hba); + goto out; + } ufshcd_comp_scsi_upiu(hba, lrbp); @@ -8019,6 +8024,7 @@ static const struct attribute_group *ufshcd_driver_groups[] = { &ufs_sysfs_lun_attributes_group, #ifdef CONFIG_SCSI_UFS_HPB &ufs_sysfs_hpb_stat_group, + &ufs_sysfs_hpb_param_group, #endif NULL, }; diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 937327180dda..312b5aede0c7 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -54,13 +54,22 @@ static bool ufshpb_is_support_chunk(int transfer_len) return transfer_len <= HPB_MULTI_CHUNK_HIGH; } +/* + * WRITE_BUFFER CMD support 36K (len=9) ~ 512K (len=128) default. + * it is possible to change range of transfer_len through sysfs. 
+ */ +static inline bool ufshpb_is_required_wb(struct ufshpb_lu *hpb, int len) +{ + return (len >= hpb->pre_req_min_tr_len && + len <= hpb->pre_req_max_tr_len); +} + static bool ufshpb_is_general_lun(int lun) { return lun < UFS_UPIU_MAX_UNIT_NUM_ID; } -static bool -ufshpb_is_pinned_region(struct ufshpb_lu *hpb, int rgn_idx) +static bool ufshpb_is_pinned_region(struct ufshpb_lu *hpb, int rgn_idx) { if (hpb->lu_pinned_end != PINNED_NOT_SET && rgn_idx >= hpb->lu_pinned_start && @@ -213,6 +222,35 @@ static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, return false; } +static int ufshpb_fill_ppn_from_page(struct ufshpb_lu *hpb, + struct ufshpb_map_ctx *mctx, int pos, int len, + u64 *ppn_buf) +{ + struct page *page; + int index, offset; + int copied; + + index = pos / (PAGE_SIZE / HPB_ENTRY_SIZE); + offset = pos % (PAGE_SIZE / HPB_ENTRY_SIZE); + + if ((offset + len) <= (PAGE_SIZE / HPB_ENTRY_SIZE)) + copied = len; + else + copied = (PAGE_SIZE / HPB_ENTRY_SIZE) - offset; + + page = mctx->m_page[index]; + if (unlikely(!page)) { + dev_err(&hpb->sdev_ufs_lu->sdev_dev, + "error. 
cannot find page in mctx\n"); + return -ENOMEM; + } + + memcpy(ppn_buf, page_address(page) + (offset * HPB_ENTRY_SIZE), + copied * HPB_ENTRY_SIZE); + + return copied; +} + static u64 ufshpb_get_ppn(struct ufshpb_lu *hpb, struct ufshpb_map_ctx *mctx, int pos, int *error) { @@ -256,7 +294,8 @@ ufshpb_get_pos_from_lpn(struct ufshpb_lu *hpb, unsigned long lpn, int *rgn_idx, static void ufshpb_set_hpb_read_to_upiu(struct ufshpb_lu *hpb, struct ufshcd_lrb *lrbp, - u32 lpn, u64 ppn, unsigned int transfer_len) + u32 lpn, u64 ppn, unsigned int transfer_len, + int read_id) { unsigned char *cdb = lrbp->cmd->cmnd; @@ -265,15 +304,269 @@ ufshpb_set_hpb_read_to_upiu(struct ufshpb_lu *hpb, struct ufshcd_lrb *lrbp, /* ppn value is stored as big-endian in the host memory */ put_unaligned(ppn, (u64 *)&cdb[6]); cdb[14] = transfer_len; + cdb[15] = read_id; lrbp->cmd->cmd_len = UFS_CDB_SIZE; } +static inline void ufshpb_set_write_buf_cmd(unsigned char *cdb, + unsigned long lpn, unsigned int len, + int read_id) +{ + cdb[0] = UFSHPB_WRITE_BUFFER; + cdb[1] = UFSHPB_WRITE_BUFFER_PREFETCH_ID; + + put_unaligned_be32(lpn, &cdb[2]); + cdb[6] = read_id; + put_unaligned_be16(len * HPB_ENTRY_SIZE, &cdb[7]); + + cdb[9] = 0x00; /* Control = 0x00 */ +} + +static struct ufshpb_req *ufshpb_get_pre_req(struct ufshpb_lu *hpb) +{ + struct ufshpb_req *pre_req; + + if (hpb->num_inflight_pre_req >= hpb->throttle_pre_req) { + dev_info(&hpb->sdev_ufs_lu->sdev_dev, + "pre_req throttle. 
inflight %d throttle %d", + hpb->num_inflight_pre_req, hpb->throttle_pre_req); + return NULL; + } + + pre_req = list_first_entry_or_null(&hpb->lh_pre_req_free, + struct ufshpb_req, list_req); + if (!pre_req) { + dev_info(&hpb->sdev_ufs_lu->sdev_dev, "There is no pre_req"); + return NULL; + } + + list_del_init(&pre_req->list_req); + hpb->num_inflight_pre_req++; + + return pre_req; +} + +static inline void ufshpb_put_pre_req(struct ufshpb_lu *hpb, + struct ufshpb_req *pre_req) +{ + pre_req->req = NULL; + pre_req->bio = NULL; + list_add_tail(&pre_req->list_req, &hpb->lh_pre_req_free); + hpb->num_inflight_pre_req--; +} + +static void ufshpb_pre_req_compl_fn(struct request *req, blk_status_t error) +{ + struct ufshpb_req *pre_req = (struct ufshpb_req *)req->end_io_data; + struct ufshpb_lu *hpb = pre_req->hpb; + unsigned long flags; + struct scsi_sense_hdr sshdr; + + if (error) { + dev_err(&hpb->sdev_ufs_lu->sdev_dev, "block status %d", error); + scsi_normalize_sense(pre_req->sense, SCSI_SENSE_BUFFERSIZE, + &sshdr); + dev_err(&hpb->sdev_ufs_lu->sdev_dev, + "code %x sense_key %x asc %x ascq %x", + sshdr.response_code, + sshdr.sense_key, sshdr.asc, sshdr.ascq); + dev_err(&hpb->sdev_ufs_lu->sdev_dev, + "byte4 %x byte5 %x byte6 %x additional_len %x", + sshdr.byte4, sshdr.byte5, + sshdr.byte6, sshdr.additional_length); + } + + bio_put(pre_req->bio); + blk_mq_free_request(req); + spin_lock_irqsave(&hpb->rgn_state_lock, flags); + ufshpb_put_pre_req(pre_req->hpb, pre_req); + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); +} + +static int ufshpb_prep_entry(struct ufshpb_req *pre_req, struct page *page) +{ + struct ufshpb_lu *hpb = pre_req->hpb; + struct ufshpb_region *rgn; + struct ufshpb_subregion *srgn; + u64 *addr; + int offset = 0; + int copied; + unsigned long lpn = pre_req->wb.lpn; + int rgn_idx, srgn_idx, srgn_offset; + unsigned long flags; + + addr = page_address(page); + ufshpb_get_pos_from_lpn(hpb, lpn, &rgn_idx, &srgn_idx, &srgn_offset); + + 
spin_lock_irqsave(&hpb->rgn_state_lock, flags); + +next_offset: + rgn = hpb->rgn_tbl + rgn_idx; + srgn = rgn->srgn_tbl + srgn_idx; + + if (!ufshpb_is_valid_srgn(rgn, srgn)) + goto mctx_error; + + if (!srgn->mctx) + goto mctx_error; + + copied = ufshpb_fill_ppn_from_page(hpb, srgn->mctx, srgn_offset, + pre_req->wb.len - offset, + &addr[offset]); + + if (copied < 0) + goto mctx_error; + + offset += copied; + srgn_offset += offset; + + if (srgn_offset == hpb->entries_per_srgn) { + srgn_offset = 0; + + if (++srgn_idx == hpb->srgns_per_rgn) { + srgn_idx = 0; + rgn_idx++; + } + } + + if (offset < pre_req->wb.len) + goto next_offset; + + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); + return 0; +mctx_error: + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); + return -ENOMEM; +} + +static int ufshpb_pre_req_add_bio_page(struct ufshpb_lu *hpb, + struct request_queue *q, + struct ufshpb_req *pre_req) +{ + struct page *page = pre_req->wb.m_page; + struct bio *bio = pre_req->bio; + int entries_bytes, ret; + + if (!page) + return -ENOMEM; + + if (ufshpb_prep_entry(pre_req, page)) + return -ENOMEM; + + entries_bytes = pre_req->wb.len * sizeof(u64); + + ret = bio_add_pc_page(q, bio, page, entries_bytes, 0); + if (ret != entries_bytes) { + dev_err(&hpb->sdev_ufs_lu->sdev_dev, + "bio_add_pc_page fail: %d", ret); + return -ENOMEM; + } + return 0; +} + +static inline int ufshpb_get_read_id(struct ufshpb_lu *hpb) +{ + if (++hpb->cur_read_id >= MAX_HPB_READ_ID) + hpb->cur_read_id = 0; + return hpb->cur_read_id; +} + +static int ufshpb_execute_pre_req(struct ufshpb_lu *hpb, struct scsi_cmnd *cmd, + struct ufshpb_req *pre_req, int read_id) +{ + struct scsi_device *sdev = cmd->device; + struct request_queue *q = sdev->request_queue; + struct request *req; + struct scsi_request *rq; + struct bio *bio = pre_req->bio; + + pre_req->hpb = hpb; + pre_req->wb.lpn = sectors_to_logical(cmd->device, + blk_rq_pos(cmd->request)); + pre_req->wb.len = sectors_to_logical(cmd->device, + 
blk_rq_sectors(cmd->request)); + if (ufshpb_pre_req_add_bio_page(hpb, q, pre_req)) + return -ENOMEM; + + req = pre_req->req; + + /* 1. request setup */ + blk_rq_append_bio(req, &bio); + req->rq_disk = NULL; + req->end_io_data = (void *)pre_req; + req->end_io = ufshpb_pre_req_compl_fn; + + /* 2. scsi_request setup */ + rq = scsi_req(req); + rq->retries = 1; + + ufshpb_set_write_buf_cmd(rq->cmd, pre_req->wb.lpn, pre_req->wb.len, + read_id); + rq->cmd_len = scsi_command_size(rq->cmd); + + if (blk_insert_cloned_request(q, req) != BLK_STS_OK) + return -EAGAIN; + + hpb->stats.pre_req_cnt++; + + return 0; +} + +static int ufshpb_issue_pre_req(struct ufshpb_lu *hpb, struct scsi_cmnd *cmd, + int *read_id) +{ + struct ufshpb_req *pre_req; + struct request *req = NULL; + struct bio *bio = NULL; + unsigned long flags; + int _read_id; + int ret = 0; + + req = blk_get_request(cmd->device->request_queue, + REQ_OP_SCSI_OUT | REQ_SYNC, BLK_MQ_REQ_NOWAIT); + if (IS_ERR(req)) + return -EAGAIN; + + bio = bio_alloc(GFP_ATOMIC, 1); + if (!bio) { + blk_put_request(req); + return -EAGAIN; + } + + spin_lock_irqsave(&hpb->rgn_state_lock, flags); + pre_req = ufshpb_get_pre_req(hpb); + if (!pre_req) { + ret = -EAGAIN; + goto unlock_out; + } + _read_id = ufshpb_get_read_id(hpb); + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); + + pre_req->req = req; + pre_req->bio = bio; + + ret = ufshpb_execute_pre_req(hpb, cmd, pre_req, _read_id); + if (ret) + goto free_pre_req; + + *read_id = _read_id; + + return ret; +free_pre_req: + spin_lock_irqsave(&hpb->rgn_state_lock, flags); + ufshpb_put_pre_req(hpb, pre_req); +unlock_out: + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); + bio_put(bio); + blk_put_request(req); + return ret; +} + /* * This function will set up HPB read command using host-side L2P map data. - * In HPB v1.0, maximum size of HPB read command is 4KB. 
 */
-void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 {
	struct ufshpb_lu *hpb;
	struct ufshpb_region *rgn;
@@ -283,25 +576,27 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
	u64 ppn;
	unsigned long flags;
	int transfer_len, rgn_idx, srgn_idx, srgn_offset;
+	int read_id = MAX_HPB_READ_ID;
	int err = 0;

	hpb = ufshpb_get_hpb_data(cmd->device);
	if (!hpb)
-		return;
+		return -ENODEV;

	if (ufshpb_get_state(hpb) != HPB_PRESENT) {
		dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
			   "%s: ufshpb state is not PRESENT", __func__);
-		return;
+		return -ENODEV;
	}

	if (!ufshpb_is_write_or_discard_cmd(cmd) &&
	    !ufshpb_is_read_cmd(cmd))
-		return;
+		return 0;

-	transfer_len = sectors_to_logical(cmd->device, blk_rq_sectors(cmd->request));
+	transfer_len = sectors_to_logical(cmd->device,
+					  blk_rq_sectors(cmd->request));
	if (unlikely(!transfer_len))
-		return;
+		return 0;

	lpn = sectors_to_logical(cmd->device, blk_rq_pos(cmd->request));
	ufshpb_get_pos_from_lpn(hpb, lpn, &rgn_idx, &srgn_idx, &srgn_offset);
@@ -314,18 +609,18 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
		ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
				     transfer_len);
		spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
-		return;
+		return 0;
	}

	if (!ufshpb_is_support_chunk(transfer_len))
-		return;
+		return 0;

	spin_lock_irqsave(&hpb->rgn_state_lock, flags);
	if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
				  transfer_len)) {
		hpb->stats.miss_cnt++;
		spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
-		return;
+		return 0;
	}

	ppn = ufshpb_get_ppn(hpb, srgn->mctx, srgn_offset, &err);
@@ -339,12 +634,29 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
		 */
		WARN_ON(true);
		dev_err(hba->dev, "ufshpb_get_ppn failed. err %d\n", err);
-		return;
+		return err;
+	}
+
+	if (ufshpb_is_required_wb(hpb, transfer_len)) {
+		err = ufshpb_issue_pre_req(hpb, cmd, &read_id);
+		if (err) {
+			unsigned long timeout;
+
+			timeout = cmd->jiffies_at_alloc + msecs_to_jiffies(
+				  hpb->params.requeue_timeout_ms);
+
+			if (time_before(jiffies, timeout))
+				return -EAGAIN;
+
+			hpb->stats.miss_cnt++;
+			return 0;
+		}
	}

-	ufshpb_set_hpb_read_to_upiu(hpb, lrbp, lpn, ppn, transfer_len);
+	ufshpb_set_hpb_read_to_upiu(hpb, lrbp, lpn, ppn, transfer_len, read_id);

	hpb->stats.hit_cnt++;
+	return 0;
 }

 static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
@@ -381,9 +693,9 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,

	map_req->req = req;
	map_req->bio = bio;
-	map_req->rgn_idx = srgn->rgn_idx;
-	map_req->srgn_idx = srgn->srgn_idx;
-	map_req->mctx = srgn->mctx;
+	map_req->rb.rgn_idx = srgn->rgn_idx;
+	map_req->rb.srgn_idx = srgn->srgn_idx;
+	map_req->rb.mctx = srgn->mctx;

	return map_req;

@@ -476,8 +788,8 @@ static void ufshpb_map_req_compl_fn(struct request *req, blk_status_t error)
	struct ufshpb_subregion *srgn;
	unsigned long flags;

-	srgn = hpb->rgn_tbl[map_req->rgn_idx].srgn_tbl +
-		map_req->srgn_idx;
+	srgn = hpb->rgn_tbl[map_req->rb.rgn_idx].srgn_tbl +
+		map_req->rb.srgn_idx;

	ufshpb_clear_dirty_bitmap(hpb, srgn);
	spin_lock_irqsave(&hpb->rgn_state_lock, flags);
@@ -512,12 +824,12 @@ static int ufshpb_execute_map_req(struct ufshpb_lu *hpb,

	q = hpb->sdev_ufs_lu->request_queue;
	for (i = 0; i < hpb->pages_per_srgn; i++) {
-		ret = bio_add_pc_page(q, map_req->bio, map_req->mctx->m_page[i],
+		ret = bio_add_pc_page(q, map_req->bio, map_req->rb.mctx->m_page[i],
				      PAGE_SIZE, 0);
		if (ret != PAGE_SIZE) {
			dev_err(&hpb->sdev_ufs_lu->sdev_dev,
				"bio_add_pc_page fail %d - %d\n",
-				map_req->rgn_idx, map_req->srgn_idx);
+				map_req->rb.rgn_idx, map_req->rb.srgn_idx);
			return ret;
		}
	}
@@ -533,8 +845,8 @@ static int ufshpb_execute_map_req(struct ufshpb_lu *hpb,
	if (unlikely(last))
		mem_size = hpb->last_srgn_entries * HPB_ENTRY_SIZE;

-	ufshpb_set_read_buf_cmd(rq->cmd, map_req->rgn_idx,
-				map_req->srgn_idx, mem_size);
+	ufshpb_set_read_buf_cmd(rq->cmd, map_req->rb.rgn_idx,
+				map_req->rb.srgn_idx, hpb->srgn_mem_size);
	rq->cmd_len = HPB_READ_BUFFER_CMD_LENGTH;

	blk_execute_rq_nowait(q, NULL, req, 1, ufshpb_map_req_compl_fn);
@@ -1165,6 +1477,11 @@ static void ufshpb_lu_parameter_init(struct ufs_hba *hba,
	u32 entries_per_rgn;
	u64 rgn_mem_size, tmp;

+	/* for pre_req */
+	hpb->pre_req_min_tr_len = HPB_MULTI_CHUNK_LOW;
+	hpb->pre_req_max_tr_len = HPB_MULTI_CHUNK_HIGH;
+	hpb->cur_read_id = 0;
+
	hpb->lu_pinned_start = hpb_lu_info->pinned_start;
	hpb->lu_pinned_end = hpb_lu_info->num_pinned ?
		(hpb_lu_info->pinned_start + hpb_lu_info->num_pinned - 1)
@@ -1312,7 +1629,7 @@ ufshpb_sysfs_attr_show_func(rb_active_cnt);
 ufshpb_sysfs_attr_show_func(rb_inactive_cnt);
 ufshpb_sysfs_attr_show_func(map_req_cnt);

-static struct attribute *hpb_dev_attrs[] = {
+static struct attribute *hpb_dev_stat_attrs[] = {
	&dev_attr_hit_cnt.attr,
	&dev_attr_miss_cnt.attr,
	&dev_attr_rb_noti_cnt.attr,
@@ -1323,10 +1640,108 @@ static struct attribute *hpb_dev_attrs[] = {
 };

 struct attribute_group ufs_sysfs_hpb_stat_group = {
-	.name = "hpb_sysfs",
-	.attrs = hpb_dev_attrs,
+	.name = "hpb_stat_sysfs",
+	.attrs = hpb_dev_stat_attrs,
+};
+
+/* SYSFS functions */
+#define ufshpb_sysfs_param_show_func(__name)				\
+static ssize_t __name##_show(struct device *dev,			\
+	struct device_attribute *attr, char *buf)			\
+{									\
+	struct scsi_device *sdev = to_scsi_device(dev);			\
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);		\
+	if (!hpb)							\
+		return -ENODEV;						\
+									\
+	return sysfs_emit(buf, "%d\n", hpb->params.__name);		\
+}
+
+ufshpb_sysfs_param_show_func(requeue_timeout_ms);
+static ssize_t
+requeue_timeout_ms_store(struct device *dev, struct device_attribute *attr,
+			 const char *buf, size_t count)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+	int val;
+
+	if (!hpb)
+		return -ENODEV;
+
+	if (kstrtouint(buf, 0, &val))
+		return -EINVAL;
+
+	if (val <= 0)
+		return -EINVAL;
+
+	hpb->params.requeue_timeout_ms = val;
+
+	return count;
+}
+static DEVICE_ATTR_RW(requeue_timeout_ms);
+
+static struct attribute *hpb_dev_param_attrs[] = {
+	&dev_attr_requeue_timeout_ms.attr,
+};
+
+struct attribute_group ufs_sysfs_hpb_param_group = {
+	.name = "hpb_param_sysfs",
+	.attrs = hpb_dev_param_attrs,
+};
+
+static int ufshpb_pre_req_mempool_init(struct ufshpb_lu *hpb)
+{
+	struct ufshpb_req *pre_req = NULL;
+	int qd = hpb->sdev_ufs_lu->queue_depth;
+	int i, j;
+
+	INIT_LIST_HEAD(&hpb->lh_pre_req_free);
+
+	hpb->pre_req = kcalloc(qd, sizeof(struct ufshpb_req), GFP_KERNEL);
+	hpb->throttle_pre_req = qd;
+	hpb->num_inflight_pre_req = 0;
+
+	if (!hpb->pre_req)
+		goto release_mem;
+
+	for (i = 0; i < qd; i++) {
+		pre_req = hpb->pre_req + i;
+		INIT_LIST_HEAD(&pre_req->list_req);
+		pre_req->req = NULL;
+		pre_req->bio = NULL;
+
+		pre_req->wb.m_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+		if (!pre_req->wb.m_page) {
+			for (j = 0; j < i; j++)
+				__free_page(hpb->pre_req[j].wb.m_page);
+
+			goto release_mem;
+		}
+		list_add_tail(&pre_req->list_req, &hpb->lh_pre_req_free);
+	}
+
+	return 0;
+release_mem:
+	kfree(hpb->pre_req);
+	return -ENOMEM;
+}
+
+static void ufshpb_pre_req_mempool_destroy(struct ufshpb_lu *hpb)
+{
+	struct ufshpb_req *pre_req = NULL;
+	int i;
+
+	for (i = 0; i < hpb->throttle_pre_req; i++) {
+		pre_req = hpb->pre_req + i;
+		if (!pre_req->wb.m_page)
+			__free_page(hpb->pre_req[i].wb.m_page);
+		list_del_init(&pre_req->list_req);
+	}
+
+	kfree(hpb->pre_req);
+}
+
 static void ufshpb_stat_init(struct ufshpb_lu *hpb)
 {
	hpb->stats.hit_cnt = 0;
@@ -1337,6 +1752,11 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb)
	hpb->stats.map_req_cnt = 0;
 }

+static void ufshpb_param_init(struct ufshpb_lu *hpb)
+{
+	hpb->params.requeue_timeout_ms = HPB_REQUEUE_TIME_MS;
+}
+
 static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 {
	int ret;
@@ -1369,14 +1789,24 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
		goto release_req_cache;
	}

+	ret = ufshpb_pre_req_mempool_init(hpb);
+	if (ret) {
+		dev_err(hba->dev, "ufshpb(%d) pre_req_mempool init fail",
+			hpb->lun);
+		goto release_m_page_cache;
+	}
+
	ret = ufshpb_alloc_region_tbl(hba, hpb);
	if (ret)
-		goto release_m_page_cache;
+		goto release_pre_req_mempool;

	ufshpb_stat_init(hpb);
+	ufshpb_param_init(hpb);

	return 0;

+release_pre_req_mempool:
+	ufshpb_pre_req_mempool_destroy(hpb);
 release_m_page_cache:
	kmem_cache_destroy(hpb->m_page_cache);
 release_req_cache:
@@ -1384,7 +1814,8 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
	return ret;
 }

-static struct ufshpb_lu *ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
+static struct ufshpb_lu *ufshpb_alloc_hpb_lu(struct ufs_hba *hba,
+					     struct scsi_device *sdev,
					     struct ufshpb_dev_info *hpb_dev_info,
					     struct ufshpb_lu_info *hpb_lu_info)
 {
@@ -1395,7 +1826,8 @@ static struct ufshpb_lu *ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
	if (!hpb)
		return NULL;

-	hpb->lun = lun;
+	hpb->lun = sdev->lun;
+	hpb->sdev_ufs_lu = sdev;

	ufshpb_lu_parameter_init(hba, hpb, hpb_dev_info, hpb_lu_info);

@@ -1405,6 +1837,7 @@ static struct ufshpb_lu *ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
		goto release_hpb;
	}

+	sdev->hostdata = hpb;
	return hpb;

 release_hpb:
@@ -1607,6 +2040,7 @@ void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev)

	ufshpb_cancel_jobs(hpb);

+	ufshpb_pre_req_mempool_destroy(hpb);
	ufshpb_destroy_region_tbl(hpb);

	kmem_cache_destroy(hpb->map_req_cache);
@@ -1670,7 +2104,7 @@ void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev)
	if (ret)
		goto out;

-	hpb = ufshpb_alloc_hpb_lu(hba, lun, &hba->ufshpb_dev,
+	hpb = ufshpb_alloc_hpb_lu(hba, sdev, &hba->ufshpb_dev,
				  &hpb_lu_info);
	if (!hpb)
		goto out;
@@ -1678,9 +2112,6 @@ void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev)
	tot_active_srgn_pages += hpb_lu_info.max_active_rgns *
			hpb->srgns_per_rgn * hpb->pages_per_srgn;

-	hpb->sdev_ufs_lu = sdev;
-	sdev->hostdata = hpb;
-
 out:
	/* All LUs are initialized */
	if (atomic_dec_and_test(&hba->ufshpb_dev.slave_conf_cnt))
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index c70e73546e35..eb8366d47d8a 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -30,19 +30,24 @@
 #define PINNED_NOT_SET		U32_MAX

 /* hpb support chunk size */
-#define HPB_MULTI_CHUNK_HIGH		1
+#define HPB_MULTI_CHUNK_LOW		9
+#define HPB_MULTI_CHUNK_HIGH		128

 /* hpb vender defined opcode */
 #define UFSHPB_READ			0xF8
 #define UFSHPB_READ_BUFFER		0xF9
 #define UFSHPB_READ_BUFFER_ID		0x01
+#define UFSHPB_WRITE_BUFFER		0xFA
+#define UFSHPB_WRITE_BUFFER_PREFETCH_ID	0x02
+#define MAX_HPB_READ_ID			0x7F
 #define HPB_READ_BUFFER_CMD_LENGTH	10
 #define LU_ENABLED_HPB_FUNC		0x02

 #define HPB_RESET_REQ_RETRIES		10
 #define HPB_MAP_REQ_RETRIES		5
+#define HPB_REQUEUE_TIME_MS		3

-#define HPB_SUPPORT_VERSION		0x100
+#define HPB_SUPPORT_VERSION		0x200

 enum UFSHPB_MODE {
	HPB_HOST_CONTROL,
@@ -118,23 +123,39 @@ struct ufshpb_region {
		(i)++)

 /**
- * struct ufshpb_req - UFSHPB READ BUFFER (for caching map) request structure
- * @req: block layer request for READ BUFFER
- * @bio: bio for holding map page
- * @hpb: ufshpb_lu structure that related to the L2P map
+ * struct ufshpb_req - HPB related request structure (write/read buffer)
+ * @req: block layer request structure
+ * @bio: bio for this request
+ * @hpb: ufshpb_lu structure that related to
+ * @list_req: ufshpb_req mempool list
+ * @sense: store its sense data
  * @mctx: L2P map information
  * @rgn_idx: target region index
  * @srgn_idx: target sub-region index
  * @lun: target logical unit number
+ * @m_page: L2P map information data for pre-request
+ * @len: length of host-side cached L2P map in m_page
+ * @lpn: start LPN of L2P map in m_page
  */
 struct ufshpb_req {
	struct request *req;
	struct bio *bio;
	struct ufshpb_lu *hpb;
-	struct ufshpb_map_ctx *mctx;
-
-	unsigned int rgn_idx;
-	unsigned int srgn_idx;
+	struct list_head list_req;
+	char sense[SCSI_SENSE_BUFFERSIZE];
+	union {
+		struct {
+			struct ufshpb_map_ctx *mctx;
+			unsigned int rgn_idx;
+			unsigned int srgn_idx;
+			unsigned int lun;
+		} rb;
+		struct {
+			struct page *m_page;
+			unsigned int len;
+			unsigned long lpn;
+		} wb;
+	};
 };

 struct victim_select_info {
@@ -143,6 +164,10 @@ struct victim_select_info {
	atomic_t active_cnt;
 };

+struct ufshpb_params {
+	unsigned int requeue_timeout_ms;
+};
+
 struct ufshpb_stats {
	u64 hit_cnt;
	u64 miss_cnt;
@@ -150,6 +175,7 @@ struct ufshpb_stats {
	u64 rb_active_cnt;
	u64 rb_inactive_cnt;
	u64 map_req_cnt;
+	u64 pre_req_cnt;
 };

 struct ufshpb_lu {
@@ -165,6 +191,15 @@ struct ufshpb_lu {
	struct list_head lh_act_srgn; /* hold rsp_list_lock */
	struct list_head lh_inact_rgn; /* hold rsp_list_lock */

+	/* pre request information */
+	struct ufshpb_req *pre_req;
+	int num_inflight_pre_req;
+	int throttle_pre_req;
+	struct list_head lh_pre_req_free;
+	int cur_read_id;
+	int pre_req_min_tr_len;
+	int pre_req_max_tr_len;
+
	/* cached L2P map management worker */
	struct work_struct map_work;

@@ -189,6 +224,7 @@ struct ufshpb_lu {
	u32 pages_per_srgn;

	struct ufshpb_stats stats;
+	struct ufshpb_params params;

	struct kmem_cache *map_req_cache;
	struct kmem_cache *m_page_cache;
@@ -200,7 +236,7 @@ struct ufs_hba;
 struct ufshcd_lrb;

 #ifndef CONFIG_SCSI_UFS_HPB
-static void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
+static int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) { return 0; }
 static void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
 static void ufshpb_resume(struct ufs_hba *hba) {}
 static void ufshpb_suspend(struct ufs_hba *hba) {}
@@ -214,7 +250,7 @@ static bool ufshpb_is_allowed(struct ufs_hba *hba) { return false; }
 static void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf) {}
 static void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) {}
 #else
-void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
+int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
 void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
 void ufshpb_resume(struct ufs_hba *hba);
 void ufshpb_suspend(struct ufs_hba *hba);
@@ -228,6 +264,7 @@ bool ufshpb_is_allowed(struct ufs_hba *hba);
 void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf);
 void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf);
 extern struct attribute_group ufs_sysfs_hpb_stat_group;
+extern struct attribute_group ufs_sysfs_hpb_param_group;
 #endif

 #endif /* End of Header */