From patchwork Mon Jun 29 06:48:57 2020
X-Patchwork-Submitter: Daejun Park
X-Patchwork-Id: 213653
Subject: [PATCH v4 1/5] scsi: ufs: Add UFS feature related parameter
Reply-To: daejun7.park@samsung.com
From: Daejun Park
To: Daejun Park, avri.altman@wdc.com, jejb@linux.ibm.com, martin.petersen@oracle.com, asutoshd@codeaurora.org, beanhuo@micron.com, stanley.chu@mediatek.com, cang@codeaurora.org, bvanassche@acm.org, tomas.winkler@intel.com, ALIM AKHTAR
CC: linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Sang-yoon Oh, Sung-Jun Park, yongmyung lee, Jinyoung CHOI, Adel Choi, BoRam Shin
Message-ID: <1239183618.61593413402991.JavaMail.epsvc@epcpadp1>
Date: Mon, 29 Jun 2020 15:48:57 +0900

This patch adds the parameters to be used by the UFS feature layer and the HPB module.
Signed-off-by: Daejun Park
---
 drivers/scsi/ufs/ufs.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

base-commit: fbca7a04dbd8271752a58594727b61307bcc85b6

diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
index f8ab16f30fdc..ae557b8d3eba 100644
--- a/drivers/scsi/ufs/ufs.h
+++ b/drivers/scsi/ufs/ufs.h
@@ -122,6 +122,7 @@ enum flag_idn {
         QUERY_FLAG_IDN_WB_EN = 0x0E,
         QUERY_FLAG_IDN_WB_BUFF_FLUSH_EN = 0x0F,
         QUERY_FLAG_IDN_WB_BUFF_FLUSH_DURING_HIBERN8 = 0x10,
+        QUERY_FLAG_IDN_HPB_RESET = 0x11,
 };

 /* Attribute idn for Query requests */
@@ -195,6 +196,9 @@ enum unit_desc_param {
         UNIT_DESC_PARAM_PHY_MEM_RSRC_CNT = 0x18,
         UNIT_DESC_PARAM_CTX_CAPABILITIES = 0x20,
         UNIT_DESC_PARAM_LARGE_UNIT_SIZE_M1 = 0x22,
+        UNIT_DESC_HPB_LU_MAX_ACTIVE_REGIONS = 0x23,
+        UNIT_DESC_HPB_LU_PIN_REGION_START_OFFSET = 0x25,
+        UNIT_DESC_HPB_LU_NUM_PIN_REGIONS = 0x27,
         UNIT_DESC_PARAM_WB_BUF_ALLOC_UNITS = 0x29,
 };

@@ -235,6 +239,8 @@ enum device_desc_param {
         DEVICE_DESC_PARAM_PSA_MAX_DATA = 0x25,
         DEVICE_DESC_PARAM_PSA_TMT = 0x29,
         DEVICE_DESC_PARAM_PRDCT_REV = 0x2A,
+        DEVICE_DESC_PARAM_HPB_VER = 0x40,
+        DEVICE_DESC_PARAM_HPB_CONTROL = 0x42,
         DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP = 0x4F,
         DEVICE_DESC_PARAM_WB_PRESRV_USRSPC_EN = 0x53,
         DEVICE_DESC_PARAM_WB_TYPE = 0x54,
@@ -283,6 +289,10 @@ enum geometry_desc_param {
         GEOMETRY_DESC_PARAM_ENM4_MAX_NUM_UNITS = 0x3E,
         GEOMETRY_DESC_PARAM_ENM4_CAP_ADJ_FCTR = 0x42,
         GEOMETRY_DESC_PARAM_OPT_LOG_BLK_SIZE = 0x44,
+        GEOMETRY_DESC_HPB_REGION_SIZE = 0x48,
+        GEOMETRY_DESC_HPB_NUMBER_LU = 0x49,
+        GEOMETRY_DESC_HPB_SUBREGION_SIZE = 0x4A,
+        GEOMETRY_DESC_HPB_DEVICE_MAX_ACTIVE_REGIONS = 0x4B,
         GEOMETRY_DESC_PARAM_WB_MAX_ALLOC_UNITS = 0x4F,
         GEOMETRY_DESC_PARAM_WB_MAX_WB_LUNS = 0x53,
         GEOMETRY_DESC_PARAM_WB_BUFF_CAP_ADJ = 0x54,
@@ -327,6 +337,7 @@ enum {

 /* Possible values for dExtendedUFSFeaturesSupport */
 enum {
+        UFS_DEV_HPB_SUPPORT = BIT(7),
         UFS_DEV_WRITE_BOOSTER_SUP = BIT(8),
 };

@@ -537,6 +548,7 @@ struct ufs_dev_info {
         u8 *model;
         u16 wspecversion;
         u32 clk_gating_wait_us;
+        u8 b_ufs_feature_sup;
         u32 d_ext_ufs_feature_sup;
         u8 b_wb_buffer_type;
         u32 d_wb_alloc_units;

From patchwork Mon Jun 29 06:50:52 2020
X-Patchwork-Submitter: Daejun Park
X-Patchwork-Id: 213654
Subject: [PATCH v4 2/5] scsi: ufs: Add UFS-feature layer
Reply-To: daejun7.park@samsung.com
From: Daejun Park
Message-ID: <963815509.21593413582881.JavaMail.epsvc@epcpadp2>
Date: Mon, 29 Jun 2020 15:50:52 +0900

This patch adds a UFS feature layer to the UFS core driver.
   UFS Driver data structure (struct ufs_hba)
                  │
          ┌--------------┐
          │ UFS feature  │ <-- HPB module
          │    layer     │ <-- other extended feature module
          └--------------┘

Each extended UFS-feature module sits on a bus of the ufs-ext feature type. The UFS feature layer manages the common APIs used by each extended feature module. The APIs are a set of UFS Query requests and UFS Vendor commands related to each extended feature module. Other extended features can also be implemented using the proposed APIs. For example, for Write Booster, "prep_fn" can be used to protect the lifetime of the UFS device by tracking the amount of write I/O, and reset/reset_host/suspend/resume can be used to manage the kernel task that checks the device's lifetime.

The following six callback functions have been added to "ufshcd.c":
  prep_fn: called after constructing the UPIU structure
  reset: called after probing the hba
  reset_host: called before ufshcd_host_reset_and_restore
  suspend: called before ufshcd_suspend
  resume: called after ufshcd_resume
  rsp_upiu: called in ufshcd_transfer_rsp_status with SAM_STAT_GOOD state

Signed-off-by: Daejun Park
---
 drivers/scsi/ufs/Makefile     |   2 +-
 drivers/scsi/ufs/ufsfeature.c | 148 ++++++++++++++++++++++++++++++++++
 drivers/scsi/ufs/ufsfeature.h |  69 ++++++++++++++++
 drivers/scsi/ufs/ufshcd.c     |  17 ++++
 drivers/scsi/ufs/ufshcd.h     |   3 +
 5 files changed, 238 insertions(+), 1 deletion(-)
 create mode 100644 drivers/scsi/ufs/ufsfeature.c
 create mode 100644 drivers/scsi/ufs/ufsfeature.h

diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index f0c5b95ec9cc..433b871badfa 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -6,7 +6,7 @@ obj-$(CONFIG_SCSI_UFS_CDNS_PLATFORM) += cdns-pltfrm.o
 obj-$(CONFIG_SCSI_UFS_QCOM) += ufs-qcom.o
 obj-$(CONFIG_SCSI_UFS_EXYNOS) += ufs-exynos.o
 obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o
-ufshcd-core-y += ufshcd.o ufs-sysfs.o
+ufshcd-core-y += ufshcd.o ufs-sysfs.o ufsfeature.o
 ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o
 obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o
 obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
diff --git a/drivers/scsi/ufs/ufsfeature.c b/drivers/scsi/ufs/ufsfeature.c
new file mode 100644
index 000000000000..94c6be6babd3
--- /dev/null
+++ b/drivers/scsi/ufs/ufsfeature.c
@@ -0,0 +1,148 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Universal Flash Storage Feature Support
+ *
+ * Copyright (C) 2017-2018 Samsung Electronics Co., Ltd.
+ *
+ * Authors:
+ *        Yongmyung Lee
+ *        Jinyoung Choi
+ */
+
+#include "ufshcd.h"
+#include "ufsfeature.h"
+
+inline void ufsf_slave_configure(struct ufs_hba *hba,
+                                 struct scsi_device *sdev)
+{
+        /* skip well-known LU */
+        if (sdev->lun >= UFS_UPIU_MAX_UNIT_NUM_ID)
+                return;
+
+        if (!(hba->dev_info.b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT))
+                return;
+
+        atomic_inc(&hba->ufsf.slave_conf_cnt);
+
+        wake_up(&hba->ufsf.sdev_wait);
+}
+
+inline void ufsf_ops_prep_fn(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+{
+        struct ufshpb_driver *ufshpb_drv;
+
+        ufshpb_drv = dev_get_drvdata(&hba->ufsf.hpb_dev);
+
+        if (ufshpb_drv && ufshpb_drv->ufshpb_ops.prep_fn)
+                ufshpb_drv->ufshpb_ops.prep_fn(hba, lrbp);
+}
+
+inline void ufsf_ops_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+{
+        struct ufshpb_driver *ufshpb_drv;
+
+        ufshpb_drv = dev_get_drvdata(&hba->ufsf.hpb_dev);
+
+        if (ufshpb_drv && ufshpb_drv->ufshpb_ops.rsp_upiu)
+                ufshpb_drv->ufshpb_ops.rsp_upiu(hba, lrbp);
+}
+
+inline void ufsf_ops_reset_host(struct ufs_hba *hba)
+{
+        struct ufshpb_driver *ufshpb_drv;
+
+        ufshpb_drv = dev_get_drvdata(&hba->ufsf.hpb_dev);
+
+        if (ufshpb_drv && ufshpb_drv->ufshpb_ops.reset_host)
+                ufshpb_drv->ufshpb_ops.reset_host(hba);
+}
+
+inline void ufsf_ops_reset(struct ufs_hba *hba)
+{
+        struct ufshpb_driver *ufshpb_drv;
+
+        ufshpb_drv = dev_get_drvdata(&hba->ufsf.hpb_dev);
+
+        if (ufshpb_drv && ufshpb_drv->ufshpb_ops.reset)
+                ufshpb_drv->ufshpb_ops.reset(hba);
+}
+
+inline void ufsf_ops_suspend(struct ufs_hba *hba)
+{
+        struct ufshpb_driver *ufshpb_drv;
+
+        ufshpb_drv = dev_get_drvdata(&hba->ufsf.hpb_dev);
+
+        if (ufshpb_drv && ufshpb_drv->ufshpb_ops.suspend)
+                ufshpb_drv->ufshpb_ops.suspend(hba);
+}
+
+inline void ufsf_ops_resume(struct ufs_hba *hba)
+{
+        struct ufshpb_driver *ufshpb_drv;
+
+        ufshpb_drv = dev_get_drvdata(&hba->ufsf.hpb_dev);
+
+        if (ufshpb_drv && ufshpb_drv->ufshpb_ops.resume)
+                ufshpb_drv->ufshpb_ops.resume(hba);
+}
+
+struct device_type ufshpb_dev_type = {
+        .name = "ufshpb_device"
+};
+EXPORT_SYMBOL(ufshpb_dev_type);
+
+static int ufsf_bus_match(struct device *dev,
+                          struct device_driver *gendrv)
+{
+        if (dev->type == &ufshpb_dev_type)
+                return 1;
+
+        return 0;
+}
+
+struct bus_type ufsf_bus_type = {
+        .name = "ufsf_bus",
+        .match = ufsf_bus_match,
+};
+EXPORT_SYMBOL(ufsf_bus_type);
+
+static void ufsf_dev_release(struct device *dev)
+{
+        put_device(dev->parent);
+}
+
+void ufsf_scan_features(struct ufs_hba *hba)
+{
+        int ret;
+
+        init_waitqueue_head(&hba->ufsf.sdev_wait);
+        atomic_set(&hba->ufsf.slave_conf_cnt, 0);
+
+        if (hba->dev_info.wspecversion >= HPB_SUPPORTED_VERSION &&
+            (hba->dev_info.b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT)) {
+                device_initialize(&hba->ufsf.hpb_dev);
+
+                hba->ufsf.hpb_dev.bus = &ufsf_bus_type;
+                hba->ufsf.hpb_dev.type = &ufshpb_dev_type;
+                hba->ufsf.hpb_dev.parent = get_device(hba->dev);
+                hba->ufsf.hpb_dev.release = ufsf_dev_release;
+
+                dev_set_name(&hba->ufsf.hpb_dev, "ufshpb");
+                ret = device_add(&hba->ufsf.hpb_dev);
+                if (ret)
+                        dev_warn(hba->dev, "ufshpb: failed to add device\n");
+        }
+}
+
+static int __init ufsf_init(void)
+{
+        int ret;
+
+        ret = bus_register(&ufsf_bus_type);
+        if (ret)
+                pr_err("%s bus_register failed\n", __func__);
+
+        return ret;
+}
+device_initcall(ufsf_init);
diff --git a/drivers/scsi/ufs/ufsfeature.h b/drivers/scsi/ufs/ufsfeature.h
new file mode 100644
index 000000000000..1822d9d8e745
--- /dev/null
+++ b/drivers/scsi/ufs/ufsfeature.h
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Universal Flash Storage Feature Support
+ *
+ * Copyright (C) 2017-2018 Samsung Electronics Co., Ltd.
+ *
+ * Authors:
+ *        Yongmyung Lee
+ *        Jinyoung Choi
+ */
+
+#ifndef _UFSFEATURE_H_
+#define _UFSFEATURE_H_
+
+#define HPB_SUPPORTED_VERSION        0x0310
+
+struct ufs_hba;
+struct ufshcd_lrb;
+
+/**
+ * struct ufsf_operation - UFS feature specific callbacks
+ * @prep_fn: called after constructing the UPIU structure. The prep_fn should
+ *           work properly even if it processes the same SCSI command multiple
+ *           times by requeuing.
+ * @reset: called after probing hba
+ * @reset_host: called before ufshcd_host_reset_and_restore
+ * @suspend: called before ufshcd_suspend
+ * @resume: called after ufshcd_resume
+ * @rsp_upiu: called in ufshcd_transfer_rsp_status with SAM_STAT_GOOD state
+ */
+struct ufsf_operation {
+        void (*prep_fn)(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
+        void (*reset)(struct ufs_hba *hba);
+        void (*reset_host)(struct ufs_hba *hba);
+        void (*suspend)(struct ufs_hba *hba);
+        void (*resume)(struct ufs_hba *hba);
+        void (*rsp_upiu)(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
+};
+
+struct ufshpb_driver {
+        struct device_driver drv;
+        struct list_head lh_hpb_lu;
+
+        struct ufsf_operation ufshpb_ops;
+
+        /* memory management */
+        struct kmem_cache *ufshpb_mctx_cache;
+        mempool_t *ufshpb_mctx_pool;
+        mempool_t *ufshpb_page_pool;
+
+        struct workqueue_struct *ufshpb_wq;
+};
+
+struct ufsf_feature_info {
+        atomic_t slave_conf_cnt;
+        wait_queue_head_t sdev_wait;
+        struct device hpb_dev;
+};
+
+void ufsf_slave_configure(struct ufs_hba *hba, struct scsi_device *sdev);
+void ufsf_scan_features(struct ufs_hba *hba);
+void ufsf_ops_prep_fn(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
+void ufsf_ops_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
+void ufsf_ops_reset_host(struct ufs_hba *hba);
+void ufsf_ops_reset(struct ufs_hba *hba);
+void ufsf_ops_suspend(struct ufs_hba *hba);
+void ufsf_ops_resume(struct ufs_hba *hba);
+
+#endif /* End of Header */
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 59358bb75014..d02106bf80d8 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -2533,6 +2533,8 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
         ufshcd_comp_scsi_upiu(hba, lrbp);

+        ufsf_ops_prep_fn(hba, lrbp);
+
         err = ufshcd_map_sg(hba, lrbp);
         if (err) {
                 lrbp->cmd = NULL;
@@ -4665,6 +4667,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
         struct ufs_hba *hba = shost_priv(sdev->host);
         struct request_queue *q = sdev->request_queue;

+        ufsf_slave_configure(hba, sdev);
+
         blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);

         if (ufshcd_is_rpm_autosuspend_allowed(hba))
@@ -4791,6 +4795,9 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
                          */
                         pm_runtime_get_noresume(hba->dev);
                 }
+
+                if (scsi_status == SAM_STAT_GOOD)
+                        ufsf_ops_rsp_upiu(hba, lrbp);
                 break;
         case UPIU_TRANSACTION_REJECT_UPIU:
                 /* TODO: handle Reject UPIU Response */
@@ -6539,6 +6546,8 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
          * Stop the host controller and complete the requests
          * cleared by h/w
          */
+        ufsf_ops_reset_host(hba);
+
         ufshcd_hba_stop(hba);
         spin_lock_irqsave(hba->host->host_lock, flags);
@@ -6973,6 +6982,7 @@ static int ufs_get_device_desc(struct ufs_hba *hba)
         /* getting Specification Version in big endian format */
         dev_info->wspecversion = desc_buf[DEVICE_DESC_PARAM_SPEC_VER] << 8 |
                                       desc_buf[DEVICE_DESC_PARAM_SPEC_VER + 1];
+        dev_info->b_ufs_feature_sup = desc_buf[DEVICE_DESC_PARAM_UFS_FEAT];

         model_index = desc_buf[DEVICE_DESC_PARAM_PRDCT_NAME];
@@ -7343,6 +7353,7 @@ static int ufshcd_add_lus(struct ufs_hba *hba)
         }

         ufs_bsg_probe(hba);
+        ufsf_scan_features(hba);
         scsi_scan_host(hba->host);
         pm_runtime_put_sync(hba->dev);
@@ -7431,6 +7442,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool async)
         /* Enable Auto-Hibernate if configured */
         ufshcd_auto_hibern8_enable(hba);

+        ufsf_ops_reset(hba);
 out:
         trace_ufshcd_init(dev_name(hba->dev), ret,
@@ -8188,6 +8200,8 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
                 req_link_state = UIC_LINK_OFF_STATE;
         }

+        ufsf_ops_suspend(hba);
+
         /*
          * If we can't transition into any of the low power modes
          * just gate the clocks.
@@ -8309,6 +8323,7 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
         hba->clk_gating.is_suspended = false;
         hba->dev_info.b_rpm_dev_flush_capable = false;
         ufshcd_release(hba);
+        ufsf_ops_resume(hba);
 out:
         if (hba->dev_info.b_rpm_dev_flush_capable) {
                 schedule_delayed_work(&hba->rpm_dev_flush_recheck_work,
@@ -8405,6 +8420,8 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
         /* Enable Auto-Hibernate if configured */
         ufshcd_auto_hibern8_enable(hba);

+        ufsf_ops_resume(hba);
+
         if (hba->dev_info.b_rpm_dev_flush_capable) {
                 hba->dev_info.b_rpm_dev_flush_capable = false;
                 cancel_delayed_work(&hba->rpm_dev_flush_recheck_work);
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index c774012582b4..6fe5c9b3a0e7 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -46,6 +46,7 @@
 #include "ufs.h"
 #include "ufs_quirks.h"
 #include "ufshci.h"
+#include "ufsfeature.h"

 #define UFSHCD "ufshcd"
 #define UFSHCD_DRIVER_VERSION "0.2"
@@ -736,6 +737,8 @@ struct ufs_hba {
         bool wb_buf_flush_enabled;
         bool wb_enabled;
         struct delayed_work rpm_dev_flush_recheck_work;
+
+        struct ufsf_feature_info ufsf;
 };

 /* Returns true if clocks can be gated. Otherwise false */

From patchwork Mon Jun 29 06:55:00 2020
X-Patchwork-Submitter: Daejun Park
X-Patchwork-Id: 213668
Subject: [PATCH v4 3/5] scsi: ufs: Introduce HPB module
Reply-To: daejun7.park@samsung.com
From: Daejun Park
Message-ID: <1239183618.61593413882377.JavaMail.epsvc@epcpadp2>
Date: Mon, 29 Jun 2020 15:55:00 +0900

This is a patch for the HPB module. The HPB module queries UFS for device information during initialization.
We added the export symbol to two functions in ufshcd.c to initialize the HPB module. The HPB module can be loaded or built-in as needed. The minimum size of the memory pool used in the HPB module is implemented as a module parameter, so that it can be configured by the user.

To guarantee a minimum memory pool size of 4MB:
$ insmod ufshpb.ko ufshpb_host_map_kbytes=4096

Signed-off-by: Daejun Park
---
 drivers/scsi/ufs/Kconfig  |   9 +
 drivers/scsi/ufs/Makefile |   1 +
 drivers/scsi/ufs/ufshcd.c |   2 +
 drivers/scsi/ufs/ufshpb.c | 778 ++++++++++++++++++++++++++++++++++++++
 drivers/scsi/ufs/ufshpb.h | 162 ++++++++
 5 files changed, 952 insertions(+)
 create mode 100644 drivers/scsi/ufs/ufshpb.c
 create mode 100644 drivers/scsi/ufs/ufshpb.h

diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
index 3188a50dfb51..5e480b2cea12 100644
--- a/drivers/scsi/ufs/Kconfig
+++ b/drivers/scsi/ufs/Kconfig
@@ -172,3 +172,12 @@ config SCSI_UFS_EXYNOS
           Select this if you have UFS host controller on EXYNOS chipset.

           If unsure, say N.
+
+config UFSHPB
+        tristate "Support UFS Host Performance Booster"
+        depends on SCSI_UFSHCD
+        help
+          A UFS HPB Feature improves random read performance. It caches
+          L2P map of UFS to host DRAM. The driver uses HPB read command
+          by piggybacking physical page number for bypassing FTL's L2P address
+          translation.
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index 433b871badfa..aa901b92e9e7 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_SCSI_UFS_EXYNOS) += ufs-exynos.o
 obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o
 ufshcd-core-y += ufshcd.o ufs-sysfs.o ufsfeature.o
 ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o
+obj-$(CONFIG_UFSHPB) += ufshpb.o
 obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o
 obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
 obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index d02106bf80d8..367fb36e579a 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -2863,6 +2863,7 @@ int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode,
         ufshcd_release(hba);
         return err;
 }
+EXPORT_SYMBOL(ufshcd_query_flag);

 /**
  * ufshcd_query_attr - API function for sending attribute requests
@@ -3061,6 +3062,7 @@ int ufshcd_query_descriptor_retry(struct ufs_hba *hba,

         return err;
 }
+EXPORT_SYMBOL(ufshcd_query_descriptor_retry);

 /**
  * ufshcd_map_desc_id_to_length - map descriptor IDN to its length
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
new file mode 100644
index 000000000000..c63955a457b1
--- /dev/null
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -0,0 +1,778 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Universal Flash Storage Host Performance Booster
+ *
+ * Copyright (C) 2017-2018 Samsung Electronics Co., Ltd.
+ *
+ * Authors:
+ *        Yongmyung Lee
+ *        Jinyoung Choi
+ */
+
+#include
+
+#include "ufshcd.h"
+#include "ufshpb.h"
+
+static struct ufshpb_driver ufshpb_drv;
+unsigned int ufshpb_host_map_kbytes = 1 * 1024;
+
+static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb);
+
+static inline int ufshpb_is_valid_srgn(struct ufshpb_region *rgn,
+                                       struct ufshpb_subregion *srgn)
+{
+        return rgn->rgn_state != HPB_RGN_INACTIVE &&
+               srgn->srgn_state == HPB_SRGN_VALID;
+}
+
+static inline int ufshpb_get_state(struct ufshpb_lu *hpb)
+{
+        return atomic_read(&hpb->hpb_state);
+}
+
+static inline void ufshpb_set_state(struct ufshpb_lu *hpb, int state)
+{
+        atomic_set(&hpb->hpb_state, state);
+}
+
+static inline int ufshpb_lu_get_dev(struct ufshpb_lu *hpb)
+{
+        if (get_device(&hpb->hpb_lu_dev))
+                return 0;
+
+        return -ENODEV;
+}
+
+static inline int ufshpb_lu_get(struct ufshpb_lu *hpb)
+{
+        if (!hpb || (ufshpb_get_state(hpb) != HPB_PRESENT))
+                return -ENODEV;
+
+        if (ufshpb_lu_get_dev(hpb))
+                return -ENODEV;
+
+        return 0;
+}
+
+static inline void ufshpb_lu_put(struct ufshpb_lu *hpb)
+{
+        put_device(&hpb->hpb_lu_dev);
+}
+
+static void ufshpb_init_subregion_tbl(struct ufshpb_lu *hpb,
+                                      struct ufshpb_region *rgn)
+{
+        int srgn_idx;
+
+        for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+                struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx;
+
+                srgn->rgn_idx = rgn->rgn_idx;
+                srgn->srgn_idx = srgn_idx;
+                srgn->srgn_state = HPB_SRGN_UNUSED;
+        }
+}
+
+static inline int ufshpb_alloc_subregion_tbl(struct ufshpb_lu *hpb,
+                                             struct ufshpb_region *rgn,
+                                             int srgn_cnt)
+{
+        rgn->srgn_tbl = kvcalloc(srgn_cnt, sizeof(struct ufshpb_subregion),
+                                 GFP_KERNEL);
+        if (!rgn->srgn_tbl)
+                return -ENOMEM;
+
+        rgn->srgn_cnt = srgn_cnt;
+        return 0;
+}
+
+static void ufshpb_init_lu_parameter(struct ufs_hba *hba,
+                                     struct ufshpb_lu *hpb,
+                                     struct ufshpb_dev_info *hpb_dev_info,
+                                     struct ufshpb_lu_info *hpb_lu_info)
+{
+        u32 entries_per_rgn;
+        u64 rgn_mem_size;
+
+        hpb->lu_pinned_start = hpb_lu_info->pinned_start;
+        hpb->lu_pinned_end = hpb_lu_info->num_pinned ?
+                (hpb_lu_info->pinned_start + hpb_lu_info->num_pinned - 1)
+                : PINNED_NOT_SET;
+
+        rgn_mem_size = (1ULL << hpb_dev_info->rgn_size) * HPB_RGN_SIZE_UNIT
+                        / HPB_ENTRY_BLOCK_SIZE * HPB_ENTRY_SIZE;
+        hpb->srgn_mem_size = (1ULL << hpb_dev_info->srgn_size)
+                * HPB_RGN_SIZE_UNIT / HPB_ENTRY_BLOCK_SIZE * HPB_ENTRY_SIZE;
+
+        entries_per_rgn = rgn_mem_size / HPB_ENTRY_SIZE;
+        hpb->entries_per_rgn_shift = ilog2(entries_per_rgn);
+        hpb->entries_per_rgn_mask = entries_per_rgn - 1;
+
+        hpb->entries_per_srgn = hpb->srgn_mem_size / HPB_ENTRY_SIZE;
+        hpb->entries_per_srgn_shift = ilog2(hpb->entries_per_srgn);
+        hpb->entries_per_srgn_mask = hpb->entries_per_srgn - 1;
+
+        hpb->srgns_per_rgn = rgn_mem_size / hpb->srgn_mem_size;
+
+        hpb->rgns_per_lu = DIV_ROUND_UP(hpb_lu_info->num_blocks,
+                                (rgn_mem_size / HPB_ENTRY_SIZE));
+        hpb->srgns_per_lu = DIV_ROUND_UP(hpb_lu_info->num_blocks,
+                                (hpb->srgn_mem_size / HPB_ENTRY_SIZE));
+
+        hpb->pages_per_srgn = hpb->srgn_mem_size / PAGE_SIZE;
+
+        dev_info(hba->dev, "ufshpb(%d): region memory size - %llu (bytes)\n",
+                 hpb->lun, rgn_mem_size);
+        dev_info(hba->dev, "ufshpb(%d): subregion memory size - %u (bytes)\n",
+                 hpb->lun, hpb->srgn_mem_size);
+        dev_info(hba->dev, "ufshpb(%d): total blocks per lu - %d\n",
+                 hpb->lun, hpb_lu_info->num_blocks);
+        dev_info(hba->dev, "ufshpb(%d): subregions per region - %d, regions per lu - %u",
+                 hpb->lun, hpb->srgns_per_rgn, hpb->rgns_per_lu);
+}
+
+static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
+{
+        struct ufshpb_region *rgn_table, *rgn;
+        int rgn_idx, i;
+        int ret = 0;
+
+        rgn_table = kvcalloc(hpb->rgns_per_lu, sizeof(struct ufshpb_region),
+                             GFP_KERNEL);
+        if (!rgn_table)
+                return -ENOMEM;
+
+        hpb->rgn_tbl = rgn_table;
+
+        for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+                int srgn_cnt = hpb->srgns_per_rgn;
+
+                rgn = rgn_table + rgn_idx;
+                rgn->rgn_idx = rgn_idx;
+
+                if (rgn_idx == hpb->rgns_per_lu - 1)
+                        srgn_cnt = ((hpb->srgns_per_lu - 1) %
+                                    hpb->srgns_per_rgn) + 1;
+
+                ret = ufshpb_alloc_subregion_tbl(hpb, rgn, srgn_cnt);
+                if (ret)
+                        goto release_srgn_table;
+                ufshpb_init_subregion_tbl(hpb, rgn);
+
+                rgn->rgn_state = HPB_RGN_INACTIVE;
+        }
+
+        return 0;
+
+release_srgn_table:
+        for (i = 0; i < rgn_idx; i++) {
+                rgn = rgn_table + i;
+                if (rgn->srgn_tbl)
+                        kvfree(rgn->srgn_tbl);
+        }
+        kvfree(rgn_table);
+        return ret;
+}
+
+static void ufshpb_destroy_subregion_tbl(struct ufshpb_lu *hpb,
+                                         struct ufshpb_region *rgn)
+{
+        int srgn_idx;
+
+        for (srgn_idx = 0; srgn_idx < rgn->srgn_cnt; srgn_idx++) {
+                struct ufshpb_subregion *srgn;
+
+                srgn = rgn->srgn_tbl + srgn_idx;
+                srgn->srgn_state = HPB_SRGN_UNUSED;
+        }
+}
+
+static void ufshpb_destroy_region_tbl(struct ufshpb_lu *hpb)
+{
+        int rgn_idx;
+
+        for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+                struct ufshpb_region *rgn;
+
+                rgn = hpb->rgn_tbl + rgn_idx;
+                if (rgn->rgn_state != HPB_RGN_INACTIVE) {
+                        rgn->rgn_state = HPB_RGN_INACTIVE;
+
+                        ufshpb_destroy_subregion_tbl(hpb, rgn);
+                }
+
+                kvfree(rgn->srgn_tbl);
+        }
+
+        kvfree(hpb->rgn_tbl);
+}
+
+static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb,
+                              struct ufshpb_dev_info *hpb_dev_info)
+{
+        int ret;
+
+        spin_lock_init(&hpb->hpb_state_lock);
+
+        ret = ufshpb_alloc_region_tbl(hba, hpb);
+
+        ret = ufshpb_create_sysfs(hba, hpb);
+        if (ret)
+                goto release_rgn_table;
+
+        return 0;
+
+release_rgn_table:
+        ufshpb_destroy_region_tbl(hpb);
+        return ret;
+}
+
+static struct ufshpb_lu *ufshpb_alloc_hpb_lu(struct ufs_hba *hba, int lun,
+                                      struct ufshpb_dev_info *hpb_dev_info,
+                                      struct ufshpb_lu_info *hpb_lu_info)
+{
+        struct ufshpb_lu *hpb;
+        int ret;
+
+        hpb = kzalloc(sizeof(struct ufshpb_lu), GFP_KERNEL);
+        if (!hpb)
+                return NULL;
+
+        hpb->ufsf = &hba->ufsf;
+        hpb->lun = lun;
+
+        ufshpb_init_lu_parameter(hba, hpb, hpb_dev_info, hpb_lu_info);
+
+        ret = ufshpb_lu_hpb_init(hba, hpb, hpb_dev_info);
+        if (ret) {
+                dev_err(hba->dev, "hpb lu init failed. ret %d", ret);
+                goto release_hpb;
+        }
+
+        return hpb;
+release_hpb:
+        kfree(hpb);
+        return NULL;
+}
+
+static void ufshpb_lu_release(struct ufshpb_lu *hpb)
+{
+        ufshpb_destroy_region_tbl(hpb);
+
+        list_del_init(&hpb->list_hpb_lu);
+}
+
+static void ufshpb_issue_hpb_reset_query(struct ufs_hba *hba)
+{
+        int err;
+        int retries;
+
+        for (retries = 0; retries < HPB_RESET_REQ_RETRIES; retries++) {
+                err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_SET_FLAG,
+                                        QUERY_FLAG_IDN_HPB_RESET, 0, NULL);
+                if (err)
+                        dev_dbg(hba->dev,
+                                "%s: failed with error %d, retries %d\n",
+                                __func__, err, retries);
+                else
+                        break;
+        }
+
+        if (err) {
+                dev_err(hba->dev,
+                        "%s setting fHpbReset flag failed with error %d\n",
+                        __func__, err);
+                return;
+        }
+}
+
+static void ufshpb_check_hpb_reset_query(struct ufs_hba *hba)
+{
+        int err;
+        bool flag_res = true;
+        int try = 0;
+
+        /* wait for the device to complete HPB reset query */
+        do {
+                if (++try == HPB_RESET_REQ_RETRIES)
+                        break;
+
+                dev_info(hba->dev,
+                         "%s start flag reset polling %d times\n",
+                         __func__, try);
+
+                /* Poll fHpbReset flag to be cleared */
+                err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_READ_FLAG,
+                                        QUERY_FLAG_IDN_HPB_RESET, 0, &flag_res);
+                usleep_range(1000, 1100);
+        } while (flag_res);
+
+        if (err) {
+                dev_err(hba->dev,
+                        "%s reading fHpbReset flag failed with error %d\n",
+                        __func__, err);
+                return;
+        }
+
+        if (flag_res) {
+                dev_err(hba->dev,
+                        "%s fHpbReset was not cleared by the device\n",
+                        __func__);
+        }
+}
+
+static void ufshpb_reset(struct ufs_hba *hba)
+{
+        struct ufshpb_lu *hpb;
+
+        list_for_each_entry(hpb, &ufshpb_drv.lh_hpb_lu, list_hpb_lu) {
+                if (ufshpb_lu_get_dev(hpb))
+                        continue;
+
+                ufshpb_set_state(hpb, HPB_PRESENT);
+                ufshpb_lu_put(hpb);
+        }
+}
+
+static void ufshpb_reset_host(struct ufs_hba *hba)
+{
+        struct ufshpb_lu *hpb;
+
+        list_for_each_entry(hpb, &ufshpb_drv.lh_hpb_lu, list_hpb_lu) {
+                if (ufshpb_lu_get(hpb))
+                        continue;
+
+                dev_info(&hpb->hpb_lu_dev, "ufshpb run reset_host");
+
+                ufshpb_set_state(hpb, HPB_RESET);
+                ufshpb_lu_put(hpb);
+        }
+}
+
+static void ufshpb_suspend(struct ufs_hba *hba)
+{
+        struct ufshpb_lu *hpb;
+
+        list_for_each_entry(hpb, &ufshpb_drv.lh_hpb_lu, list_hpb_lu) {
+                if (ufshpb_lu_get(hpb))
+                        continue;
+
+                dev_info(&hpb->hpb_lu_dev, "ufshpb goto suspend");
+                ufshpb_set_state(hpb, HPB_SUSPEND);
+
+                ufshpb_lu_put(hpb);
+        }
+}
+
+static void ufshpb_resume(struct ufs_hba *hba)
+{
+        struct ufshpb_lu *hpb;
+
+        list_for_each_entry(hpb, &ufshpb_drv.lh_hpb_lu, list_hpb_lu) {
+                if (ufshpb_lu_get_dev(hpb))
+                        continue;
+
+                dev_info(&hpb->hpb_lu_dev, "ufshpb resume");
+                ufshpb_set_state(hpb, HPB_PRESENT);
+                ufshpb_lu_put(hpb);
+        }
+}
+
+static void ufshpb_stat_init(struct ufshpb_lu *hpb)
+{
+        atomic_set(&hpb->stats.hit_cnt, 0);
+        atomic_set(&hpb->stats.miss_cnt, 0);
+        atomic_set(&hpb->stats.rb_noti_cnt, 0);
+        atomic_set(&hpb->stats.rb_active_cnt, 0);
+        atomic_set(&hpb->stats.rb_inactive_cnt, 0);
+        atomic_set(&hpb->stats.map_req_cnt, 0);
+}
+
+/* SYSFS functions */
+#define ufshpb_sysfs_attr_show_func(__name)                             \
+static ssize_t __name##_show(struct device *dev,                        \
+                             struct device_attribute *attr,             \
+                             char *buf)                                 \
+{                                                                       \
+        struct ufshpb_lu *hpb;                                          \
+        hpb = container_of(dev, struct ufshpb_lu, hpb_lu_dev);          \
+        return snprintf(buf, PAGE_SIZE, "%d\n",                         \
+                        atomic_read(&hpb->stats.__name));               \
+}
+
+ufshpb_sysfs_attr_show_func(hit_cnt);
+ufshpb_sysfs_attr_show_func(miss_cnt);
+ufshpb_sysfs_attr_show_func(rb_noti_cnt);
+ufshpb_sysfs_attr_show_func(rb_active_cnt);
+ufshpb_sysfs_attr_show_func(rb_inactive_cnt);
+ufshpb_sysfs_attr_show_func(map_req_cnt);
+
+static DEVICE_ATTR_RO(hit_cnt);
+static DEVICE_ATTR_RO(miss_cnt);
+static DEVICE_ATTR_RO(rb_noti_cnt);
+static DEVICE_ATTR_RO(rb_active_cnt);
+static DEVICE_ATTR_RO(rb_inactive_cnt);
+static DEVICE_ATTR_RO(map_req_cnt);
+
+static struct attribute *hpb_dev_attrs[] = {
+        &dev_attr_hit_cnt.attr,
+        &dev_attr_miss_cnt.attr,
+        &dev_attr_rb_noti_cnt.attr,
&dev_attr_rb_active_cnt.attr, + &dev_attr_rb_inactive_cnt.attr, + &dev_attr_map_req_cnt.attr, + NULL, +}; + +static struct attribute_group ufshpb_sysfs_group = { + .attrs = hpb_dev_attrs, +}; + +static inline void ufshpb_dev_release(struct device *dev) +{ + struct ufs_hba *hba; + struct ufsf_feature_info *ufsf; + struct ufshpb_lu *hpb; + + hpb = container_of(dev, struct ufshpb_lu, hpb_lu_dev); + ufsf = hpb->ufsf; + hba = container_of(ufsf, struct ufs_hba, ufsf); + + ufshpb_lu_release(hpb); + dev_info(dev, "%s: release success\n", __func__); + put_device(dev->parent); + + kfree(hpb); +} + +static int ufshpb_create_sysfs(struct ufs_hba *hba, struct ufshpb_lu *hpb) +{ + int ret; + + device_initialize(&hpb->hpb_lu_dev); + + ufshpb_stat_init(hpb); + + hpb->hpb_lu_dev.parent = get_device(&hba->ufsf.hpb_dev); + hpb->hpb_lu_dev.release = ufshpb_dev_release; + dev_set_name(&hpb->hpb_lu_dev, "ufshpb_lu%d", hpb->lun); + + ret = device_add(&hpb->hpb_lu_dev); + if (ret) { + dev_err(hba->dev, "ufshpb(%d) device_add failed", + hpb->lun); + return -ENODEV; + } + + if (device_add_group(&hpb->hpb_lu_dev, &ufshpb_sysfs_group)) + dev_err(hba->dev, "ufshpb(%d) create file error\n", + hpb->lun); + + return 0; +} + +static int ufshpb_read_desc(struct ufs_hba *hba, u8 desc_id, u8 desc_index, + u8 selector, u8 *desc_buf) +{ + int err = 0; + int size; + + ufshcd_map_desc_id_to_length(hba, desc_id, &size); + + pm_runtime_get_sync(hba->dev); + + err = ufshcd_query_descriptor_retry(hba, UPIU_QUERY_OPCODE_READ_DESC, + desc_id, desc_index, + selector, + desc_buf, &size); + if (err) + dev_err(hba->dev, "read desc failed: %d, id %d, idx %d\n", + err, desc_id, desc_index); + + pm_runtime_put_sync(hba->dev); + + return err; +} + +static int ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf, + struct ufshpb_dev_info *hpb_dev_info) +{ + int hpb_device_max_active_rgns = 0; + int hpb_num_lu; + + hpb_num_lu = geo_buf[GEOMETRY_DESC_HPB_NUMBER_LU]; + if (hpb_num_lu == 0) { + dev_err(hba->dev, "No HPB LU 
supported\n"); + return -ENODEV; + } + + hpb_dev_info->rgn_size = geo_buf[GEOMETRY_DESC_HPB_REGION_SIZE]; + hpb_dev_info->srgn_size = geo_buf[GEOMETRY_DESC_HPB_SUBREGION_SIZE]; + hpb_device_max_active_rgns = + get_unaligned_be16(geo_buf + + GEOMETRY_DESC_HPB_DEVICE_MAX_ACTIVE_REGIONS); + + if (hpb_dev_info->rgn_size == 0 || hpb_dev_info->srgn_size == 0 || + hpb_device_max_active_rgns == 0) { + dev_err(hba->dev, "No HPB supported device\n"); + return -ENODEV; + } + + return 0; +} + +static int ufshpb_get_dev_info(struct ufs_hba *hba, + struct ufshpb_dev_info *hpb_dev_info, + u8 *desc_buf) +{ + int ret; + int version; + u8 hpb_mode; + + ret = ufshpb_read_desc(hba, QUERY_DESC_IDN_DEVICE, 0, 0, desc_buf); + if (ret) { + dev_err(hba->dev, "%s: idn: %d query request failed\n", + __func__, QUERY_DESC_IDN_DEVICE); + return -ENODEV; + } + + hpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; + if (hpb_mode == HPB_HOST_CONTROL) { + dev_err(hba->dev, "%s: host control mode is not supported.\n", + __func__); + return -ENODEV; + } + + version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER); + if (version != HPB_SUPPORT_VERSION) { + dev_err(hba->dev, "%s: HPB %x version is not supported.\n", + __func__, version); + return -ENODEV; + } + + /* + * Get the number of user logical unit to check whether all + * scsi_device finish initialization + */ + hpb_dev_info->num_lu = desc_buf[DEVICE_DESC_PARAM_NUM_LU]; + + ret = ufshpb_read_desc(hba, QUERY_DESC_IDN_GEOMETRY, 0, 0, desc_buf); + if (ret) { + dev_err(hba->dev, "%s: idn: %d query request failed\n", + __func__, QUERY_DESC_IDN_DEVICE); + return ret; + } + + ret = ufshpb_get_geo_info(hba, desc_buf, hpb_dev_info); + if (ret) + return ret; + + return 0; +} + +static int ufshpb_get_lu_info(struct ufs_hba *hba, int lun, + struct ufshpb_lu_info *hpb_lu_info, + u8 *desc_buf) +{ + u16 max_active_rgns; + u8 lu_enable; + int ret; + + ret = ufshpb_read_desc(hba, QUERY_DESC_IDN_UNIT, lun, 0, desc_buf); + if (ret) { + dev_err(hba->dev, 
+ "%s: idn: %d lun: %d query request failed", + __func__, QUERY_DESC_IDN_UNIT, lun); + return ret; + } + + lu_enable = desc_buf[UNIT_DESC_PARAM_LU_ENABLE]; + if (lu_enable != LU_ENABLED_HPB_FUNC) + return -ENODEV; + + max_active_rgns = get_unaligned_be16( + desc_buf + UNIT_DESC_HPB_LU_MAX_ACTIVE_REGIONS); + if (!max_active_rgns) { + dev_err(hba->dev, + "lun %d wrong number of max active regions\n", lun); + return -ENODEV; + } + + hpb_lu_info->num_blocks = get_unaligned_be64( + desc_buf + UNIT_DESC_PARAM_LOGICAL_BLK_COUNT); + hpb_lu_info->pinned_start = get_unaligned_be16( + desc_buf + UNIT_DESC_HPB_LU_PIN_REGION_START_OFFSET); + hpb_lu_info->num_pinned = get_unaligned_be16( + desc_buf + UNIT_DESC_HPB_LU_NUM_PIN_REGIONS); + hpb_lu_info->max_active_rgns = max_active_rgns; + + return 0; +} + +static void ufshpb_scan_hpb_lu(struct ufs_hba *hba, + struct ufshpb_dev_info *hpb_dev_info, + u8 *desc_buf) +{ + struct scsi_device *sdev; + struct ufshpb_lu *hpb; + int find_hpb_lu = 0; + int ret; + + INIT_LIST_HEAD(&ufshpb_drv.lh_hpb_lu); + + shost_for_each_device(sdev, hba->host) { + struct ufshpb_lu_info hpb_lu_info = { 0 }; + int lun = sdev->lun; + + if (lun >= hba->dev_info.max_lu_supported) + continue; + + ret = ufshpb_get_lu_info(hba, lun, &hpb_lu_info, desc_buf); + if (ret) + continue; + + hpb = ufshpb_alloc_hpb_lu(hba, lun, hpb_dev_info, + &hpb_lu_info); + if (!hpb) + continue; + + hpb->sdev_ufs_lu = sdev; + sdev->hostdata = hpb; + + list_add_tail(&hpb->list_hpb_lu, &ufshpb_drv.lh_hpb_lu); + find_hpb_lu++; + } + + if (!find_hpb_lu) + return; + + ufshpb_check_hpb_reset_query(hba); + dev_set_drvdata(&hba->ufsf.hpb_dev, &ufshpb_drv); + + list_for_each_entry(hpb, &ufshpb_drv.lh_hpb_lu, list_hpb_lu) { + dev_info(&hpb->hpb_lu_dev, "set state to present\n"); + ufshpb_set_state(hpb, HPB_PRESENT); + } +} + +static int ufshpb_probe(struct device *dev) +{ + struct ufs_hba *hba; + struct ufsf_feature_info *ufsf; + struct ufshpb_dev_info hpb_dev_info = { 0 }; + char *desc_buf; + int 
ret; + + if (dev->type != &ufshpb_dev_type) + return -ENODEV; + + ufsf = container_of(dev, struct ufsf_feature_info, hpb_dev); + hba = container_of(ufsf, struct ufs_hba, ufsf); + + desc_buf = kzalloc(QUERY_DESC_MAX_SIZE, GFP_KERNEL); + if (!desc_buf) + goto release_desc_buf; + + ret = ufshpb_get_dev_info(hba, &hpb_dev_info, desc_buf); + if (ret) + goto release_desc_buf; + + /* + * Because HPB driver uses scsi_device data structure, + * we should wait at this point until finishing initialization of all + * scsi devices. Even if timeout occurs, HPB driver will search + * the scsi_device list on struct scsi_host (shost->__host list_head) + * and can find out HPB logical units in all scsi_devices + */ + wait_event_timeout(hba->ufsf.sdev_wait, + (atomic_read(&hba->ufsf.slave_conf_cnt) + == hpb_dev_info.num_lu), + SDEV_WAIT_TIMEOUT); + + ufshpb_issue_hpb_reset_query(hba); + + dev_dbg(hba->dev, "ufshpb: slave count %d, lu count %d\n", + atomic_read(&hba->ufsf.slave_conf_cnt), hpb_dev_info.num_lu); + + ufshpb_scan_hpb_lu(hba, &hpb_dev_info, desc_buf); + +release_desc_buf: + kfree(desc_buf); + return 0; +} + +static int ufshpb_remove(struct device *dev) +{ + struct ufshpb_lu *hpb, *n_hpb; + struct ufsf_feature_info *ufsf; + struct scsi_device *sdev; + + ufsf = container_of(dev, struct ufsf_feature_info, hpb_dev); + + dev_set_drvdata(&ufsf->hpb_dev, NULL); + + list_for_each_entry_safe(hpb, n_hpb, &ufshpb_drv.lh_hpb_lu, + list_hpb_lu) { + ufshpb_set_state(hpb, HPB_FAILED); + + sdev = hpb->sdev_ufs_lu; + sdev->hostdata = NULL; + + device_del(&hpb->hpb_lu_dev); + + dev_info(&hpb->hpb_lu_dev, "hpb_lu_dev refcnt %d\n", + kref_read(&hpb->hpb_lu_dev.kobj.kref)); + put_device(&hpb->hpb_lu_dev); + } + dev_info(dev, "ufshpb: remove success\n"); + + return 0; +} + +static struct ufshpb_driver ufshpb_drv = { + .drv = { + .name = "ufshpb_driver", + .owner = THIS_MODULE, + .probe = ufshpb_probe, + .remove = ufshpb_remove, + .bus = &ufsf_bus_type, + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 
+ }, + .ufshpb_ops = { + .reset = ufshpb_reset, + .reset_host = ufshpb_reset_host, + .suspend = ufshpb_suspend, + .resume = ufshpb_resume, + }, +}; + +module_param(ufshpb_host_map_kbytes, uint, 0644); +MODULE_PARM_DESC(ufshpb_host_map_kbytes, + "ufshpb host mapping memory kilo-bytes for ufshpb memory-pool"); + +static int __init ufshpb_init(void) +{ + int ret; + + ret = driver_register(&ufshpb_drv.drv); + if (ret) + pr_err("ufshpb: driver register failed\n"); + + return ret; +} + +static void __exit ufshpb_exit(void) +{ + driver_unregister(&ufshpb_drv.drv); +} + +MODULE_AUTHOR("Yongmyong Lee "); +MODULE_AUTHOR("Jinyoung Choi "); +MODULE_DESCRIPTION("UFS Host Performance Booster Driver"); + +module_init(ufshpb_init); +module_exit(ufshpb_exit); +MODULE_LICENSE("GPL"); diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h new file mode 100644 index 000000000000..eaa4a3e035b1 --- /dev/null +++ b/drivers/scsi/ufs/ufshpb.h @@ -0,0 +1,162 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Universal Flash Storage Host Performance Booster + * + * Copyright (C) 2017-2018 Samsung Electronics Co., Ltd. 
+ * + * Authors: + * Yongmyung Lee + * Jinyoung Choi + */ + +#ifndef _UFSHPB_H_ +#define _UFSHPB_H_ + +/* hpb response UPIU macro */ +#define MAX_ACTIVE_NUM 2 +#define MAX_INACTIVE_NUM 2 +#define HPB_RSP_NONE 0x00 +#define HPB_RSP_REQ_REGION_UPDATE 0x01 +#define HPB_RSP_DEV_RESET 0x02 +#define DEV_DATA_SEG_LEN 0x14 +#define DEV_SENSE_SEG_LEN 0x12 +#define DEV_DES_TYPE 0x80 +#define DEV_ADDITIONAL_LEN 0x10 + +/* hpb map & entries macro */ +#define HPB_RGN_SIZE_UNIT 512 +#define HPB_ENTRY_BLOCK_SIZE 4096 +#define HPB_ENTRY_SIZE 0x8 +#define PINNED_NOT_SET U32_MAX + +/* hpb support chunk size */ +#define HPB_MULTI_CHUNK_HIGH 1 + +/* hpb vender defined opcode */ +#define UFSHPB_READ 0xF8 +#define UFSHPB_READ_BUFFER 0xF9 +#define UFSHPB_WRITE_BUFFER 0xFA +#define UFSHPB_READ_BUFFER_ID 0x01 +#define UFSHPB_WRITE_BUFFER_ID 0x02 +#define HPB_READ_BUFFER_CMD_LENGTH 10 +#define LU_ENABLED_HPB_FUNC 0x02 + +#define SDEV_WAIT_TIMEOUT (10 * HZ) +#define MAP_REQ_TIMEOUT (30 * HZ) +#define HPB_RESET_REQ_RETRIES 10 +#define HPB_RESET_REQ_MSLEEP 2 + +#define HPB_SUPPORT_VERSION 0x100 + +enum UFSHPB_MODE { + HPB_HOST_CONTROL, + HPB_DEVICE_CONTROL, +}; + +enum UFSHPB_STATE { + HPB_PRESENT = 1, + HPB_SUSPEND, + HPB_FAILED, + HPB_RESET, +}; + +enum HPB_RGN_STATE { + HPB_RGN_INACTIVE, + HPB_RGN_ACTIVE, + /* pinned regions are always active */ + HPB_RGN_PINNED, +}; + +enum HPB_SRGN_STATE { + HPB_SRGN_UNUSED, + HPB_SRGN_INVALID, + HPB_SRGN_VALID, + HPB_SRGN_ISSUED, +}; + +/** + * struct ufshpb_dev_info - UFSHPB device related info + * @num_lu: the number of user logical unit to check whether all lu finished + * initialization + * @rgn_size: device reported HPB region size + * @srgn_size: device reported HPB sub-region size + */ +struct ufshpb_dev_info { + int num_lu; + int rgn_size; + int srgn_size; +}; + +/** + * struct ufshpb_lu_info - UFSHPB logical unit related info + * @num_blocks: the number of logical block + * @pinned_start: the start region number of pinned region + * @num_pinned: 
the number of pinned regions + * @max_active_rgns: maximum number of active regions + */ +struct ufshpb_lu_info { + int num_blocks; + int pinned_start; + int num_pinned; + int max_active_rgns; +}; + +struct ufshpb_subregion { + enum HPB_SRGN_STATE srgn_state; + int rgn_idx; + int srgn_idx; +}; + +struct ufshpb_region { + struct ufshpb_subregion *srgn_tbl; + enum HPB_RGN_STATE rgn_state; + int rgn_idx; + int srgn_cnt; +}; + +struct ufshpb_stats { + atomic_t hit_cnt; + atomic_t miss_cnt; + atomic_t rb_noti_cnt; + atomic_t rb_active_cnt; + atomic_t rb_inactive_cnt; + atomic_t map_req_cnt; +}; + +struct ufshpb_lu { + int lun; + + struct device hpb_lu_dev; + struct scsi_device *sdev_ufs_lu; + + struct ufshpb_region *rgn_tbl; + + spinlock_t hpb_state_lock; + atomic_t hpb_state; /* hpb_state_lock */ + + /* pinned region information */ + u32 lu_pinned_start; + u32 lu_pinned_end; + + /* HPB related configuration */ + u32 rgns_per_lu; + u32 srgns_per_lu; + int srgns_per_rgn; + u32 srgn_mem_size; + u32 entries_per_rgn_mask; + u32 entries_per_rgn_shift; + u32 entries_per_srgn; + u32 entries_per_srgn_mask; + u32 entries_per_srgn_shift; + u32 pages_per_srgn; + + struct ufshpb_stats stats; + + struct ufsf_feature_info *ufsf; + struct list_head list_hpb_lu; +}; + +extern struct device_type ufshpb_dev_type; +extern struct bus_type ufsf_bus_type; + +#endif /* End of Header */