From patchwork Mon Jun 7 06:13:51 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 455634
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen",
    linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v10 02/12] scsi: ufshpb: Add host control mode support to rsp_upiu Date: Mon, 7 Jun 2021 09:13:51 +0300 Message-Id: <20210607061401.58884-3-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com> References: <20210607061401.58884-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org In device control mode, the device may recommend the host to either activate or inactivate a region, and the host should follow. Meaning those are not actually recommendations, but more of instructions. On the contrary, in host control mode, the recommendation protocol is slightly changed: a) The device may only recommend the host to update a subregion of an already-active region. And, b) The device may *not* recommend to inactivate a region. Furthermore, in host control mode, the host may choose not to follow any of the device's recommendations. However, in case of a recommendation to update an active and clean subregion, it is better to follow those recommendation because otherwise the host has no other way to know that some internal relocation took place. Signed-off-by: Avri Altman Reviewed-by: Daejun Park --- drivers/scsi/ufs/ufshpb.c | 34 +++++++++++++++++++++++++++++++++- drivers/scsi/ufs/ufshpb.h | 2 ++ 2 files changed, 35 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index d45343a00e9f..6f2e3b3c9252 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -166,6 +166,8 @@ static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, else set_bit_len = cnt; + set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); + if (rgn->rgn_state != HPB_RGN_INACTIVE && srgn->srgn_state == HPB_SRGN_VALID) bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len); @@ -235,6 +237,11 @@ static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, return false; } +static inline bool is_rgn_dirty(struct ufshpb_region *rgn) +{ + return test_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); +} + static int ufshpb_fill_ppn_from_page(struct ufshpb_lu *hpb, struct ufshpb_map_ctx *mctx, int pos, int len, __be64 *ppn_buf) @@ -712,6 +719,7 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb, static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, struct ufshpb_subregion *srgn) { + struct ufshpb_region *rgn; u32 num_entries = hpb->entries_per_srgn; if (!srgn->mctx) { @@ -725,6 +733,10 @@ static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, num_entries = hpb->last_srgn_entries; bitmap_zero(srgn->mctx->ppn_dirty, num_entries); + + rgn = hpb->rgn_tbl + srgn->rgn_idx; + clear_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); + return 0; } @@ -1244,6 +1256,18 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, srgn_i = be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn); + rgn = hpb->rgn_tbl + rgn_i; + if (hpb->is_hcm && + (rgn->rgn_state != HPB_RGN_ACTIVE || is_rgn_dirty(rgn))) { + /* + * in host control mode, subregion activation + * recommendations are only allowed to active regions. 
+			 * Also, ignore recommendations for dirty regions - the
+			 * host will make decisions concerning those by itself
+			 */
+			continue;
+		}
+
 		dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
 			"activate(%d) region %d - %d\n", i, rgn_i, srgn_i);

@@ -1251,7 +1275,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
 		ufshpb_update_active_info(hpb, rgn_i, srgn_i);
 		spin_unlock(&hpb->rsp_list_lock);

-		rgn = hpb->rgn_tbl + rgn_i;
 		srgn = rgn->srgn_tbl + srgn_i;

 		/* blocking HPB_READ */
@@ -1262,6 +1285,14 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
 		hpb->stats.rb_active_cnt++;
 	}

+	if (hpb->is_hcm) {
+		/*
+		 * in host control mode the device is not allowed to inactivate
+		 * regions
+		 */
+		goto out;
+	}
+
 	for (i = 0; i < rsp_field->inactive_rgn_cnt; i++) {
 		rgn_i = be16_to_cpu(rsp_field->hpb_inactive_field[i]);
 		dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
@@ -1286,6 +1317,7 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
 		hpb->stats.rb_inactive_cnt++;
 	}

+out:
 	dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "Noti: #ACT %u #INACT %u\n",
 		rsp_field->active_rgn_cnt, rsp_field->inactive_rgn_cnt);

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 7df30340386a..032672114881 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -121,6 +121,8 @@ struct ufshpb_region {

 	/* below information is used by lru */
 	struct list_head list_lru_rgn;
+	unsigned long rgn_flags;
+#define RGN_FLAG_DIRTY 0
 };

 #define for_each_sub_region(rgn, i, srgn)	\
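For a quick mental model of the rule the ufshpb_rsp_req_region_update()
hunk above enforces, here is a minimal sketch. It is an illustration
only, not part of the patch: hpb_follow_activation_rsp() is a
hypothetical helper name; is_rgn_dirty() and the HPB_RGN_* states come
from ufshpb.h.

	/*
	 * Illustration only. In host control mode a device activation
	 * recommendation is honored only for a region that is already
	 * active and whose dirty flag is clear; in device control mode
	 * the host always follows.
	 */
	static bool hpb_follow_activation_rsp(struct ufshpb_lu *hpb,
					      struct ufshpb_region *rgn)
	{
		if (!hpb->is_hcm)
			return true;

		return rgn->rgn_state == HPB_RGN_ACTIVE && !is_rgn_dirty(rgn);
	}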
From patchwork Mon Jun 7 06:13:53 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 455633
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen",
    linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche, yongmyung lee, Daejun Park,
    alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang,
    Avi Shchislowski, Bean Huo, cang@codeaurora.org, stanley.chu@mediatek.com,
    Avri Altman
Subject: [PATCH v10 04/12] scsi: ufshpb: Add reads counter
Date: Mon, 7 Jun 2021 09:13:53 +0300
Message-Id: <20210607061401.58884-5-avri.altman@wdc.com>
In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com>
References: <20210607061401.58884-1-avri.altman@wdc.com>

In host control mode, reads are the major source of activation trials.
Keep track of read counters for both active and inactive regions.

We reset the read counter upon write - we are only interested in
"clean" reads.

Keep those counters normalized, as we are using them as a comparative
score to make various decisions. If, during consecutive normalizations,
an active region has exhausted its reads - inactivate it.

While at it, protect the {active,inactive}_count stats by moving them
into the applicable handlers.
Signed-off-by: Avri Altman
Reviewed-by: Daejun Park
---
 drivers/scsi/ufs/ufshpb.c | 94 ++++++++++++++++++++++++++++++++++++---
 drivers/scsi/ufs/ufshpb.h |  9 ++++
 2 files changed, 97 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 01a4efa37db8..b080bd9ca35a 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -16,6 +16,8 @@
 #include "ufshpb.h"
 #include "../sd.h"

+#define ACTIVATION_THRESHOLD 8 /* 8 IOs */
+
 /* memory management */
 static struct kmem_cache *ufshpb_mctx_cache;
 static mempool_t *ufshpb_mctx_pool;
@@ -26,6 +28,9 @@ static int tot_active_srgn_pages;

 static struct workqueue_struct *ufshpb_wq;

+static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
+				      int srgn_idx);
+
 bool ufshpb_is_allowed(struct ufs_hba *hba)
 {
 	return !(hba->ufshpb_dev.hpb_disabled);
@@ -148,7 +153,7 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx,
 			       int srgn_offset, int cnt, bool set_dirty)
 {
 	struct ufshpb_region *rgn;
-	struct ufshpb_subregion *srgn;
+	struct ufshpb_subregion *srgn, *prev_srgn = NULL;
 	int set_bit_len;
 	int bitmap_len;
 	unsigned long flags;
@@ -167,15 +172,39 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx,
 	else
 		set_bit_len = cnt;

-	if (set_dirty)
-		set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
-
 	spin_lock_irqsave(&hpb->rgn_state_lock, flags);
 	if (set_dirty && rgn->rgn_state != HPB_RGN_INACTIVE &&
 	    srgn->srgn_state == HPB_SRGN_VALID)
 		bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len);
 	spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);

+	if (hpb->is_hcm && prev_srgn != srgn) {
+		bool activate = false;
+
+		spin_lock(&rgn->rgn_lock);
+		if (set_dirty) {
+			rgn->reads -= srgn->reads;
+			srgn->reads = 0;
+			set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+		} else {
+			srgn->reads++;
+			rgn->reads++;
+			if (srgn->reads == ACTIVATION_THRESHOLD)
+				activate = true;
+		}
+		spin_unlock(&rgn->rgn_lock);
+
+		if (activate) {
+			spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+			ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
+			spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+			dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+				"activate region %d-%d\n", rgn_idx, srgn_idx);
+		}
+
+		prev_srgn = srgn;
+	}
+
 	srgn_offset = 0;
 	if (++srgn_idx == hpb->srgns_per_rgn) {
 		srgn_idx = 0;
@@ -604,6 +633,19 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)

 	WARN_ON_ONCE(transfer_len > HPB_MULTI_CHUNK_HIGH);

+	if (hpb->is_hcm) {
+		/*
+		 * in host control mode, reads are the main source for
+		 * activation trials.
+		 */
+		ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset,
+				   transfer_len, false);
+
+		/* keep those counters normalized */
+		if (rgn->reads > hpb->entries_per_srgn)
+			schedule_work(&hpb->ufshpb_normalization_work);
+	}
+
 	spin_lock_irqsave(&hpb->rgn_state_lock, flags);
 	if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
 				  transfer_len)) {
@@ -755,6 +797,8 @@ static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,

 	if (list_empty(&srgn->list_act_srgn))
 		list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+
+	hpb->stats.rb_active_cnt++;
 }

 static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)
@@ -770,6 +814,8 @@ static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)

 	if (list_empty(&rgn->list_inact_rgn))
 		list_add_tail(&rgn->list_inact_rgn, &hpb->lh_inact_rgn);
+
+	hpb->stats.rb_inactive_cnt++;
 }

 static void ufshpb_activate_subregion(struct ufshpb_lu *hpb,
@@ -1090,6 +1136,7 @@ static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
 				rgn->rgn_idx);
 			goto out;
 		}
+
 		if (!list_empty(&rgn->list_lru_rgn)) {
 			if (ufshpb_check_srgns_issue_state(hpb, rgn)) {
 				ret = -EBUSY;
@@ -1284,7 +1331,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
 		if (srgn->srgn_state == HPB_SRGN_VALID)
 			srgn->srgn_state = HPB_SRGN_INVALID;
 		spin_unlock(&hpb->rgn_state_lock);
-		hpb->stats.rb_active_cnt++;
 	}

 	if (hpb->is_hcm) {
@@ -1316,7 +1362,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
 		}
 		spin_unlock(&hpb->rgn_state_lock);
-		hpb->stats.rb_inactive_cnt++;
 	}

 out:
@@ -1515,6 +1560,36 @@ static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
 	spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
 }

+static void ufshpb_normalization_work_handler(struct work_struct *work)
+{
+	struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu,
+					     ufshpb_normalization_work);
+	int rgn_idx;
+
+	for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+		struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx;
+		int srgn_idx;
+
+		spin_lock(&rgn->rgn_lock);
+		rgn->reads = 0;
+		for (srgn_idx = 0; srgn_idx < hpb->srgns_per_rgn; srgn_idx++) {
+			struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx;
+
+			srgn->reads >>= 1;
+			rgn->reads += srgn->reads;
+		}
+		spin_unlock(&rgn->rgn_lock);
+
+		if (rgn->rgn_state != HPB_RGN_ACTIVE || rgn->reads)
+			continue;
+
+		/* if region is active but has no reads - inactivate it */
+		spin_lock(&hpb->rsp_list_lock);
+		ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
+		spin_unlock(&hpb->rsp_list_lock);
+	}
+}
+
 static void ufshpb_map_work_handler(struct work_struct *work)
 {
 	struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work);
@@ -1673,6 +1748,8 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 		rgn = rgn_table + rgn_idx;
 		rgn->rgn_idx = rgn_idx;

+		spin_lock_init(&rgn->rgn_lock);
+
 		INIT_LIST_HEAD(&rgn->list_inact_rgn);
 		INIT_LIST_HEAD(&rgn->list_lru_rgn);

@@ -1912,6 +1989,9 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 	INIT_LIST_HEAD(&hpb->list_hpb_lu);

 	INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
+	if (hpb->is_hcm)
+		INIT_WORK(&hpb->ufshpb_normalization_work,
+			  ufshpb_normalization_work_handler);

 	hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
 			  sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -2011,6 +2091,8 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)

 static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
 {
+	if (hpb->is_hcm)
+		cancel_work_sync(&hpb->ufshpb_normalization_work);
 	cancel_work_sync(&hpb->map_work);
 }

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 032672114881..87495e59fcf1 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -106,6 +106,10 @@ struct ufshpb_subregion {
 	int rgn_idx;
 	int srgn_idx;
 	bool is_last;
+
+	/* subregion reads - for host mode */
+	unsigned int reads;
+
 	/* below information is used by rsp_list */
 	struct list_head list_act_srgn;
 };
@@ -123,6 +127,10 @@ struct ufshpb_region {
 	struct list_head list_lru_rgn;
 	unsigned long rgn_flags;
 #define RGN_FLAG_DIRTY 0
+
+	/* region reads - for host mode */
+	spinlock_t rgn_lock;
+	unsigned int reads;
 };

 #define for_each_sub_region(rgn, i, srgn)	\
@@ -212,6 +220,7 @@ struct ufshpb_lu {

 	/* for selecting victim */
 	struct victim_select_info lru_info;
+	struct work_struct ufshpb_normalization_work;

 	/* pinned region information */
 	u32 lu_pinned_start;
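To make the normalization arithmetic concrete, here is a stand-alone
toy model of the "bucket" scheme the commit message describes. It is a
sketch with hypothetical values, not driver code; the only assumption
is the shift-by-one the patch uses in the work handler.

	#include <stdio.h>

	/*
	 * Toy model of reads normalization: each pass halves every
	 * subregion counter and rebuilds the region sum, so recent
	 * reads dominate and an unread-but-active region decays
	 * toward zero, at which point it becomes an inactivation
	 * candidate.
	 */
	int main(void)
	{
		unsigned int srgn_reads[4] = { 8, 3, 0, 13 }; /* hypothetical */
		unsigned int rgn_reads = 0;
		int i;

		for (i = 0; i < 4; i++) {
			srgn_reads[i] >>= 1;	/* one normalization pass */
			rgn_reads += srgn_reads[i];
		}
		/* 4 + 1 + 0 + 6 = 11 */
		printf("region reads after normalization: %u\n", rgn_reads);
		return 0;
	}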
From patchwork Mon Jun 7 06:13:55 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 455632
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen",
    linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche, yongmyung lee, Daejun Park,
    alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang,
    Avi Shchislowski, Bean Huo, cang@codeaurora.org, stanley.chu@mediatek.com,
    Avri Altman
Subject: [PATCH v10 06/12] scsi: ufshpb: Region inactivation in host mode
Date: Mon, 7 Jun 2021 09:13:55 +0300
Message-Id: <20210607061401.58884-7-avri.altman@wdc.com>
In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com>
References: <20210607061401.58884-1-avri.altman@wdc.com>

In host mode, the host is expected to send an HPB-WRITE-BUFFER with
buffer-id = 0x1 when it inactivates a region.

Use the map-requests pool, as there is no point in assigning a
designated cache for umap-requests.

Signed-off-by: Avri Altman
Reviewed-by: Daejun Park
---
 drivers/scsi/ufs/ufshpb.c | 46 +++++++++++++++++++++++++++++++++------
 drivers/scsi/ufs/ufshpb.h |  1 +
 2 files changed, 40 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index f9efef35316e..0ef46aa71045 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -691,7 +691,8 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 }

 static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb,
-					 int rgn_idx, enum req_opf dir)
+					 int rgn_idx, enum req_opf dir,
+					 bool atomic)
 {
 	struct ufshpb_req *rq;
 	struct request *req;
@@ -705,7 +706,7 @@ static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb,
 	req = blk_get_request(hpb->sdev_ufs_lu->request_queue, dir,
 			      BLK_MQ_REQ_NOWAIT);

-	if ((PTR_ERR(req) == -EWOULDBLOCK) && (--retries > 0)) {
+	if (!atomic && (PTR_ERR(req) == -EWOULDBLOCK) && (--retries > 0)) {
 		usleep_range(3000, 3100);
 		goto retry;
 	}
@@ -736,7 +737,7 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
 	struct ufshpb_req *map_req;
 	struct bio *bio;

-	map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_SCSI_IN);
+	map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_SCSI_IN, false);
 	if (!map_req)
 		return NULL;

@@ -914,6 +915,7 @@ static int ufshpb_execute_umap_req(struct ufshpb_lu *hpb,

 	blk_execute_rq_nowait(NULL, req, 1, ufshpb_umap_req_compl_fn);

+	hpb->stats.umap_req_cnt++;
 	return 0;
 }

@@ -1091,12 +1093,13 @@ static void ufshpb_purge_active_subregion(struct ufshpb_lu *hpb,
 }

 static int ufshpb_issue_umap_req(struct ufshpb_lu *hpb,
-				 struct ufshpb_region *rgn)
+				 struct ufshpb_region *rgn,
+				 bool atomic)
 {
 	struct ufshpb_req *umap_req;
 	int rgn_idx = rgn ? rgn->rgn_idx : 0;
-	umap_req = ufshpb_get_req(hpb, rgn_idx, REQ_OP_SCSI_OUT);
+	umap_req = ufshpb_get_req(hpb, rgn_idx, REQ_OP_SCSI_OUT, atomic);
 	if (!umap_req)
 		return -ENOMEM;

@@ -1110,13 +1113,19 @@ static int ufshpb_issue_umap_req(struct ufshpb_lu *hpb,
 	return -EAGAIN;
 }

+static int ufshpb_issue_umap_single_req(struct ufshpb_lu *hpb,
+					struct ufshpb_region *rgn)
+{
+	return ufshpb_issue_umap_req(hpb, rgn, true);
+}
+
 static int ufshpb_issue_umap_all_req(struct ufshpb_lu *hpb)
 {
-	return ufshpb_issue_umap_req(hpb, NULL);
+	return ufshpb_issue_umap_req(hpb, NULL, false);
 }

 static void __ufshpb_evict_region(struct ufshpb_lu *hpb,
-				 struct ufshpb_region *rgn)
+				  struct ufshpb_region *rgn)
 {
 	struct victim_select_info *lru_info;
 	struct ufshpb_subregion *srgn;
@@ -1151,6 +1160,14 @@ static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
 			goto out;
 		}

+		if (hpb->is_hcm) {
+			spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+			ret = ufshpb_issue_umap_single_req(hpb, rgn);
+			spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+			if (ret)
+				goto out;
+		}
+
 		__ufshpb_evict_region(hpb, rgn);
 	}
 out:
@@ -1285,6 +1302,18 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
 				"LRU full (%d), choose victim %d\n",
 				atomic_read(&lru_info->active_cnt),
 				victim_rgn->rgn_idx);
+
+			if (hpb->is_hcm) {
+				spin_unlock_irqrestore(&hpb->rgn_state_lock,
+						       flags);
+				ret = ufshpb_issue_umap_single_req(hpb,
+								victim_rgn);
+				spin_lock_irqsave(&hpb->rgn_state_lock,
+						  flags);
+				if (ret)
+					goto out;
+			}
+
 			__ufshpb_evict_region(hpb, victim_rgn);
 		}

@@ -1853,6 +1882,7 @@ ufshpb_sysfs_attr_show_func(rb_noti_cnt);
 ufshpb_sysfs_attr_show_func(rb_active_cnt);
 ufshpb_sysfs_attr_show_func(rb_inactive_cnt);
 ufshpb_sysfs_attr_show_func(map_req_cnt);
+ufshpb_sysfs_attr_show_func(umap_req_cnt);

 static struct attribute *hpb_dev_stat_attrs[] = {
 	&dev_attr_hit_cnt.attr,
@@ -1861,6 +1891,7 @@ static struct attribute *hpb_dev_stat_attrs[] = {
 	&dev_attr_rb_active_cnt.attr,
 	&dev_attr_rb_inactive_cnt.attr,
 	&dev_attr_map_req_cnt.attr,
+	&dev_attr_umap_req_cnt.attr,
 	NULL,
 };

@@ -1985,6 +2016,7 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb)
 	hpb->stats.rb_active_cnt = 0;
 	hpb->stats.rb_inactive_cnt = 0;
 	hpb->stats.map_req_cnt = 0;
+	hpb->stats.umap_req_cnt = 0;
 }

 static void ufshpb_param_init(struct ufshpb_lu *hpb)

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 87495e59fcf1..1ea58c17a4de 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -191,6 +191,7 @@ struct ufshpb_stats {
 	u64 rb_inactive_cnt;
 	u64 map_req_cnt;
 	u64 pre_req_cnt;
+	u64 umap_req_cnt;
 };

 struct ufshpb_lu {
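The subtle point in the eviction hunks above is the lock juggling
around the unmap request. A condensed sketch of that ordering, using
the names from the patch (illustration only; error handling trimmed):

	/*
	 * Condensed from ufshpb_evict_region() - illustration only.
	 * rgn_state_lock cannot be held while a request is allocated
	 * and issued, so it is dropped around the single-region
	 * HPB-WRITE-BUFFER (buffer-id 0x1) and re-taken before the
	 * region is removed from the LRU.
	 */
	if (hpb->is_hcm) {
		spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
		ret = ufshpb_issue_umap_single_req(hpb, rgn);
		spin_lock_irqsave(&hpb->rgn_state_lock, flags);
		if (ret)
			goto out;	/* device not told - do not evict */
	}
	__ufshpb_evict_region(hpb, rgn);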
From patchwork Mon Jun 7 06:13:57 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 455631
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen",
    linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche, yongmyung lee, Daejun Park,
    alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang,
    Avi Shchislowski, Bean Huo, cang@codeaurora.org, stanley.chu@mediatek.com,
    Avri Altman
Subject: [PATCH v10 08/12] scsi: ufshpb: Add "Cold" regions timer
Date: Mon, 7 Jun 2021 09:13:57 +0300
Message-Id: <20210607061401.58884-9-avri.altman@wdc.com>
In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com>
References: <20210607061401.58884-1-avri.altman@wdc.com>

In order not to hang on to "cold" regions, inactivate a region that has
had no READ access for a predefined amount of time - READ_TO_MS. For
that purpose, monitor the active regions list, polling it every
POLLING_INTERVAL_MS.
On timeout expiry, add the region to the "to-be-inactivated" list,
unless it is clean and has not exhausted its READ_TO_EXPIRIES - another
parameter.

None of this applies to pinned regions.

Signed-off-by: Avri Altman
Reviewed-by: Daejun Park
---
 drivers/scsi/ufs/ufshpb.c | 74 +++++++++++++++++++++++++++++++++++++--
 drivers/scsi/ufs/ufshpb.h |  8 +++++
 2 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 1a29fe491c62..a31a9a6979de 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -18,6 +18,9 @@

 #define ACTIVATION_THRESHOLD 8 /* 8 IOs */
 #define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 5) /* 256 IOs */
+#define READ_TO_MS 1000
+#define READ_TO_EXPIRIES 100
+#define POLLING_INTERVAL_MS 200

 /* memory management */
 static struct kmem_cache *ufshpb_mctx_cache;
@@ -1032,12 +1035,63 @@ static int ufshpb_check_srgns_issue_state(struct ufshpb_lu *hpb,
 	return 0;
 }

+static void ufshpb_read_to_handler(struct work_struct *work)
+{
+	struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu,
+					     ufshpb_read_to_work.work);
+	struct victim_select_info *lru_info = &hpb->lru_info;
+	struct ufshpb_region *rgn, *next_rgn;
+	unsigned long flags;
+	LIST_HEAD(expired_list);
+
+	if (test_and_set_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits))
+		return;
+
+	spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+
+	list_for_each_entry_safe(rgn, next_rgn, &lru_info->lh_lru_rgn,
+				 list_lru_rgn) {
+		bool timedout = ktime_after(ktime_get(), rgn->read_timeout);
+
+		if (timedout) {
+			rgn->read_timeout_expiries--;
+			if (is_rgn_dirty(rgn) ||
+			    rgn->read_timeout_expiries == 0)
+				list_add(&rgn->list_expired_rgn, &expired_list);
+			else
+				rgn->read_timeout = ktime_add_ms(ktime_get(),
+								 READ_TO_MS);
+		}
+	}
+
+	spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+	list_for_each_entry_safe(rgn, next_rgn, &expired_list,
+				 list_expired_rgn) {
+		list_del_init(&rgn->list_expired_rgn);
+		spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+		ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
+		spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+	}
+
+	ufshpb_kick_map_work(hpb);
+
+	clear_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits);
+
+	schedule_delayed_work(&hpb->ufshpb_read_to_work,
+			      msecs_to_jiffies(POLLING_INTERVAL_MS));
+}
+
 static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
 				struct ufshpb_region *rgn)
 {
 	rgn->rgn_state = HPB_RGN_ACTIVE;
 	list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
 	atomic_inc(&lru_info->active_cnt);
+	if (rgn->hpb->is_hcm) {
+		rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS);
+		rgn->read_timeout_expiries = READ_TO_EXPIRIES;
+	}
 }

 static void ufshpb_hit_lru_info(struct victim_select_info *lru_info,
@@ -1824,6 +1878,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)

 		INIT_LIST_HEAD(&rgn->list_inact_rgn);
 		INIT_LIST_HEAD(&rgn->list_lru_rgn);
+		INIT_LIST_HEAD(&rgn->list_expired_rgn);

 		if (rgn_idx == hpb->rgns_per_lu - 1) {
 			srgn_cnt = ((hpb->srgns_per_lu - 1) %
@@ -1845,6 +1900,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 		}

 		rgn->rgn_flags = 0;
+		rgn->hpb = hpb;
 	}

 	return 0;
@@ -2066,9 +2122,12 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 	INIT_LIST_HEAD(&hpb->list_hpb_lu);

 	INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
-	if (hpb->is_hcm)
+	if (hpb->is_hcm) {
 		INIT_WORK(&hpb->ufshpb_normalization_work,
 			  ufshpb_normalization_work_handler);
+		INIT_DELAYED_WORK(&hpb->ufshpb_read_to_work,
+				  ufshpb_read_to_handler);
+	}

 	hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
 			  sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -2102,6 +2161,10 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 	ufshpb_stat_init(hpb);
 	ufshpb_param_init(hpb);

+	if (hpb->is_hcm)
+		schedule_delayed_work(&hpb->ufshpb_read_to_work,
+				      msecs_to_jiffies(POLLING_INTERVAL_MS));
+
 	return 0;

 release_pre_req_mempool:
@@ -2168,9 +2231,10 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)

 static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
 {
-	if (hpb->is_hcm)
+	if (hpb->is_hcm) {
+		cancel_delayed_work_sync(&hpb->ufshpb_read_to_work);
 		cancel_work_sync(&hpb->ufshpb_normalization_work);
-
+	}
 	cancel_work_sync(&hpb->map_work);
 }

@@ -2278,6 +2342,10 @@ void ufshpb_resume(struct ufs_hba *hba)
 			continue;
 		ufshpb_set_state(hpb, HPB_PRESENT);
 		ufshpb_kick_map_work(hpb);
+		if (hpb->is_hcm)
+			schedule_delayed_work(&hpb->ufshpb_read_to_work,
+				msecs_to_jiffies(POLLING_INTERVAL_MS));
+
 	}
 }

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index b863540e28d6..448062a94760 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -115,6 +115,7 @@ struct ufshpb_subregion {
 };

 struct ufshpb_region {
+	struct ufshpb_lu *hpb;
 	struct ufshpb_subregion *srgn_tbl;
 	enum HPB_RGN_STATE rgn_state;
 	int rgn_idx;
@@ -132,6 +133,10 @@ struct ufshpb_region {
 	/* region reads - for host mode */
 	spinlock_t rgn_lock;
 	unsigned int reads;
+	/* region "cold" timer - for host mode */
+	ktime_t read_timeout;
+	unsigned int read_timeout_expiries;
+	struct list_head list_expired_rgn;
 };

 #define for_each_sub_region(rgn, i, srgn)	\
@@ -223,6 +228,9 @@ struct ufshpb_lu {
 	/* for selecting victim */
 	struct victim_select_info lru_info;
 	struct work_struct ufshpb_normalization_work;
+	struct delayed_work ufshpb_read_to_work;
+	unsigned long work_data_bits;
+#define TIMEOUT_WORK_RUNNING 0

 	/* pinned region information */
 	u32 lu_pinned_start;
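As a quick sanity check of the expiry rule the handler implements,
here is a small user-space sketch with the ktime bookkeeping replaced
by plain integers. The values are hypothetical; only the
dirty-or-exhausted test mirrors the patch.

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * Toy model of the "cold" region rule: on timeout a clean
	 * region gets its timer re-armed, up to expiries_left times;
	 * a dirty or exhausted region goes onto the expired list.
	 */
	struct toy_rgn {
		bool dirty;
		unsigned int expiries_left;	/* READ_TO_EXPIRIES */
	};

	static bool should_expire(struct toy_rgn *rgn)
	{
		rgn->expiries_left--;
		if (rgn->dirty || rgn->expiries_left == 0)
			return true;	/* add to expired_list */
		/* else: re-arm read_timeout for another READ_TO_MS spin */
		return false;
	}

	int main(void)
	{
		struct toy_rgn clean = { .dirty = false, .expiries_left = 100 };
		struct toy_rgn dirty = { .dirty = true,  .expiries_left = 100 };

		/* prints "clean: 0, dirty: 1" */
		printf("clean: %d, dirty: %d\n",
		       should_expire(&clean), should_expire(&dirty));
		return 0;
	}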
From patchwork Mon Jun 7 06:13:59 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 455630
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen",
    linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche, yongmyung lee, Daejun Park,
    alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang,
    Avi Shchislowski, Bean Huo, cang@codeaurora.org, stanley.chu@mediatek.com,
    Avri Altman
Subject: [PATCH v10 10/12] scsi: ufshpb: Do not send umap_all in host control mode
Date: Mon, 7 Jun 2021 09:13:59 +0300
Message-Id: <20210607061401.58884-11-avri.altman@wdc.com>
In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com>
References: <20210607061401.58884-1-avri.altman@wdc.com>

HPB-WRITE-BUFFER with buffer-id = 0x3 (unmap all regions) is supported
in device control mode only.
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index bcfdf338244b..98c107ca4a4e 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -2461,7 +2461,8 @@ static void ufshpb_hpb_lu_prepared(struct ufs_hba *hba)
 			ufshpb_set_state(hpb, HPB_PRESENT);
 			if ((hpb->lu_pinned_end - hpb->lu_pinned_start) > 0)
 				queue_work(ufshpb_wq, &hpb->map_work);
-			ufshpb_issue_umap_all_req(hpb);
+			if (!hpb->is_hcm)
+				ufshpb_issue_umap_all_req(hpb);
 		} else {
 			dev_err(hba->dev, "destroy HPB lu %d\n", hpb->lun);
 			ufshpb_destroy_lu(hba, sdev);
From patchwork Mon Jun 7 06:14:01 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 455629
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen",
    linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche, yongmyung lee, Daejun Park,
    alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang,
    Avi Shchislowski, Bean Huo, cang@codeaurora.org, stanley.chu@mediatek.com,
    Avri Altman
Subject: [PATCH v10 12/12] scsi: ufshpb: Make host mode parameters configurable
Date: Mon, 7 Jun 2021 09:14:01 +0300
Message-Id: <20210607061401.58884-13-avri.altman@wdc.com>
In-Reply-To: <20210607061401.58884-1-avri.altman@wdc.com>
References: <20210607061401.58884-1-avri.altman@wdc.com>

We can make use of this commit to elaborate some more on the host
control mode logic, explaining what role each and every variable plays.

While at it, allow those parameters to be configurable.

Signed-off-by: Avri Altman
Reviewed-by: Daejun Park
---
 Documentation/ABI/testing/sysfs-driver-ufs |  76 +++++-
 drivers/scsi/ufs/ufshpb.c                  | 288 +++++++++++++++++++--
 drivers/scsi/ufs/ufshpb.h                  |  20 ++
 3 files changed, 367 insertions(+), 17 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs
index 2501bad03487..ef2399bfc2e2 100644
--- a/Documentation/ABI/testing/sysfs-driver-ufs
+++ b/Documentation/ABI/testing/sysfs-driver-ufs
@@ -1449,7 +1449,7 @@ Description:	This entry shows the maximum HPB data size for using single HPB

 		The file is read only.

-What:		/sys/bus/platform/drivers/ufshcd/*/flags/wb_enable
+What:		/sys/bus/platform/drivers/ufshcd/*/flags/hpb_enable
 Date:		June 2021
 Contact:	Daejun Park
 Description:	This entry shows the status of HPB.
@@ -1460,3 +1460,77 @@ Description:	This entry shows the status of HPB.
 		==  ============================

 		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/hpb_param_sysfs/activation_thld
+Date:		February 2021
+Contact:	Avri Altman
+Description:	In host control mode, reads are the major source of activation
+		trials. Once this threshold has been met, the region is added
+		to the "to-be-activated" list. Since we reset the read counter
+		upon write, this includes sending a rb command updating the
+		region ppn as well.
+
+What:		/sys/class/scsi_device/*/device/hpb_param_sysfs/normalization_factor
+Date:		February 2021
+Contact:	Avri Altman
+Description:	In host control mode, we think of the regions as "buckets".
+		Those buckets are being filled with reads, and emptied on
+		write. We use entries_per_srgn - the number of blocks in a
+		subregion - as our bucket size. This applies because HPB1.0
+		only concerns single-block reads. Once the bucket size is
+		crossed, we trigger a normalization work - not only to avoid
+		overflow, but mainly because we want to keep those counters
+		normalized, as we are using those reads as a comparative
+		score, to make various decisions.
+		The normalization is dividing (shift right) the read counter
+		by the normalization_factor. If during consecutive
+		normalizations an active region has exhausted its reads -
+		inactivate it.
+
+What:		/sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_enter
+Date:		February 2021
+Contact:	Avri Altman
+Description:	Region deactivation is often due to the fact that eviction
+		took place: a region becomes active at the expense of another.
+		This happens when the max-active-regions limit has been
+		crossed. In host mode, eviction is considered an extreme
+		measure. We want to verify that the entering region has
+		enough reads, and the exiting region has far fewer reads.
+		eviction_thld_enter is the minimum number of reads that a
+		region must have in order to be considered as a candidate
+		for evicting another region.
+
+What:		/sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_exit
+Date:		February 2021
+Contact:	Avri Altman
+Description:	Same as above for the exiting region. A region is considered
+		a candidate for eviction only if it has fewer reads than
+		eviction_thld_exit.
+
+What:		/sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_ms
+Date:		February 2021
+Contact:	Avri Altman
+Description:	In order not to hang on to "cold" regions, we inactivate a
+		region that has had no READ access for a predefined amount of
+		time - read_timeout_ms. If read_timeout_ms has expired and
+		the region is dirty, it is less likely that we can make any
+		use of HPB-READing it, so we inactivate it. Still,
+		deactivation has its overhead, and we may still benefit from
+		HPB-READing this region if it is clean - see
+		read_timeout_expiries.
+
+What:		/sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_expiries
+Date:		February 2021
+Contact:	Avri Altman
+Description:	If the region's read timeout has expired but the region is
+		clean, just re-wind its timer for another spin. Do that as
+		long as it is clean and has not exhausted its
+		read_timeout_expiries threshold.
+
+What:		/sys/class/scsi_device/*/device/hpb_param_sysfs/timeout_polling_interval_ms
+Date:		February 2021
+Contact:	Avri Altman
+Description:	The frequency with which the delayed worker that checks the
+		read_timeouts is awakened.
+
+What:		/sys/class/scsi_device/*/device/hpb_param_sysfs/inflight_map_req
+Date:		February 2021
+Contact:	Avri Altman
+Description:	In host control mode, the host is the originator of map
+		requests. To avoid flooding the device with map requests,
+		use a simple throttling mechanism that limits the number of
+		inflight map requests.
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 53f94ad5e7a3..1fe776470911 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -17,7 +17,6 @@
 #include "../sd.h"

 #define ACTIVATION_THRESHOLD 8 /* 8 IOs */
-#define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 5) /* 256 IOs */
 #define READ_TO_MS 1000
 #define READ_TO_EXPIRIES 100
 #define POLLING_INTERVAL_MS 200
@@ -194,7 +193,7 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx,
 	} else {
 		srgn->reads++;
 		rgn->reads++;
-		if (srgn->reads == ACTIVATION_THRESHOLD)
+		if (srgn->reads == hpb->params.activation_thld)
 			activate = true;
 	}
 	spin_unlock(&rgn->rgn_lock);
@@ -743,10 +742,11 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
 	struct bio *bio;

 	if (hpb->is_hcm &&
-	    hpb->num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT) {
+	    hpb->num_inflight_map_req >= hpb->params.inflight_map_req) {
 		dev_info(&hpb->sdev_ufs_lu->sdev_dev,
 			 "map_req throttle. inflight %d throttle %d",
-			 hpb->num_inflight_map_req, THROTTLE_MAP_REQ_DEFAULT);
+			 hpb->num_inflight_map_req,
+			 hpb->params.inflight_map_req);
 		return NULL;
 	}
@@ -1053,6 +1053,7 @@ static void ufshpb_read_to_handler(struct work_struct *work)
 	struct victim_select_info *lru_info = &hpb->lru_info;
 	struct ufshpb_region *rgn, *next_rgn;
 	unsigned long flags;
+	unsigned int poll;
 	LIST_HEAD(expired_list);

 	if (test_and_set_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits))
@@ -1071,7 +1072,7 @@ static void ufshpb_read_to_handler(struct work_struct *work)
 				list_add(&rgn->list_expired_rgn, &expired_list);
 			else
 				rgn->read_timeout = ktime_add_ms(ktime_get(),
-							READ_TO_MS);
+						hpb->params.read_timeout_ms);
 		}
 	}

@@ -1089,8 +1090,9 @@ static void ufshpb_read_to_handler(struct work_struct *work)

 	clear_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits);

+	poll = hpb->params.timeout_polling_interval_ms;
 	schedule_delayed_work(&hpb->ufshpb_read_to_work,
-			      msecs_to_jiffies(POLLING_INTERVAL_MS));
+			      msecs_to_jiffies(poll));
 }

 static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
@@ -1100,8 +1102,11 @@ static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
 	list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
 	atomic_inc(&lru_info->active_cnt);
 	if (rgn->hpb->is_hcm) {
-		rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS);
-		rgn->read_timeout_expiries = READ_TO_EXPIRIES;
+		rgn->read_timeout =
+			ktime_add_ms(ktime_get(),
+				     rgn->hpb->params.read_timeout_ms);
+		rgn->read_timeout_expiries =
+			rgn->hpb->params.read_timeout_expiries;
 	}
 }

@@ -1130,7 +1135,8 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
 		 * in host control mode, verify that the exiting region
 		 * has less reads
 		 */
-		if (hpb->is_hcm && rgn->reads > (EVICTION_THRESHOLD >> 1))
+		if (hpb->is_hcm &&
+		    rgn->reads > hpb->params.eviction_thld_exit)
 			continue;

 		victim_rgn = rgn;
@@ -1351,7 +1357,8 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
 			 * in host control mode, verify that the entering
 			 * region has enough reads
 			 */
-			if (hpb->is_hcm && rgn->reads < EVICTION_THRESHOLD) {
+			if (hpb->is_hcm &&
+			    rgn->reads < hpb->params.eviction_thld_enter) {
 				ret = -EACCES;
 				goto out;
 			}
@@ -1702,6 +1709,7 @@ static void ufshpb_normalization_work_handler(struct work_struct *work)
 	struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu,
 					     ufshpb_normalization_work);
 	int rgn_idx;
+	u8 factor = hpb->params.normalization_factor;

 	for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
 		struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx;
@@ -1712,7 +1720,7 @@ static void ufshpb_normalization_work_handler(struct work_struct *work)
 		for (srgn_idx = 0; srgn_idx < hpb->srgns_per_rgn; srgn_idx++) {
 			struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx;

-			srgn->reads >>= 1;
+			srgn->reads >>= factor;
 			rgn->reads += srgn->reads;
 		}
 		spin_unlock(&rgn->rgn_lock);
@@ -2033,8 +2041,247 @@ requeue_timeout_ms_store(struct device *dev, struct device_attribute *attr,
 }
 static DEVICE_ATTR_RW(requeue_timeout_ms);

+ufshpb_sysfs_param_show_func(activation_thld);
+static ssize_t
+activation_thld_store(struct device *dev, struct device_attribute *attr,
+		      const char *buf, size_t count)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+	int val;
+
+	if (!hpb)
+		return -ENODEV;
+
+	if (!hpb->is_hcm)
+		return -EOPNOTSUPP;
+
+	if (kstrtouint(buf, 0, &val))
+		return -EINVAL;
+
+	if (val <= 0)
+		return -EINVAL;
+
+	hpb->params.activation_thld = val;
+
+	return count;
+} +static DEVICE_ATTR_RW(activation_thld); + +ufshpb_sysfs_param_show_func(normalization_factor); +static ssize_t +normalization_factor_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0 || val > ilog2(hpb->entries_per_srgn)) + return -EINVAL; + + hpb->params.normalization_factor = val; + + return count; +} +static DEVICE_ATTR_RW(normalization_factor); + +ufshpb_sysfs_param_show_func(eviction_thld_enter); +static ssize_t +eviction_thld_enter_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= hpb->params.eviction_thld_exit) + return -EINVAL; + + hpb->params.eviction_thld_enter = val; + + return count; +} +static DEVICE_ATTR_RW(eviction_thld_enter); + +ufshpb_sysfs_param_show_func(eviction_thld_exit); +static ssize_t +eviction_thld_exit_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= hpb->params.activation_thld) + return -EINVAL; + + hpb->params.eviction_thld_exit = val; + + return count; +} +static DEVICE_ATTR_RW(eviction_thld_exit); + +ufshpb_sysfs_param_show_func(read_timeout_ms); +static ssize_t +read_timeout_ms_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + /* read_timeout >> timeout_polling_interval */ + if (val < hpb->params.timeout_polling_interval_ms * 2) + return -EINVAL; + + hpb->params.read_timeout_ms = val; + + return count; +} +static DEVICE_ATTR_RW(read_timeout_ms); + +ufshpb_sysfs_param_show_func(read_timeout_expiries); +static ssize_t +read_timeout_expiries_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0) + return -EINVAL; + + hpb->params.read_timeout_expiries = val; + + return count; +} +static DEVICE_ATTR_RW(read_timeout_expiries); + +ufshpb_sysfs_param_show_func(timeout_polling_interval_ms); +static ssize_t +timeout_polling_interval_ms_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + /* timeout_polling_interval << 
read_timeout */ + if (val <= 0 || val > hpb->params.read_timeout_ms / 2) + return -EINVAL; + + hpb->params.timeout_polling_interval_ms = val; + + return count; +} +static DEVICE_ATTR_RW(timeout_polling_interval_ms); + +ufshpb_sysfs_param_show_func(inflight_map_req); +static ssize_t inflight_map_req_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0 || val > hpb->sdev_ufs_lu->queue_depth - 1) + return -EINVAL; + + hpb->params.inflight_map_req = val; + + return count; +} +static DEVICE_ATTR_RW(inflight_map_req); + +static void ufshpb_hcm_param_init(struct ufshpb_lu *hpb) +{ + hpb->params.activation_thld = ACTIVATION_THRESHOLD; + hpb->params.normalization_factor = 1; + hpb->params.eviction_thld_enter = (ACTIVATION_THRESHOLD << 5); + hpb->params.eviction_thld_exit = (ACTIVATION_THRESHOLD << 4); + hpb->params.read_timeout_ms = READ_TO_MS; + hpb->params.read_timeout_expiries = READ_TO_EXPIRIES; + hpb->params.timeout_polling_interval_ms = POLLING_INTERVAL_MS; + hpb->params.inflight_map_req = THROTTLE_MAP_REQ_DEFAULT; +} + static struct attribute *hpb_dev_param_attrs[] = { &dev_attr_requeue_timeout_ms.attr, + &dev_attr_activation_thld.attr, + &dev_attr_normalization_factor.attr, + &dev_attr_eviction_thld_enter.attr, + &dev_attr_eviction_thld_exit.attr, + &dev_attr_read_timeout_ms.attr, + &dev_attr_read_timeout_expiries.attr, + &dev_attr_timeout_polling_interval_ms.attr, + &dev_attr_inflight_map_req.attr, NULL, }; @@ -2118,6 +2365,8 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb) static void ufshpb_param_init(struct ufshpb_lu *hpb) { hpb->params.requeue_timeout_ms = HPB_REQUEUE_TIME_MS; + if (hpb->is_hcm) + ufshpb_hcm_param_init(hpb); } static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) @@ -2172,9 +2421,13 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) ufshpb_stat_init(hpb); ufshpb_param_init(hpb); - if (hpb->is_hcm) + if (hpb->is_hcm) { + unsigned int poll; + + poll = hpb->params.timeout_polling_interval_ms; schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + msecs_to_jiffies(poll)); + } return 0; @@ -2353,10 +2606,13 @@ void ufshpb_resume(struct ufs_hba *hba) continue; ufshpb_set_state(hpb, HPB_PRESENT); ufshpb_kick_map_work(hpb); - if (hpb->is_hcm) - schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + if (hpb->is_hcm) { + unsigned int poll = + hpb->params.timeout_polling_interval_ms; + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(poll)); + } } } diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index cfa0abac21db..68a5af0ff682 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -185,8 +185,28 @@ struct victim_select_info { atomic_t active_cnt; }; +/** + * ufshpb_params - ufs hpb parameters + * @requeue_timeout_ms - requeue threshold of wb command (0x2) + * @activation_thld - min reads [IOs] to activate/update a region + * @normalization_factor - shift right the region's reads + * @eviction_thld_enter - min reads [IOs] for the entering region in eviction + * @eviction_thld_exit - max reads [IOs] for the exiting region in eviction + * @read_timeout_ms - timeout [ms] from the last read IO to the 
region + * @read_timeout_expiries - number of allowed timeout expiries + * @timeout_polling_interval_ms - interval at which timeouts are checked + * @inflight_map_req - number of inflight map requests + */ struct ufshpb_params { unsigned int requeue_timeout_ms; + unsigned int activation_thld; + unsigned int normalization_factor; + unsigned int eviction_thld_enter; + unsigned int eviction_thld_exit; + unsigned int read_timeout_ms; + unsigned int read_timeout_expiries; + unsigned int timeout_polling_interval_ms; + unsigned int inflight_map_req; }; struct ufshpb_stats {
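
A note on tuning: the new knobs are plain sysfs files, so they can be adjusted at runtime. Below is a minimal user-space sketch, illustrative only and not part of the patch; the 0:0:0:0 device address, the write_param() helper, and the example values are assumptions. It shows one write order that satisfies the cross-parameter checks enforced by the store handlers above (timeout_polling_interval_ms must not exceed read_timeout_ms / 2, eviction_thld_exit must exceed activation_thld, and eviction_thld_enter must exceed eviction_thld_exit):

/* Illustrative user-space helper, not part of this patch. */
#include <stdio.h>

/* hypothetical LU address - adjust to the actual scsi_device */
#define PARAM_DIR "/sys/class/scsi_device/0:0:0:0/device/hpb_param_sysfs"

static int write_param(const char *name, unsigned int val)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), PARAM_DIR "/%s", name);
	f = fopen(path, "w");
	if (!f)
		return -1;
	/* the kernel parses this with kstrtouint() and range-checks it */
	fprintf(f, "%u\n", val);
	return fclose(f);
}

int main(void)
{
	/* raise read_timeout_ms first so the polling-interval check passes */
	if (write_param("read_timeout_ms", 2000) ||
	    /* must be at most read_timeout_ms / 2 */
	    write_param("timeout_polling_interval_ms", 500) ||
	    /* must stay above activation_thld (default 8) */
	    write_param("eviction_thld_exit", 128) ||
	    /* must stay above eviction_thld_exit */
	    write_param("eviction_thld_enter", 256))
		perror("hpb_param_sysfs");
	return 0;
}

Writes that violate a constraint are rejected with -EINVAL, and all of these knobs return -EOPNOTSUPP unless the LU runs in host control mode, so a tuning script should check each write's result as above.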