From patchwork Tue Feb 2 08:30:00 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 374922
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K .
 Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v2 2/9] scsi: ufshpb: Add host control mode support to rsp_upiu
Date: Tue, 2 Feb 2021 10:30:00 +0200
Message-Id: <20210202083007.104050-3-avri.altman@wdc.com>
In-Reply-To: <20210202083007.104050-1-avri.altman@wdc.com>
References: <20210202083007.104050-1-avri.altman@wdc.com>
List-ID: X-Mailing-List: linux-scsi@vger.kernel.org

In device control mode, the device may recommend that the host activate or inactivate a region, and the host should follow. These are therefore not really recommendations, but instructions. In host control mode, by contrast, the recommendation protocol is slightly changed:
a) The device may only recommend that the host update a subregion of an already-active region, and
b) The device may *not* recommend inactivating a region.

Furthermore, in host control mode the host may choose not to follow any of the device's recommendations. However, in case of a recommendation to update an active and clean subregion, it is better to follow it, because otherwise the host has no other way to know that some internal relocation took place.
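The rules this patch enforces can be summarized as a small stand-alone model (a hedged sketch: the enum, struct, and helper names below are simplified illustrations, not the driver's actual `ufshpb_region` handling):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the driver's region state; illustrative only. */
enum rgn_state { RGN_INACTIVE, RGN_ACTIVE };

struct rgn {
	enum rgn_state state;
	bool dirty;
};

/*
 * In host control mode (is_hcm), an activation recommendation is honored
 * only for a region that is already active and clean; in device control
 * mode every recommendation is followed.
 */
static bool follow_activation(bool is_hcm, const struct rgn *rgn)
{
	if (!is_hcm)
		return true;
	return rgn->state == RGN_ACTIVE && !rgn->dirty;
}

/* In host control mode, inactivation recommendations are ignored entirely. */
static bool follow_inactivation(bool is_hcm)
{
	return !is_hcm;
}
```

Note that in device control mode both helpers degenerate to "always follow", which is exactly the pre-existing behavior this patch leaves untouched.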
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 35 +++++++++++++++++++++++++++++++++++
 drivers/scsi/ufs/ufshpb.h |  6 ++++++
 2 files changed, 41 insertions(+)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 46f6a7104e7e..61de80a778a7 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -138,6 +138,8 @@ static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
 	else
 		set_bit_len = cnt;
 
+	set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+
 	if (rgn->rgn_state != HPB_RGN_INACTIVE &&
 	    srgn->srgn_state == HPB_SRGN_VALID)
 		bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len);
@@ -199,6 +201,11 @@ static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
 	return false;
 }
 
+static inline bool is_rgn_dirty(struct ufshpb_region *rgn)
+{
+	return test_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+}
+
 static u64 ufshpb_get_ppn(struct ufshpb_lu *hpb,
 			  struct ufshpb_map_ctx *mctx, int pos, int *error)
 {
@@ -380,8 +387,12 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb,
 static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
 				     struct ufshpb_subregion *srgn)
 {
+	struct ufshpb_region *rgn;
+
 	WARN_ON(!srgn->mctx);
 	bitmap_zero(srgn->mctx->ppn_dirty, hpb->entries_per_srgn);
+	rgn = hpb->rgn_tbl + srgn->rgn_idx;
+	clear_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
 	return 0;
 }
 
@@ -814,17 +825,39 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
 	 */
 	spin_lock(&hpb->rsp_list_lock);
 	for (i = 0; i < rsp_field->active_rgn_cnt; i++) {
+		struct ufshpb_region *rgn;
+
 		rgn_idx = be16_to_cpu(rsp_field->hpb_active_field[i].active_rgn);
 		srgn_idx = be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn);
+		rgn = hpb->rgn_tbl + rgn_idx;
+		if (hpb->is_hcm &&
+		    (rgn->rgn_state != HPB_RGN_ACTIVE || is_rgn_dirty(rgn))) {
+			/*
+			 * in host control mode, subregion activation
+			 * recommendations are only allowed to active regions.
+			 * Also, ignore recommendations for dirty regions - the
+			 * host will make decisions concerning those by himself
+			 */
+			continue;
+		}
+
 		dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
 			"activate(%d) region %d - %d\n", i, rgn_idx, srgn_idx);
 		ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
 		hpb->stats.rb_active_cnt++;
 	}
 
+	if (hpb->is_hcm) {
+		/*
+		 * in host control mode the device is not allowed to inactivate
+		 * regions
+		 */
+		goto out_unlock;
+	}
+
 	for (i = 0; i < rsp_field->inactive_rgn_cnt; i++) {
 		rgn_idx = be16_to_cpu(rsp_field->hpb_inactive_field[i]);
 		dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
@@ -832,6 +865,8 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
 		ufshpb_update_inactive_info(hpb, rgn_idx);
 		hpb->stats.rb_inactive_cnt++;
 	}
+
+out_unlock:
 	spin_unlock(&hpb->rsp_list_lock);
 
 	dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "Noti: #ACT %u #INACT %u\n",
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index afeb6365daf8..5ec4023db74d 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -48,6 +48,11 @@ enum UFSHPB_MODE {
 	HPB_DEVICE_CONTROL,
 };
 
+enum HPB_RGN_FLAGS {
+	RGN_FLAG_UPDATE = 0,
+	RGN_FLAG_DIRTY,
+};
+
 enum UFSHPB_STATE {
 	HPB_PRESENT = 1,
 	HPB_SUSPEND,
@@ -109,6 +114,7 @@ struct ufshpb_region {
 
 	/* below information is used by lru */
 	struct list_head list_lru_rgn;
+	unsigned long rgn_flags;
 };
 
 #define for_each_sub_region(rgn, i, srgn) \

From patchwork Tue Feb 2 08:30:01 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 374921
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v2 3/9] scsi: ufshpb: Add region's reads counter
Date: Tue, 2 Feb 2021 10:30:01 +0200
Message-Id: <20210202083007.104050-4-avri.altman@wdc.com>
In-Reply-To: <20210202083007.104050-1-avri.altman@wdc.com>
References: <20210202083007.104050-1-avri.altman@wdc.com>
List-ID: X-Mailing-List: linux-scsi@vger.kernel.org

In host control mode, reads are the major source of activation trials. Keep track of those read counters, for both active and inactive regions. We reset the read counter upon write - we are only interested in "clean" reads. Less intuitive, however, is that we also reset it upon a region's deactivation.
Region deactivation is often due to eviction: a region becomes active at the expense of another. This happens when the max-active-regions limit has been crossed. If we don't reset the counter, we will trigger a lot of thrashing of the HPB database, since a few reads (or even one) to the deactivated region will trigger a re-activation trial.

Keep those counters normalized, as we are using those reads as a comparative score to make various decisions. If during consecutive normalizations an active region has exhausted its reads - inactivate it.

Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 109 ++++++++++++++++++++++++++++++++------
 drivers/scsi/ufs/ufshpb.h |   6 +++
 2 files changed, 100 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 61de80a778a7..de4866d42df0 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -16,6 +16,9 @@
 #include "ufshpb.h"
 #include "../sd.h"
 
+#define WORK_PENDING 0
+#define ACTIVATION_THRSHLD 4 /* 4 IOs */
+
 /* memory management */
 static struct kmem_cache *ufshpb_mctx_cache;
 static mempool_t *ufshpb_mctx_pool;
@@ -261,6 +264,21 @@ ufshpb_set_hpb_read_to_upiu(struct ufshpb_lu *hpb, struct ufshcd_lrb *lrbp,
 	lrbp->cmd->cmd_len = UFS_CDB_SIZE;
 }
 
+static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
+				      int srgn_idx)
+{
+	struct ufshpb_region *rgn;
+	struct ufshpb_subregion *srgn;
+
+	rgn = hpb->rgn_tbl + rgn_idx;
+	srgn = rgn->srgn_tbl + srgn_idx;
+
+	list_del_init(&rgn->list_inact_rgn);
+
+	if (list_empty(&srgn->list_act_srgn))
+		list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+}
+
 /*
  * This function will set up HPB read command using host-side L2P map data.
  * In HPB v1.0, maximum size of HPB read command is 4KB.
@@ -306,12 +324,45 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 		ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
 				     transfer_len);
 		spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+		if (hpb->is_hcm) {
+			spin_lock_irqsave(&rgn->rgn_lock, flags);
+			rgn->reads = 0;
+			spin_unlock_irqrestore(&rgn->rgn_lock, flags);
+		}
+
 		return;
 	}
 
 	if (!ufshpb_is_support_chunk(transfer_len))
 		return;
 
+	if (hpb->is_hcm) {
+		bool activate = false;
+		/*
+		 * in host control mode, reads are the main source for
+		 * activation trials.
+		 */
+		spin_lock_irqsave(&rgn->rgn_lock, flags);
+		rgn->reads++;
+		if (rgn->reads == ACTIVATION_THRSHLD)
+			activate = true;
+		spin_unlock_irqrestore(&rgn->rgn_lock, flags);
+		if (activate) {
+			spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+			ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
+			hpb->stats.rb_active_cnt++;
+			spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+			dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+				"activate region %d-%d\n", rgn_idx, srgn_idx);
+		}
+
+		/* keep those counters normalized */
+		if (rgn->reads > hpb->entries_per_srgn &&
+		    !test_and_set_bit(WORK_PENDING, &hpb->work_data_bits))
+			schedule_work(&hpb->ufshpb_normalization_work);
+	}
+
 	spin_lock_irqsave(&hpb->rgn_state_lock, flags);
 	if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
 				  transfer_len)) {
@@ -396,21 +447,6 @@ static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb,
 	return 0;
 }
 
-static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
-				      int srgn_idx)
-{
-	struct ufshpb_region *rgn;
-	struct ufshpb_subregion *srgn;
-
-	rgn = hpb->rgn_tbl + rgn_idx;
-	srgn = rgn->srgn_tbl + srgn_idx;
-
-	list_del_init(&rgn->list_inact_rgn);
-
-	if (list_empty(&srgn->list_act_srgn))
-		list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
-}
-
 static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)
 {
 	struct ufshpb_region *rgn;
@@ -646,6 +682,14 @@ static void __ufshpb_evict_region(struct ufshpb_lu *hpb,
 	ufshpb_cleanup_lru_info(lru_info, rgn);
 
+	if (hpb->is_hcm) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&rgn->rgn_lock, flags);
+		rgn->reads = 0;
+		spin_unlock_irqrestore(&rgn->rgn_lock, flags);
+	}
+
 	for_each_sub_region(rgn, srgn_idx, srgn)
 		ufshpb_purge_active_subregion(hpb, srgn);
 }
@@ -1044,6 +1088,36 @@ static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
 	spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
 }
 
+static void ufshpb_normalization_work_handler(struct work_struct *work)
+{
+	struct ufshpb_lu *hpb;
+	int rgn_idx;
+
+	hpb = container_of(work, struct ufshpb_lu, ufshpb_normalization_work);
+
+	for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+		struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx;
+
+		if (rgn->reads) {
+			unsigned long flags;
+
+			spin_lock_irqsave(&rgn->rgn_lock, flags);
+			rgn->reads = (rgn->reads >> 1);
+			spin_unlock_irqrestore(&rgn->rgn_lock, flags);
+		}
+
+		if (rgn->rgn_state != HPB_RGN_ACTIVE || rgn->reads)
+			continue;
+
+		/* if region is active but has no reads - inactivate it */
+		spin_lock(&hpb->rsp_list_lock);
+		ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
+		spin_unlock(&hpb->rsp_list_lock);
+	}
+
+	clear_bit(WORK_PENDING, &hpb->work_data_bits);
+}
+
 static void ufshpb_map_work_handler(struct work_struct *work)
 {
 	struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work);
@@ -1313,6 +1387,9 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 	INIT_LIST_HEAD(&hpb->list_hpb_lu);
 
 	INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
+	if (hpb->is_hcm)
+		INIT_WORK(&hpb->ufshpb_normalization_work,
+			  ufshpb_normalization_work_handler);
 
 	hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
 					  sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -1399,6 +1476,8 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
 
 static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
 {
+	if (hpb->is_hcm)
+		cancel_work_sync(&hpb->ufshpb_normalization_work);
 	cancel_work_sync(&hpb->map_work);
 }
 
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 5ec4023db74d..381b5fed61a5 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -115,6 +115,10 @@ struct ufshpb_region {
 	/* below information is used by lru */
 	struct list_head list_lru_rgn;
 	unsigned long rgn_flags;
+
+	/* region reads - for host mode */
+	spinlock_t rgn_lock;
+	unsigned int reads;
 };
 
 #define for_each_sub_region(rgn, i, srgn) \
@@ -175,6 +179,8 @@ struct ufshpb_lu {
 
 	/* for selecting victim */
 	struct victim_select_info lru_info;
+	struct work_struct ufshpb_normalization_work;
+	unsigned long work_data_bits;
 
 	/* pinned region information */
 	u32 lu_pinned_start;

From patchwork Tue Feb 2 08:30:04 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 374920
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v2 6/9] scsi: ufshpb: Add hpb dev reset response
Date: Tue, 2 Feb 2021 10:30:04 +0200
Message-Id: <20210202083007.104050-7-avri.altman@wdc.com>
In-Reply-To: <20210202083007.104050-1-avri.altman@wdc.com>
References: <20210202083007.104050-1-avri.altman@wdc.com>
List-ID: X-Mailing-List: linux-scsi@vger.kernel.org

The spec does not define the host's recommended response when the device sends an hpb dev reset response (oper 0x2). We will update all active hpb regions: mark them, and do the update on the next read.
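The intended mark-then-update-on-read flow can be sketched as follows (hedged: `rgn`, `dev_reset`, and `read_hit` are simplified illustrations of the RGN_FLAG_UPDATE mechanism, not the driver's code):

```c
#include <assert.h>
#include <stdbool.h>

#define RGN_FLAG_UPDATE_BIT (1UL << 0)	/* models RGN_FLAG_UPDATE */

/* Simplified region: a flags word plus a count of activation requests. */
struct rgn {
	unsigned long flags;
	int activations;
};

/* On HPB_RSP_DEV_RESET, mark every region for update. */
static void dev_reset(struct rgn *rgns, int n)
{
	for (int i = 0; i < n; i++)
		rgns[i].flags |= RGN_FLAG_UPDATE_BIT;
}

/*
 * On the next read to a marked region, clear the flag (a test_and_clear
 * in the driver) and re-send the region's activation info exactly once.
 */
static void read_hit(struct rgn *rgn)
{
	bool update = rgn->flags & RGN_FLAG_UPDATE_BIT;

	rgn->flags &= ~RGN_FLAG_UPDATE_BIT;
	if (update)
		rgn->activations++;
}
```

Deferring the update to the next read keeps the reset path cheap: regions that are never read again after the reset never pay for a re-activation.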
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 54 ++++++++++++++++++++++++++++++++++++---
 drivers/scsi/ufs/ufshpb.h |  1 +
 2 files changed, 52 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 49c74de539b7..28e0025507a1 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -17,6 +17,7 @@
 #include "../sd.h"
 
 #define WORK_PENDING 0
+#define RESET_PENDING 1
 #define ACTIVATION_THRSHLD 4 /* 4 IOs */
 #define EVICTION_THRSHLD (ACTIVATION_THRSHLD << 6) /* 256 IOs */
 
@@ -349,7 +350,8 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 		if (rgn->reads == ACTIVATION_THRSHLD)
 			activate = true;
 		spin_unlock_irqrestore(&rgn->rgn_lock, flags);
-		if (activate) {
+		if (activate ||
+		    test_and_clear_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags)) {
 			spin_lock_irqsave(&hpb->rsp_list_lock, flags);
 			ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
 			hpb->stats.rb_active_cnt++;
@@ -1068,6 +1070,24 @@ void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 	case HPB_RSP_DEV_RESET:
 		dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
 			 "UFS device lost HPB information during PM.\n");
+
+		if (hpb->is_hcm) {
+			struct ufshpb_lu *h;
+			struct scsi_device *sdev;
+
+			shost_for_each_device(sdev, hba->host) {
+				h = sdev->hostdata;
+				if (!h)
+					continue;
+
+				if (test_and_set_bit(RESET_PENDING,
+						     &h->work_data_bits))
+					continue;
+
+				schedule_work(&h->ufshpb_lun_reset_work);
+			}
+		}
+
 		break;
 	default:
 		dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
@@ -1200,6 +1220,27 @@ static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
 	spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
 }
 
+static void ufshpb_reset_work_handler(struct work_struct *work)
+{
+	struct ufshpb_lu *hpb;
+	struct victim_select_info *lru_info;
+	struct ufshpb_region *rgn;
+	unsigned long flags;
+
+	hpb = container_of(work, struct ufshpb_lu, ufshpb_lun_reset_work);
+
+	lru_info = &hpb->lru_info;
+
+	spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+
+	list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn)
+		set_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags);
+
+	spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+
+	clear_bit(RESET_PENDING, &hpb->work_data_bits);
+}
+
 static void ufshpb_normalization_work_handler(struct work_struct *work)
 {
 	struct ufshpb_lu *hpb;
@@ -1392,6 +1433,8 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 		} else {
 			rgn->rgn_state = HPB_RGN_INACTIVE;
 		}
+
+		rgn->rgn_flags = 0;
 	}
 
 	return 0;
@@ -1502,9 +1545,12 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 	INIT_LIST_HEAD(&hpb->list_hpb_lu);
 
 	INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
-	if (hpb->is_hcm)
+	if (hpb->is_hcm) {
 		INIT_WORK(&hpb->ufshpb_normalization_work,
 			  ufshpb_normalization_work_handler);
+		INIT_WORK(&hpb->ufshpb_lun_reset_work,
+			  ufshpb_reset_work_handler);
+	}
 
 	hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
 					  sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -1591,8 +1637,10 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
 
 static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
 {
-	if (hpb->is_hcm)
+	if (hpb->is_hcm) {
+		cancel_work_sync(&hpb->ufshpb_lun_reset_work);
 		cancel_work_sync(&hpb->ufshpb_normalization_work);
+	}
 	cancel_work_sync(&hpb->map_work);
 }
 
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 71b082ee7876..e55892ceb3fc 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -184,6 +184,7 @@ struct ufshpb_lu {
 	/* for selecting victim */
 	struct victim_select_info lru_info;
 	struct work_struct ufshpb_normalization_work;
+	struct work_struct ufshpb_lun_reset_work;
 	unsigned long work_data_bits;
 
 	/* pinned region information */

From patchwork Tue Feb 2 08:30:07 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 374919
From: Avri Altman
To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman
Subject: [PATCH v2 9/9] scsi: ufshpb: Make host mode parameters configurable
Date: Tue, 2 Feb 2021 10:30:07 +0200
Message-Id: <20210202083007.104050-10-avri.altman@wdc.com>
In-Reply-To: <20210202083007.104050-1-avri.altman@wdc.com>
References: <20210202083007.104050-1-avri.altman@wdc.com>
List-ID: X-Mailing-List: linux-scsi@vger.kernel.org

We can use this commit to elaborate some more of the host control mode logic, explaining the role each variable plays:

- activation_thld - In host control mode, reads are the major source of activation trials. Once this threshold is met, the region is added to the "to-be-activated" list. Since we reset the read counter upon write, this includes sending an rb command updating the region ppn as well.

- normalization_factor - We think of the regions as "buckets". Those buckets are filled with reads and emptied on write. We use entries_per_srgn - the number of blocks in a subregion - as our bucket size. This applies because HPB1.0 only concerns single-block reads. Once the bucket size is crossed, we trigger a normalization work - not only to avoid overflow, but mainly because we want to keep those counters normalized, as we are using those reads as a comparative score to make various decisions. The normalization divides (shifts right) the read counter by the normalization_factor. If during consecutive normalizations an active region has exhausted its reads - inactivate it.

- eviction_thld_enter - Region deactivation is often due to eviction: a region becomes active at the expense of another.
This happens when the max-active-regions limit has been crossed. In
host mode, eviction is considered an extreme measure. We want to verify
that the entering region has enough reads, and that the exiting region
has far fewer. eviction_thld_enter is the minimum number of reads a
region must have in order to be considered a candidate for evicting
another region.

- eviction_thld_exit - Same as above, for the exiting region. A region
  is considered a candidate for eviction only if it has fewer reads
  than eviction_thld_exit.

- read_timeout_ms - In order not to hang on to "cold" regions, we
  inactivate a region that has seen no READ access for a predefined
  amount of time - read_timeout_ms. If read_timeout_ms has expired and
  the region is dirty, it is less likely that we can make any use of
  HPB-READing it, so we inactivate it. Still, deactivation has its
  overhead, and we may still benefit from HPB-READing this region if it
  is clean - see read_timeout_expiries.

- read_timeout_expiries - If the region's read timeout has expired but
  the region is clean, just rewind its timer for another spin. Do that
  as long as it is clean and has not exhausted its
  read_timeout_expiries threshold.

- timeout_polling_interval_ms - The interval at which the delayed
  worker that checks the read timeouts is woken.
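To make the interplay of these variables concrete, here is a small
userspace model of one region's read counter and the decisions above.
It is illustrative only - the helper names and threshold values are
made up, and it deliberately ignores locking and the dirty/clean state:

```c
#include <stdbool.h>

/* Illustrative model only; field names mirror struct ufshpb_params. */
struct hpb_model_params {
	unsigned int activation_thld;
	unsigned int normalization_factor;
	unsigned int eviction_thld_enter;
	unsigned int eviction_thld_exit;
};

/* A read bumps the counter; hitting activation_thld adds the region
 * to the "to-be-activated" list (cf. ufshpb_prep()). */
static bool hpb_model_read(const struct hpb_model_params *p,
			   unsigned int *reads)
{
	return ++(*reads) == p->activation_thld;
}

/* Normalization work: shift the counter right by the factor; a region
 * that has exhausted its reads is inactivated. */
static bool hpb_model_normalize(const struct hpb_model_params *p,
				unsigned int *reads)
{
	*reads >>= p->normalization_factor;
	return *reads > 0;	/* false -> inactivate the region */
}

/* Eviction: the entering region needs at least eviction_thld_enter
 * reads, and only regions below eviction_thld_exit may be evicted. */
static bool hpb_model_may_evict(const struct hpb_model_params *p,
				unsigned int entering_reads,
				unsigned int exiting_reads)
{
	return entering_reads >= p->eviction_thld_enter &&
	       exiting_reads < p->eviction_thld_exit;
}
```

With activation_thld = 4 and normalization_factor = 1, for example, the
fourth read triggers an activation trial, and each normalization pass
halves the counter until the region's reads are exhausted.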
Signed-off-by: Avri Altman <avri.altman@wdc.com>
---
 drivers/scsi/ufs/ufshcd.c |   1 +
 drivers/scsi/ufs/ufshpb.c | 284 +++++++++++++++++++++++++++++++++++---
 drivers/scsi/ufs/ufshpb.h |  22 +++
 3 files changed, 290 insertions(+), 17 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 1b521b366067..8dac66783c46 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -8014,6 +8014,7 @@ static const struct attribute_group *ufshcd_driver_groups[] = {
 	&ufs_sysfs_lun_attributes_group,
 #ifdef CONFIG_SCSI_UFS_HPB
 	&ufs_sysfs_hpb_stat_group,
+	&ufs_sysfs_hpb_param_group,
 #endif
 	NULL,
 };
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index cec6f641a103..69a742acf0ee 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -351,7 +351,7 @@ void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 		 */
 		spin_lock_irqsave(&rgn->rgn_lock, flags);
 		rgn->reads++;
-		if (rgn->reads == ACTIVATION_THRSHLD)
+		if (rgn->reads == hpb->params.activation_thld)
 			activate = true;
 		spin_unlock_irqrestore(&rgn->rgn_lock, flags);
 		if (activate ||
@@ -687,6 +687,7 @@ static void ufshpb_read_to_handler(struct work_struct *work)
 	struct victim_select_info *lru_info;
 	struct ufshpb_region *rgn;
 	unsigned long flags;
+	unsigned int poll;
 	LIST_HEAD(expired_list);
 
 	hpb = container_of(dwork, struct ufshpb_lu, ufshpb_read_to_work);
@@ -713,8 +714,9 @@ static void ufshpb_read_to_handler(struct work_struct *work)
 		if (dirty || expired)
 			list_add(&rgn->list_expired_rgn, &expired_list);
 		else
-			rgn->read_timeout = ktime_add_ms(ktime_get(),
-							 READ_TO_MS);
+			rgn->read_timeout =
+				ktime_add_ms(ktime_get(),
+					     hpb->params.read_timeout_ms);
 	}
 
 	spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
@@ -729,8 +731,9 @@ static void ufshpb_read_to_handler(struct work_struct *work)
 
 	clear_bit(TIMEOUT_WORK_PENDING, &hpb->work_data_bits);
 
+	poll = hpb->params.timeout_polling_interval_ms;
 	schedule_delayed_work(&hpb->ufshpb_read_to_work,
-			      msecs_to_jiffies(POLLING_INTERVAL_MS));
+			      msecs_to_jiffies(poll));
 }
 
 static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
@@ -740,8 +743,11 @@ static void ufshpb_add_lru_info(struct victim_select_info *lru_info,
 	list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn);
 	atomic_inc(&lru_info->active_cnt);
 	if (rgn->hpb->is_hcm) {
-		rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS);
-		rgn->read_timeout_expiries = READ_TO_EXPIRIES;
+		rgn->read_timeout =
+			ktime_add_ms(ktime_get(),
+				     rgn->hpb->params.read_timeout_ms);
+		rgn->read_timeout_expiries =
+			rgn->hpb->params.read_timeout_expiries;
 	}
 }
 
@@ -765,7 +771,8 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
 		 * in host control mode, verify that the exiting region
 		 * has less reads
 		 */
-		if (hpb->is_hcm && rgn->reads > (EVICTION_THRSHLD >> 1))
+		if (hpb->is_hcm &&
+		    rgn->reads > hpb->params.eviction_thld_exit)
 			continue;
 
 		victim_rgn = rgn;
@@ -979,7 +986,8 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
 			 * in host control mode, verify that the entering
 			 * region has enough reads
 			 */
-			if (hpb->is_hcm && rgn->reads < EVICTION_THRSHLD) {
+			if (hpb->is_hcm &&
+			    rgn->reads < hpb->params.eviction_thld_enter) {
 				ret = -EACCES;
 				goto out;
 			}
@@ -1306,8 +1314,10 @@ static void ufshpb_normalization_work_handler(struct work_struct *work)
 {
 	struct ufshpb_lu *hpb;
 	int rgn_idx;
+	u8 factor;
 
 	hpb = container_of(work, struct ufshpb_lu, ufshpb_normalization_work);
+	factor = hpb->params.normalization_factor;
 
 	for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
 		struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx;
@@ -1316,7 +1326,7 @@ static void ufshpb_normalization_work_handler(struct work_struct *work)
 		unsigned long flags;
 
 		spin_lock_irqsave(&rgn->rgn_lock, flags);
-		rgn->reads = (rgn->reads >> 1);
+		rgn->reads = (rgn->reads >> factor);
 		spin_unlock_irqrestore(&rgn->rgn_lock, flags);
 	}
 
@@ -1546,6 +1556,238 @@ static void ufshpb_destroy_region_tbl(struct ufshpb_lu *hpb)
 }
 
 /* SYSFS functions */
+#define ufshpb_sysfs_param_show_func(__name)				\
+static ssize_t __name##_show(struct device *dev,			\
+	struct device_attribute *attr, char *buf)			\
+{									\
+	struct scsi_device *sdev = to_scsi_device(dev);			\
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);		\
+	if (!hpb)							\
+		return -ENODEV;						\
+	if (!hpb->is_hcm)						\
+		return -EOPNOTSUPP;					\
+									\
+	return sysfs_emit(buf, "%d\n", hpb->params.__name);		\
+}
+
+ufshpb_sysfs_param_show_func(activation_thld);
+static ssize_t
+activation_thld_store(struct device *dev, struct device_attribute *attr,
+		      const char *buf, size_t count)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+	int val;
+
+	if (!hpb)
+		return -ENODEV;
+
+	if (!hpb->is_hcm)
+		return -EOPNOTSUPP;
+
+	if (kstrtouint(buf, 0, &val))
+		return -EINVAL;
+
+	if (val <= 0)
+		return -EINVAL;
+
+	hpb->params.activation_thld = val;
+
+	return count;
+}
+static DEVICE_ATTR_RW(activation_thld);
+
+ufshpb_sysfs_param_show_func(normalization_factor);
+static ssize_t
+normalization_factor_store(struct device *dev, struct device_attribute *attr,
+			   const char *buf, size_t count)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+	int val;
+
+	if (!hpb)
+		return -ENODEV;
+
+	if (!hpb->is_hcm)
+		return -EOPNOTSUPP;
+
+	if (kstrtouint(buf, 0, &val))
+		return -EINVAL;
+
+	if (val <= 0 || val > ilog2(hpb->entries_per_srgn))
+		return -EINVAL;
+
+	hpb->params.normalization_factor = val;
+
+	return count;
+}
+static DEVICE_ATTR_RW(normalization_factor);
+
+ufshpb_sysfs_param_show_func(eviction_thld_enter);
+static ssize_t
+eviction_thld_enter_store(struct device *dev, struct device_attribute *attr,
+			  const char *buf, size_t count)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+	int val;
+
+	if (!hpb)
+		return -ENODEV;
+
+	if (!hpb->is_hcm)
+		return -EOPNOTSUPP;
+
+	if (kstrtouint(buf, 0, &val))
+		return -EINVAL;
+
+	if (val <= hpb->params.eviction_thld_exit)
+		return -EINVAL;
+
+	hpb->params.eviction_thld_enter = val;
+
+	return count;
+}
+static DEVICE_ATTR_RW(eviction_thld_enter);
+
+ufshpb_sysfs_param_show_func(eviction_thld_exit);
+static ssize_t
+eviction_thld_exit_store(struct device *dev, struct device_attribute *attr,
+			 const char *buf, size_t count)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+	int val;
+
+	if (!hpb)
+		return -ENODEV;
+
+	if (!hpb->is_hcm)
+		return -EOPNOTSUPP;
+
+	if (kstrtouint(buf, 0, &val))
+		return -EINVAL;
+
+	if (val <= hpb->params.activation_thld)
+		return -EINVAL;
+
+	hpb->params.eviction_thld_exit = val;
+
+	return count;
+}
+static DEVICE_ATTR_RW(eviction_thld_exit);
+
+ufshpb_sysfs_param_show_func(read_timeout_ms);
+static ssize_t
+read_timeout_ms_store(struct device *dev, struct device_attribute *attr,
+		      const char *buf, size_t count)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+	int val;
+
+	if (!hpb)
+		return -ENODEV;
+
+	if (!hpb->is_hcm)
+		return -EOPNOTSUPP;
+
+	if (kstrtouint(buf, 0, &val))
+		return -EINVAL;
+
+	if (val <= 0)
+		return -EINVAL;
+
+	hpb->params.read_timeout_ms = val;
+
+	return count;
+}
+static DEVICE_ATTR_RW(read_timeout_ms);
+
+ufshpb_sysfs_param_show_func(read_timeout_expiries);
+static ssize_t
+read_timeout_expiries_store(struct device *dev, struct device_attribute *attr,
+			    const char *buf, size_t count)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+	int val;
+
+	if (!hpb)
+		return -ENODEV;
+
+	if (!hpb->is_hcm)
+		return -EOPNOTSUPP;
+
+	if (kstrtouint(buf, 0, &val))
+		return -EINVAL;
+
+	if (val <= 0)
+		return -EINVAL;
+
+	hpb->params.read_timeout_expiries = val;
+
+	return count;
+}
+static DEVICE_ATTR_RW(read_timeout_expiries);
+
+ufshpb_sysfs_param_show_func(timeout_polling_interval_ms);
+static ssize_t
+timeout_polling_interval_ms_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *buf, size_t count)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
+	int val;
+
+	if (!hpb)
+		return -ENODEV;
+
+	if (!hpb->is_hcm)
+		return -EOPNOTSUPP;
+
+	if (kstrtouint(buf, 0, &val))
+		return -EINVAL;
+
+	if (val <= 0)
+		return -EINVAL;
+
+	hpb->params.timeout_polling_interval_ms = val;
+
+	return count;
+}
+static DEVICE_ATTR_RW(timeout_polling_interval_ms);
+
+static struct attribute *hpb_dev_param_attrs[] = {
+	&dev_attr_activation_thld.attr,
+	&dev_attr_normalization_factor.attr,
+	&dev_attr_eviction_thld_enter.attr,
+	&dev_attr_eviction_thld_exit.attr,
+	&dev_attr_read_timeout_ms.attr,
+	&dev_attr_read_timeout_expiries.attr,
+	&dev_attr_timeout_polling_interval_ms.attr,
+	NULL,
+};
+
+struct attribute_group ufs_sysfs_hpb_param_group = {
+	.name = "hpb_param_sysfs",
+	.attrs = hpb_dev_param_attrs,
+};
+
+static void ufshpb_param_init(struct ufshpb_lu *hpb)
+{
+	hpb->params.activation_thld = ACTIVATION_THRSHLD;
+	hpb->params.normalization_factor = 1;
+	hpb->params.eviction_thld_enter = (ACTIVATION_THRSHLD << 6);
+	hpb->params.eviction_thld_exit = (ACTIVATION_THRSHLD << 5);
+	hpb->params.read_timeout_ms = READ_TO_MS;
+	hpb->params.read_timeout_expiries = READ_TO_EXPIRIES;
+	hpb->params.timeout_polling_interval_ms = POLLING_INTERVAL_MS;
+}
+
 #define ufshpb_sysfs_attr_show_func(__name)				\
 static ssize_t __name##_show(struct device *dev,			\
 	struct device_attribute *attr, char *buf)			\
@@ -1568,7 +1810,7 @@ ufshpb_sysfs_attr_show_func(rb_inactive_cnt);
 ufshpb_sysfs_attr_show_func(map_req_cnt);
 ufshpb_sysfs_attr_show_func(umap_req_cnt);
 
-static struct attribute *hpb_dev_attrs[] = {
+static struct attribute *hpb_dev_stat_attrs[] = {
 	&dev_attr_hit_cnt.attr,
 	&dev_attr_miss_cnt.attr,
 	&dev_attr_rb_noti_cnt.attr,
@@ -1580,8 +1822,8 @@ static struct attribute *hpb_dev_attrs[] = {
 };
 
 struct attribute_group ufs_sysfs_hpb_stat_group = {
-	.name = "hpb_sysfs",
-	.attrs = hpb_dev_attrs,
+	.name = "hpb_stat_sysfs",
+	.attrs = hpb_dev_stat_attrs,
 };
 
 static void ufshpb_stat_init(struct ufshpb_lu *hpb)
@@ -1641,9 +1883,14 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
 
 	ufshpb_stat_init(hpb);
 
-	if (hpb->is_hcm)
+	if (hpb->is_hcm) {
+		unsigned int poll;
+
+		ufshpb_param_init(hpb);
+		poll = hpb->params.timeout_polling_interval_ms;
 		schedule_delayed_work(&hpb->ufshpb_read_to_work,
-				      msecs_to_jiffies(POLLING_INTERVAL_MS));
+				      msecs_to_jiffies(poll));
+	}
 
 	return 0;
 
@@ -1818,10 +2065,13 @@ void ufshpb_resume(struct ufs_hba *hba)
 			continue;
 		ufshpb_set_state(hpb, HPB_PRESENT);
 		ufshpb_kick_map_work(hpb);
-		if (hpb->is_hcm)
-			schedule_delayed_work(&hpb->ufshpb_read_to_work,
-				msecs_to_jiffies(POLLING_INTERVAL_MS));
+		if (hpb->is_hcm) {
+			unsigned int poll =
+				hpb->params.timeout_polling_interval_ms;
+			schedule_delayed_work(&hpb->ufshpb_read_to_work,
+				msecs_to_jiffies(poll));
+		}
 	}
 }
 
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 207925cf1f44..fafc64943c53 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -160,6 +160,26 @@ struct victim_select_info {
 	atomic_t active_cnt;
 };
 
+/**
+ * ufshpb_params - parameters for host control logic
+ * @activation_thld - min reads [IOs] to activate/update a region
+ * @normalization_factor - shift right the region's reads
+ * @eviction_thld_enter - min reads [IOs] for the entering region in eviction
+ * @eviction_thld_exit - max reads [IOs] for the exiting region in eviction
+ * @read_timeout_ms - timeout [ms] from the last read IO to the region
+ * @read_timeout_expiries - amount of allowable timeout expiries
+ * @timeout_polling_interval_ms - frequency in which timeouts are checked
+ */
+struct ufshpb_params {
+	unsigned int activation_thld;
+	unsigned int normalization_factor;
+	unsigned int eviction_thld_enter;
+	unsigned int eviction_thld_exit;
+	unsigned int read_timeout_ms;
+	unsigned int read_timeout_expiries;
+	unsigned int timeout_polling_interval_ms;
+};
+
 struct ufshpb_stats {
 	u64 hit_cnt;
 	u64 miss_cnt;
@@ -212,6 +232,7 @@ struct ufshpb_lu {
 	bool is_hcm;
 
 	struct ufshpb_stats stats;
+	struct ufshpb_params params;
 
 	struct kmem_cache *map_req_cache;
 	struct kmem_cache *m_page_cache;
@@ -251,6 +272,7 @@ bool ufshpb_is_allowed(struct ufs_hba *hba);
 void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf);
 void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf);
 extern struct attribute_group ufs_sysfs_hpb_stat_group;
+extern struct attribute_group ufs_sysfs_hpb_param_group;
 
 #endif
 #endif /* End of Header */
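For reference, the new attributes can be exercised from userspace with
plain file I/O. The sketch below is illustrative only: the helper names
are made up, and while the attribute directory is named after the group
("hpb_param_sysfs"), its full path under the corresponding scsi_device
sysfs node is system-dependent:

```c
#include <stdio.h>

/* Write one parameter as the store() handlers expect it: a decimal
 * string. 'dir' is the hpb_param_sysfs directory for the LUN, e.g.
 * (hypothetically) /sys/class/scsi_device/0:0:0:0/device/hpb_param_sysfs */
static int hpb_write_param(const char *dir, const char *name,
			   unsigned int val)
{
	char path[512];
	FILE *f;

	if (snprintf(path, sizeof(path), "%s/%s", dir, name) >=
	    (int)sizeof(path))
		return -1;
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%u\n", val);
	return fclose(f);
}

/* Read a parameter back; show() emits a single decimal value. */
static int hpb_read_param(const char *dir, const char *name,
			  unsigned int *val)
{
	char path[512];
	FILE *f;
	int ok;

	if (snprintf(path, sizeof(path), "%s/%s", dir, name) >=
	    (int)sizeof(path))
		return -1;
	f = fopen(path, "r");
	if (!f)
		return -1;
	ok = (fscanf(f, "%u", val) == 1) ? 0 : -1;
	fclose(f);
	return ok;
}
```

Note that the kernel-side store() handlers reject out-of-range values
(e.g. eviction_thld_enter must stay above eviction_thld_exit), so a
tuning tool should check the return values of these writes.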