From patchwork Wed Sep 23 06:45:58 2020
X-Patchwork-Submitter: Daejun Park
X-Patchwork-Id: 296681
Subject: [PATCH v12 4/4] scsi: ufs: Prepare HPB read for cached sub-region
From: Daejun Park
Reply-To: daejun7.park@samsung.com
To: Daejun Park, "avri.altman@wdc.com", "jejb@linux.ibm.com",
 "martin.petersen@oracle.com", "asutoshd@codeaurora.org", "beanhuo@micron.com",
 "stanley.chu@mediatek.com", "cang@codeaurora.org", "bvanassche@acm.org",
 "tomas.winkler@intel.com", ALIM AKHTAR
CC: "linux-scsi@vger.kernel.org", "linux-kernel@vger.kernel.org",
 Sang-yoon Oh, Sung-Jun Park, yongmyung lee, Jinyoung CHOI, Adel Choi,
 BoRam Shin, SEUNGUK SHIN
Date: Wed, 23 Sep 2020 15:45:58 +0900
Message-ID: <1239183618.61600843683026.JavaMail.epsvc@epcpadp2>
In-Reply-To: <231786897.01600843382167.JavaMail.epsvc@epcpadp2>
References: <231786897.01600843382167.JavaMail.epsvc@epcpadp2>

This patch changes a read I/O into an HPB read I/O. If the logical address
of the read I/O belongs to an active sub-region, the HPB driver converts
the read command into an HPB READ command. It does so by modifying the UFS
UPIU command rather than the existing SCSI command. In HPB version 1.0, the
maximum read I/O size that can be converted to HPB READ is 4KB.

The dirty map of the active sub-region prevents an incorrect HPB READ that
would use a stale physical page number, i.e. one invalidated by a previous
write I/O.

Acked-by: Avri Altman
Reviewed-by: Bart Van Assche
Tested-by: Bean Huo
Signed-off-by: Daejun Park
---
 drivers/scsi/ufs/ufshpb.c | 231 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 231 insertions(+)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 0dd185758dde..03485853a45b 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -31,6 +31,29 @@ bool ufshpb_is_allowed(struct ufs_hba *hba)
 	return !(hba->ufshpb_dev.hpb_disabled);
 }
 
+static int ufshpb_is_valid_srgn(struct ufshpb_region *rgn,
+				struct ufshpb_subregion *srgn)
+{
+	return rgn->rgn_state != HPB_RGN_INACTIVE &&
+		srgn->srgn_state == HPB_SRGN_VALID;
+}
+
+static bool ufshpb_is_read_cmd(struct scsi_cmnd *cmd)
+{
+	return req_op(cmd->request) == REQ_OP_READ;
+}
+
+static bool ufshpb_is_write_discard_cmd(struct scsi_cmnd *cmd)
+{
+	return op_is_write(req_op(cmd->request)) ||
+	       op_is_discard(req_op(cmd->request));
+}
+
+static bool ufshpb_is_support_chunk(int transfer_len)
+{
+	return transfer_len <= HPB_MULTI_CHUNK_HIGH;
+}
+
 static bool ufshpb_is_general_lun(int lun)
 {
 	return lun < UFS_UPIU_MAX_UNIT_NUM_ID;
@@ -97,8 +120,216 @@ static void ufshpb_set_state(struct ufshpb_lu *hpb, int state)
 	atomic_set(&hpb->hpb_state, state);
 }
 
+static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
+				 int srgn_idx, int srgn_offset, int cnt)
+{
+	struct ufshpb_region *rgn;
+	struct ufshpb_subregion *srgn;
+	int set_bit_len;
+	int bitmap_len = hpb->entries_per_srgn;
+
+next_srgn:
+	rgn = hpb->rgn_tbl + rgn_idx;
+	srgn = rgn->srgn_tbl + srgn_idx;
+
+	if ((srgn_offset + cnt) > bitmap_len)
+		set_bit_len = bitmap_len - srgn_offset;
+	else
+		set_bit_len = cnt;
+
+	if (rgn->rgn_state != HPB_RGN_INACTIVE &&
+	    srgn->srgn_state == HPB_SRGN_VALID)
+		bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len);
+
+	srgn_offset = 0;
+	if (++srgn_idx == hpb->srgns_per_rgn) {
+		srgn_idx = 0;
+		rgn_idx++;
+	}
+
+	cnt -= set_bit_len;
+	if (cnt > 0)
+		goto next_srgn;
+
+	WARN_ON(cnt < 0);
+}
+
+static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
+				  int srgn_idx, int srgn_offset, int cnt)
+{
+	struct ufshpb_region *rgn;
+	struct ufshpb_subregion *srgn;
+	int bitmap_len = hpb->entries_per_srgn;
+	int bit_len;
+
+next_srgn:
+	rgn = hpb->rgn_tbl + rgn_idx;
+	srgn = rgn->srgn_tbl + srgn_idx;
+
+	if (!ufshpb_is_valid_srgn(rgn, srgn))
+		return true;
+
+	/*
+	 * If the region state is active, mctx must be allocated.
+	 * In this case, check whether the region has been evicted or
+	 * the mctx allocation failed.
+	 */
+	WARN_ON(!srgn->mctx);
+
+	if ((srgn_offset + cnt) > bitmap_len)
+		bit_len = bitmap_len - srgn_offset;
+	else
+		bit_len = cnt;
+
+	if (find_next_bit(srgn->mctx->ppn_dirty, srgn_offset + bit_len,
+			  srgn_offset) < srgn_offset + bit_len)
+		return true;
+
+	srgn_offset = 0;
+	if (++srgn_idx == hpb->srgns_per_rgn) {
+		srgn_idx = 0;
+		rgn_idx++;
+	}
+
+	cnt -= bit_len;
+	if (cnt > 0)
+		goto next_srgn;
+
+	return false;
+}
+
+static u64 ufshpb_get_ppn(struct ufshpb_lu *hpb,
+			  struct ufshpb_map_ctx *mctx, int pos, int *error)
+{
+	u64 *ppn_table;
+	struct page *page;
+	int index, offset;
+
+	index = pos / (PAGE_SIZE / HPB_ENTRY_SIZE);
+	offset = pos % (PAGE_SIZE / HPB_ENTRY_SIZE);
+
+	page = mctx->m_page[index];
+	if (unlikely(!page)) {
+		*error = -ENOMEM;
+		dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+			"error. cannot find page in mctx\n");
+		return 0;
+	}
+
+	ppn_table = page_address(page);
+	if (unlikely(!ppn_table)) {
+		*error = -ENOMEM;
+		dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+			"error. cannot get ppn_table\n");
+		return 0;
+	}
+
+	return ppn_table[offset];
+}
+
+static void
+ufshpb_get_pos_from_lpn(struct ufshpb_lu *hpb, unsigned long lpn, int *rgn_idx,
+			int *srgn_idx, int *offset)
+{
+	int rgn_offset;
+
+	*rgn_idx = lpn >> hpb->entries_per_rgn_shift;
+	rgn_offset = lpn & hpb->entries_per_rgn_mask;
+	*srgn_idx = rgn_offset >> hpb->entries_per_srgn_shift;
+	*offset = rgn_offset & hpb->entries_per_srgn_mask;
+}
+
+static void
+ufshpb_set_hpb_read_to_upiu(struct ufshpb_lu *hpb, struct ufshcd_lrb *lrbp,
+			    u32 lpn, u64 ppn, unsigned int transfer_len)
+{
+	unsigned char *cdb = lrbp->ucd_req_ptr->sc.cdb;
+
+	cdb[0] = UFSHPB_READ;
+
+	put_unaligned_be64(ppn, &cdb[6]);
+	cdb[14] = transfer_len;
+}
+
+/*
+ * This function sets up an HPB READ command using host-side L2P map data.
+ * In HPB v1.0, the maximum size of an HPB READ command is 4KB.
+ */
 void ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 {
+	struct ufshpb_lu *hpb;
+	struct ufshpb_region *rgn;
+	struct ufshpb_subregion *srgn;
+	struct scsi_cmnd *cmd = lrbp->cmd;
+	u32 lpn;
+	u64 ppn;
+	unsigned long flags;
+	int transfer_len, rgn_idx, srgn_idx, srgn_offset;
+	int err = 0;
+
+	hpb = ufshpb_get_hpb_data(cmd);
+	if (!hpb)
+		return;
+
+	if (ufshpb_get_state(hpb) != HPB_PRESENT) {
+		dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
+			   "%s: ufshpb state is not PRESENT", __func__);
+		return;
+	}
+
+	WARN_ON(hpb->lun != cmd->device->lun);
+	if (!ufshpb_is_write_discard_cmd(cmd) &&
+	    !ufshpb_is_read_cmd(cmd))
+		return;
+
+	transfer_len = sectors_to_logical(cmd->device, blk_rq_sectors(cmd->request));
+	if (unlikely(!transfer_len))
+		return;
+
+	lpn = sectors_to_logical(cmd->device, blk_rq_pos(cmd->request));
+	ufshpb_get_pos_from_lpn(hpb, lpn, &rgn_idx, &srgn_idx, &srgn_offset);
+	rgn = hpb->rgn_tbl + rgn_idx;
+	srgn = rgn->srgn_tbl + srgn_idx;
+
+	/* If the command type is WRITE or DISCARD, mark the bitmap as dirty */
+	if (ufshpb_is_write_discard_cmd(cmd)) {
+		spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+		ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
+				     transfer_len);
+		spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+		return;
+	}
+
+	WARN_ON(!ufshpb_is_read_cmd(cmd));
+
+	if (!ufshpb_is_support_chunk(transfer_len))
+		return;
+
+	spin_lock_irqsave(&hpb->hpb_state_lock, flags);
+	if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
+				  transfer_len)) {
+		atomic_inc(&hpb->stats.miss_cnt);
+		spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+		return;
+	}
+
+	ppn = ufshpb_get_ppn(hpb, srgn->mctx, srgn_offset, &err);
+	spin_unlock_irqrestore(&hpb->hpb_state_lock, flags);
+	if (unlikely(err)) {
+		/*
+		 * In this case, the region state is active,
+		 * but the ppn table is not allocated.
+		 * The ppn table must always be allocated while
+		 * the region is in the active state.
+		 */
+		WARN_ON(true);
+		dev_err(hba->dev, "ufshpb_get_ppn failed. err %d\n", err);
+		return;
+	}
+
+	ufshpb_set_hpb_read_to_upiu(hpb, lrbp, lpn, ppn, transfer_len);
+
+	atomic_inc(&hpb->stats.hit_cnt);
 }
 
 static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
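[Illustration, not part of the patch] For reviewers who want to sanity-check
the address arithmetic and CDB construction above, the minimal user-space
sketch below mirrors the shift/mask decomposition of ufshpb_get_pos_from_lpn()
and the byte layout written by ufshpb_set_hpb_read_to_upiu(). The
region/sub-region geometry (4096 entries per region, 1024 per sub-region) and
the UFSHPB_READ opcode value (0xF8) are assumptions for illustration only; the
driver derives the real shifts and masks from the device descriptors at
runtime.

/*
 * Illustrative user-space sketch only; not part of the patch.
 * Assumed geometry: 4KB HPB entries, 16MB regions (4096 entries, shift 12)
 * and 4MB sub-regions (1024 entries, shift 10); assumed opcode value 0xF8.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define UFSHPB_READ_OPCODE	0xF8	/* assumed */
#define ENTRIES_PER_RGN_SHIFT	12	/* assumed */
#define ENTRIES_PER_SRGN_SHIFT	10	/* assumed */

/* Same shift/mask decomposition as ufshpb_get_pos_from_lpn() */
static void get_pos_from_lpn(unsigned long lpn, int *rgn_idx,
			     int *srgn_idx, int *offset)
{
	unsigned long rgn_offset;

	*rgn_idx = lpn >> ENTRIES_PER_RGN_SHIFT;
	rgn_offset = lpn & ((1UL << ENTRIES_PER_RGN_SHIFT) - 1);
	*srgn_idx = rgn_offset >> ENTRIES_PER_SRGN_SHIFT;
	*offset = rgn_offset & ((1UL << ENTRIES_PER_SRGN_SHIFT) - 1);
}

/*
 * Same byte layout as ufshpb_set_hpb_read_to_upiu(): opcode in byte 0,
 * the 8-byte PPN big-endian in bytes 6..13, transfer length in byte 14.
 */
static void set_hpb_read_cdb(uint8_t cdb[16], uint64_t ppn,
			     uint8_t transfer_len)
{
	int i;

	memset(cdb, 0, 16);
	cdb[0] = UFSHPB_READ_OPCODE;
	for (i = 0; i < 8; i++)
		cdb[6 + i] = (uint8_t)(ppn >> (8 * (7 - i)));
	cdb[14] = transfer_len;
}

int main(void)
{
	unsigned long lpn = 0x12345;
	int rgn_idx, srgn_idx, offset, i;
	uint8_t cdb[16];

	get_pos_from_lpn(lpn, &rgn_idx, &srgn_idx, &offset);
	printf("lpn 0x%lx -> rgn %d, srgn %d, offset %d\n",
	       lpn, rgn_idx, srgn_idx, offset);

	set_hpb_read_cdb(cdb, 0x0123456789abcdefULL, 1);
	for (i = 0; i < 16; i++)
		printf("%02x ", cdb[i]);
	printf("\n");
	return 0;
}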