From patchwork Wed Dec 7 20:46:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 633216 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J.
Bottomley" , Jinyoung Choi , open list Subject: [PATCH v10 01/16] ufs: core: Optimize duplicate code to read extended feature Date: Wed, 7 Dec 2022 12:46:12 -0800 Message-ID: <08bb2c55df99717201238acb3a3b024b907309fb.1670445698.git.quic_asutoshd@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: 5ZFfwJ3n5oTjZAdeQaLUCqIIfDsVaAcf X-Proofpoint-ORIG-GUID: 5ZFfwJ3n5oTjZAdeQaLUCqIIfDsVaAcf X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-12-07_09,2022-12-07_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 clxscore=1015 bulkscore=0 suspectscore=0 priorityscore=1501 adultscore=0 mlxlogscore=999 impostorscore=0 malwarescore=0 phishscore=0 lowpriorityscore=0 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2212070174 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org The code to parse the extended feature is duplicated twice in the ufs core. Replace the duplicated code with a function. Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufshcd.c | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 2dbe249..6ea22b5 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -215,6 +215,17 @@ ufs_get_desired_pm_lvl_for_dev_link_state(enum ufs_dev_pwr_mode dev_state, return UFS_PM_LVL_0; } +static unsigned int ufs_get_ext_ufs_feature(struct ufs_hba *hba, + const u8 *desc_buf) +{ + if (hba->desc_size[QUERY_DESC_IDN_DEVICE] < + DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP + 4) + return 0; + + return get_unaligned_be32(desc_buf + + DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP); +} + static const struct ufs_dev_quirk ufs_fixups[] = { /* UFS cards deviations table */ { .wmanufacturerid = UFS_VENDOR_MICRON, @@ -7608,13 +7619,7 @@ static void ufshcd_wb_probe(struct ufs_hba *hba, const u8 *desc_buf) (hba->dev_quirks & UFS_DEVICE_QUIRK_SUPPORT_EXTENDED_FEATURES))) goto wb_disabled; - if (hba->desc_size[QUERY_DESC_IDN_DEVICE] < - DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP + 4) - goto wb_disabled; - - ext_ufs_feature = get_unaligned_be32(desc_buf + - DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP); - + ext_ufs_feature = ufs_get_ext_ufs_feature(hba, desc_buf); if (!(ext_ufs_feature & UFS_DEV_WRITE_BOOSTER_SUP)) goto wb_disabled; @@ -7668,7 +7673,7 @@ static void ufshcd_temp_notif_probe(struct ufs_hba *hba, const u8 *desc_buf) if (!(hba->caps & UFSHCD_CAP_TEMP_NOTIF) || dev_info->wspecversion < 0x300) return; - ext_ufs_feature = get_unaligned_be32(desc_buf + DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP); + ext_ufs_feature = ufs_get_ext_ufs_feature(hba, desc_buf); if (ext_ufs_feature & UFS_DEV_LOW_TEMP_NOTIF) mask |= MASK_EE_TOO_LOW_TEMP; From patchwork Wed Dec 7 20:46:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 633217 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org 
(vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix); Wed, 7 Dec 2022 20:47:08 +0000 (UTC) From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. Bottomley" , Jinyoung Choi , Yoshihiro Shimoda , Keoseong Park , Kiwoong Kim , open list Subject: [PATCH v10 02/16] ufs: core: Probe for ext_iid support Date: Wed, 7 Dec 2022 12:46:13 -0800 Message-ID: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org The Task Tag field is limited to 8 bits, which restricts the number of active I/Os to 255. In Multi-circular queue (MCQ) mode this may not be enough. The specification provides EXT_IID, which can be used to increase the number of active I/Os if both the UFS device and the UFSHC support it. Add support to probe for EXT_IID support in the UFS device and the UFSHC.
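The dependency this patch establishes can be summarized as follows: EXT_IID is only usable when both the host controller advertises it (via the MCQCAP register, cached in hba->ext_iid_sup) and the UFS device reports it (via the extended-feature bit and the bEXTIIDEn attribute, cached in dev_info->b_ext_iid_en). A minimal sketch, not part of this patch and with a hypothetical helper name:

static inline bool ufs_ext_iid_usable(struct ufs_hba *hba)
{
	/* host-side support from MCQCAP and device-side enable from bEXTIIDEn */
	return hba->ext_iid_sup && hba->dev_info.b_ext_iid_en;
}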
Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Avri Altman Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufshcd.c | 31 +++++++++++++++++++++++++++++++ include/ufs/ufs.h | 4 ++++ include/ufs/ufshcd.h | 4 ++++ include/ufs/ufshci.h | 7 +++++++ 4 files changed, 46 insertions(+) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 6ea22b5..595fd3c 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -2258,6 +2258,10 @@ static inline int ufshcd_hba_capabilities(struct ufs_hba *hba) if (err) dev_err(hba->dev, "crypto setup failed\n"); + hba->mcq_capabilities = ufshcd_readl(hba, REG_MCQCAP); + hba->ext_iid_sup = FIELD_GET(MASK_EXT_IID_SUPPORT, + hba->mcq_capabilities); + return err; } @@ -7687,6 +7691,30 @@ static void ufshcd_temp_notif_probe(struct ufs_hba *hba, const u8 *desc_buf) } } +static void ufshcd_ext_iid_probe(struct ufs_hba *hba, u8 *desc_buf) +{ + struct ufs_dev_info *dev_info = &hba->dev_info; + u32 ext_ufs_feature; + u32 ext_iid_en = 0; + int err; + + /* Only UFS-4.0 and above may support EXT_IID */ + if (dev_info->wspecversion < 0x400) + goto out; + + ext_ufs_feature = ufs_get_ext_ufs_feature(hba, desc_buf); + if (!(ext_ufs_feature & UFS_DEV_EXT_IID_SUP)) + goto out; + + err = ufshcd_query_attr_retry(hba, UPIU_QUERY_OPCODE_READ_ATTR, + QUERY_ATTR_IDN_EXT_IID_EN, 0, 0, &ext_iid_en); + if (err) + dev_err(hba->dev, "failed reading bEXTIIDEn. err = %d\n", err); + +out: + dev_info->b_ext_iid_en = ext_iid_en; +} + void ufshcd_fixup_dev_quirks(struct ufs_hba *hba, const struct ufs_dev_quirk *fixups) { @@ -7785,6 +7813,9 @@ static int ufs_get_device_desc(struct ufs_hba *hba) ufshcd_temp_notif_probe(hba, desc_buf); + if (hba->ext_iid_sup) + ufshcd_ext_iid_probe(hba, desc_buf); + /* * ufshcd_read_string_desc returns size of the string * reset the error value diff --git a/include/ufs/ufs.h b/include/ufs/ufs.h index 1bba3fe..ba2a1d8 100644 --- a/include/ufs/ufs.h +++ b/include/ufs/ufs.h @@ -165,6 +165,7 @@ enum attr_idn { QUERY_ATTR_IDN_AVAIL_WB_BUFF_SIZE = 0x1D, QUERY_ATTR_IDN_WB_BUFF_LIFE_TIME_EST = 0x1E, QUERY_ATTR_IDN_CURR_WB_BUFF_SIZE = 0x1F, + QUERY_ATTR_IDN_EXT_IID_EN = 0x2A, }; /* Descriptor idn for Query requests */ @@ -352,6 +353,7 @@ enum { UFS_DEV_EXT_TEMP_NOTIF = BIT(6), UFS_DEV_HPB_SUPPORT = BIT(7), UFS_DEV_WRITE_BOOSTER_SUP = BIT(8), + UFS_DEV_EXT_IID_SUP = BIT(16), }; #define UFS_DEV_HPB_SUPPORT_VERSION 0x310 @@ -601,6 +603,8 @@ struct ufs_dev_info { bool b_rpm_dev_flush_capable; u8 b_presrv_uspc_en; + /* UFS EXT_IID Enable */ + bool b_ext_iid_en; }; /* diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 5cf81df..aec37cb9 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -747,6 +747,7 @@ struct ufs_hba_monitor { * @outstanding_lock: Protects @outstanding_reqs. * @outstanding_reqs: Bits representing outstanding transfer requests * @capabilities: UFS Controller Capabilities + * @mcq_capabilities: UFS Multi Circular Queue capabilities * @nutrs: Transfer Request Queue depth supported by controller * @nutmrs: Task Management Queue depth supported by controller * @reserved_slot: Used to submit device commands. Protected by @dev_cmd.lock. 
@@ -830,6 +831,7 @@ struct ufs_hba_monitor { * device * @complete_put: whether or not to call ufshcd_rpm_put() from inside * ufshcd_resume_complete() + * @ext_iid_sup: is EXT_IID is supported by UFSHC */ struct ufs_hba { void __iomem *mmio_base; @@ -871,6 +873,7 @@ struct ufs_hba { u32 capabilities; int nutrs; + u32 mcq_capabilities; int nutmrs; u32 reserved_slot; u32 ufs_version; @@ -978,6 +981,7 @@ struct ufs_hba { #endif u32 luns_avail; bool complete_put; + bool ext_iid_sup; }; /* Returns true if clocks can be gated. Otherwise false */ diff --git a/include/ufs/ufshci.h b/include/ufs/ufshci.h index f525566..4d4da06 100644 --- a/include/ufs/ufshci.h +++ b/include/ufs/ufshci.h @@ -22,6 +22,7 @@ enum { /* UFSHCI Registers */ enum { REG_CONTROLLER_CAPABILITIES = 0x00, + REG_MCQCAP = 0x04, REG_UFS_VERSION = 0x08, REG_CONTROLLER_DEV_ID = 0x10, REG_CONTROLLER_PROD_ID = 0x14, @@ -68,6 +69,12 @@ enum { MASK_OUT_OF_ORDER_DATA_DELIVERY_SUPPORT = 0x02000000, MASK_UIC_DME_TEST_MODE_SUPPORT = 0x04000000, MASK_CRYPTO_SUPPORT = 0x10000000, + MASK_MCQ_SUPPORT = 0x40000000, +}; + +/* MCQ capability mask */ +enum { + MASK_EXT_IID_SUPPORT = 0x00000400, }; #define UFS_MASK(mask, offset) ((mask) << (offset)) From patchwork Wed Dec 7 20:46:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 631832 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F34F9C63708 for ; Wed, 7 Dec 2022 20:47:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229699AbiLGUrF (ORCPT ); Wed, 7 Dec 2022 15:47:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59662 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229961AbiLGUrE (ORCPT ); Wed, 7 Dec 2022 15:47:04 -0500 Received: from alexa-out-sd-01.qualcomm.com (alexa-out-sd-01.qualcomm.com [199.106.114.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6D888286CE; Wed, 7 Dec 2022 12:47:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1670446023; x=1701982023; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=JJLSbebFw9qNK2ogzbe+/HZ6FO1nbhq42JbF/wiwgGA=; b=xhRn1Kmfcrml4EQlotNpqvIUIVEOvAW082up6hTPexBrUuX1bjMDaCCM sczV3SvoU9TYoI1NbLXI0BdyAWOVr4UrO2oLZRXnaUrpSTAsZhlumVlfv vZC8aFdnUXU7+VyrvwtWyMI63Wa7JXhJUvYapXYomlk60kPmVQ1FsH/9u E=; Received: from unknown (HELO ironmsg-SD-alpha.qualcomm.com) ([10.53.140.30]) by alexa-out-sd-01.qualcomm.com with ESMTP; 07 Dec 2022 12:47:03 -0800 X-QCInternal: smtphost Received: from unknown (HELO nasanex01a.na.qualcomm.com) ([10.52.223.231]) by ironmsg-SD-alpha.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Dec 2022 12:47:02 -0800 Received: from asutoshd-linux1.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.36; Wed, 7 Dec 2022 12:47:01 -0800 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. 
Bottomley" , Jinyoung Choi , Keoseong Park , Yoshihiro Shimoda , open list Subject: [PATCH v10 03/16] ufs: core: Introduce Multi-circular queue capability Date: Wed, 7 Dec 2022 12:46:14 -0800 Message-ID: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Add support to check for MCQ capability in the UFSHC. Add a module parameter to disable MCQ if needed. Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufshcd.c | 26 ++++++++++++++++++++++++++ include/ufs/ufshcd.h | 2 ++ 2 files changed, 28 insertions(+) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 595fd3c..eca15b0 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -89,6 +89,28 @@ /* Polling time to wait for fDeviceInit */ #define FDEVICEINIT_COMPL_TIMEOUT 1500 /* millisecs */ +/* UFSHC 4.0 compliant HC support this mode, refer param_set_mcq_mode() */ +static bool use_mcq_mode = true; + +static int param_set_mcq_mode(const char *val, const struct kernel_param *kp) +{ + int ret; + + ret = param_set_bool(val, kp); + if (ret) + return ret; + + return 0; +} + +static const struct kernel_param_ops mcq_mode_ops = { + .set = param_set_mcq_mode, + .get = param_get_bool, +}; + +module_param_cb(use_mcq_mode, &mcq_mode_ops, &use_mcq_mode, 0644); +MODULE_PARM_DESC(use_mcq_mode, "Control MCQ mode for controllers starting from UFSHCI 4.0. 1 - enable MCQ, 0 - disable MCQ. MCQ is enabled by default"); + #define ufshcd_toggle_vreg(_dev, _vreg, _on) \ ({ \ int _ret; \ @@ -2258,6 +2280,10 @@ static inline int ufshcd_hba_capabilities(struct ufs_hba *hba) if (err) dev_err(hba->dev, "crypto setup failed\n"); + hba->mcq_sup = FIELD_GET(MASK_MCQ_SUPPORT, hba->capabilities); + if (!hba->mcq_sup) + return err; + hba->mcq_capabilities = ufshcd_readl(hba, REG_MCQCAP); hba->ext_iid_sup = FIELD_GET(MASK_EXT_IID_SUPPORT, hba->mcq_capabilities); diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index aec37cb9..70c0f9f 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -832,6 +832,7 @@ struct ufs_hba_monitor { * @complete_put: whether or not to call ufshcd_rpm_put() from inside * ufshcd_resume_complete() * @ext_iid_sup: is EXT_IID is supported by UFSHC + * @mcq_sup: is mcq supported by UFSHC */ struct ufs_hba { void __iomem *mmio_base; @@ -982,6 +983,7 @@ struct ufs_hba { u32 luns_avail; bool complete_put; bool ext_iid_sup; + bool mcq_sup; }; /* Returns true if clocks can be gated. 
Otherwise false */ From patchwork Wed Dec 7 20:46:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 633215 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J.
Bottomley" , Jinyoung Choi , open list Subject: [PATCH v10 04/16] ufs: core: Defer adding host to scsi if mcq is supported Date: Wed, 7 Dec 2022 12:46:15 -0800 Message-ID: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: kX4lhPboR2UHgm54t2485f9ddGntK4eD X-Proofpoint-ORIG-GUID: kX4lhPboR2UHgm54t2485f9ddGntK4eD X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-12-07_09,2022-12-07_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 phishscore=0 impostorscore=0 lowpriorityscore=0 malwarescore=0 adultscore=0 priorityscore=1501 mlxlogscore=999 suspectscore=0 bulkscore=0 clxscore=1015 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2212070174 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org If MCQ support is present, enabling it after MCQ support has been configured would require reallocating tags and memory. It would also free up the already allocated memory in Single Doorbell Mode. So defer invoking scsi_add_host() until MCQ is configured. Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufshcd.c | 24 ++++++++++++++++++++---- 1 file changed, 20 insertions(+), 4 deletions(-) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index eca15b0..869e495 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -92,6 +92,11 @@ /* UFSHC 4.0 compliant HC support this mode, refer param_set_mcq_mode() */ static bool use_mcq_mode = true; +static bool is_mcq_supported(struct ufs_hba *hba) +{ + return hba->mcq_sup && use_mcq_mode; +} + static int param_set_mcq_mode(const char *val, const struct kernel_param *kp) { int ret; @@ -8227,6 +8232,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params) int ret; unsigned long flags; ktime_t start = ktime_get(); + struct Scsi_Host *host = hba->host; hba->ufshcd_state = UFSHCD_STATE_RESET; @@ -8261,6 +8267,14 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params) ret = ufshcd_device_params_init(hba); if (ret) goto out; + + if (is_mcq_supported(hba)) { + ret = scsi_add_host(host, hba->dev); + if (ret) { + dev_err(hba->dev, "scsi_add_host failed\n"); + goto out; + } + } } ufshcd_tune_unipro_params(hba); @@ -9857,10 +9871,12 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq) hba->is_irq_enabled = true; } - err = scsi_add_host(host, hba->dev); - if (err) { - dev_err(hba->dev, "scsi_add_host failed\n"); - goto out_disable; + if (!is_mcq_supported(hba)) { + err = scsi_add_host(host, hba->dev); + if (err) { + dev_err(hba->dev, "scsi_add_host failed\n"); + goto out_disable; + } } hba->tmf_tag_set = (struct blk_mq_tag_set) { From patchwork Wed Dec 7 20:46:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 631830 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org 
[23.128.96.18]) by smtp.lore.kernel.org (Postfix); Wed, 7 Dec 2022 20:47:48 +0000 (UTC) From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. Bottomley" , Krzysztof Kozlowski , Jinyoung Choi , Yoshihiro Shimoda , Keoseong Park , open list Subject: [PATCH v10 05/16] ufs: core: mcq: Add support to allocate multiple queues Date: Wed, 7 Dec 2022 12:46:16 -0800 Message-ID: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Multi-circular queue (MCQ) support was added in the UFSHCI v4.0 standard in addition to the legacy Single Doorbell mode. MCQ mode supports multiple submission and completion queues. Add support to allocate and configure these queues, and add module parameters to control how many queues of each type are created.
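For reference, the queue budget enforced by the allocation code added below can be expressed compactly: one hardware queue is always reserved for device-management commands, and the rw, read, and poll queues requested through the module parameters must fit within the MAXQ value advertised in the MCQCAP register. A minimal sketch under that assumption, with a hypothetical function name (the real check lives in ufshcd_mcq_config_nr_queues()):

static bool ufs_mcq_queue_counts_fit(u32 hba_maxq, u32 rw, u32 read, u32 poll)
{
	/* the 1 accounts for the dedicated device-command queue */
	return 1 + rw + read + poll <= hba_maxq;
}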
Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/Makefile | 2 +- drivers/ufs/core/ufs-mcq.c | 124 +++++++++++++++++++++++++++++++++++++++++ drivers/ufs/core/ufshcd-priv.h | 1 + drivers/ufs/core/ufshcd.c | 12 ++++ include/ufs/ufshcd.h | 4 ++ 5 files changed, 142 insertions(+), 1 deletion(-) create mode 100644 drivers/ufs/core/ufs-mcq.c diff --git a/drivers/ufs/core/Makefile b/drivers/ufs/core/Makefile index 62f38c5..4d02e0f 100644 --- a/drivers/ufs/core/Makefile +++ b/drivers/ufs/core/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o -ufshcd-core-y += ufshcd.o ufs-sysfs.o +ufshcd-core-y += ufshcd.o ufs-sysfs.o ufs-mcq.o ufshcd-core-$(CONFIG_DEBUG_FS) += ufs-debugfs.o ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c new file mode 100644 index 0000000..6ed3625 --- /dev/null +++ b/drivers/ufs/core/ufs-mcq.c @@ -0,0 +1,124 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022 Qualcomm Innovation Center. All rights reserved. + * + * Authors: + * Asutosh Das + * Can Guo + */ + +#include +#include +#include +#include +#include "ufshcd-priv.h" + +#define MAX_QUEUE_SUP GENMASK(7, 0) +#define UFS_MCQ_MIN_RW_QUEUES 2 +#define UFS_MCQ_MIN_READ_QUEUES 0 +#define UFS_MCQ_NUM_DEV_CMD_QUEUES 1 +#define UFS_MCQ_MIN_POLL_QUEUES 0 + +static int rw_queue_count_set(const char *val, const struct kernel_param *kp) +{ + return param_set_uint_minmax(val, kp, UFS_MCQ_MIN_RW_QUEUES, + num_possible_cpus()); +} + +static const struct kernel_param_ops rw_queue_count_ops = { + .set = rw_queue_count_set, + .get = param_get_uint, +}; + +static unsigned int rw_queues; +module_param_cb(rw_queues, &rw_queue_count_ops, &rw_queues, 0644); +MODULE_PARM_DESC(rw_queues, + "Number of interrupt driven I/O queues used for rw. Default value is nr_cpus"); + +static int read_queue_count_set(const char *val, const struct kernel_param *kp) +{ + return param_set_uint_minmax(val, kp, UFS_MCQ_MIN_READ_QUEUES, + num_possible_cpus()); +} + +static const struct kernel_param_ops read_queue_count_ops = { + .set = read_queue_count_set, + .get = param_get_uint, +}; + +static unsigned int read_queues; +module_param_cb(read_queues, &read_queue_count_ops, &read_queues, 0644); +MODULE_PARM_DESC(read_queues, + "Number of interrupt driven read queues used for read. Default value is 0"); + +static int poll_queue_count_set(const char *val, const struct kernel_param *kp) +{ + return param_set_uint_minmax(val, kp, UFS_MCQ_MIN_POLL_QUEUES, + num_possible_cpus()); +} + +static const struct kernel_param_ops poll_queue_count_ops = { + .set = poll_queue_count_set, + .get = param_get_uint, +}; + +static unsigned int poll_queues = 1; +module_param_cb(poll_queues, &poll_queue_count_ops, &poll_queues, 0644); +MODULE_PARM_DESC(poll_queues, + "Number of poll queues used for r/w. 
Default value is 1"); + +static int ufshcd_mcq_config_nr_queues(struct ufs_hba *hba) +{ + int i; + u32 hba_maxq, rem, tot_queues; + struct Scsi_Host *host = hba->host; + + hba_maxq = FIELD_GET(MAX_QUEUE_SUP, hba->mcq_capabilities); + + tot_queues = UFS_MCQ_NUM_DEV_CMD_QUEUES + read_queues + poll_queues + + rw_queues; + + if (hba_maxq < tot_queues) { + dev_err(hba->dev, "Total queues (%d) exceeds HC capacity (%d)\n", + tot_queues, hba_maxq); + return -EOPNOTSUPP; + } + + rem = hba_maxq - UFS_MCQ_NUM_DEV_CMD_QUEUES; + + if (rw_queues) { + hba->nr_queues[HCTX_TYPE_DEFAULT] = rw_queues; + rem -= hba->nr_queues[HCTX_TYPE_DEFAULT]; + } else { + rw_queues = num_possible_cpus(); + } + + if (poll_queues) { + hba->nr_queues[HCTX_TYPE_POLL] = poll_queues; + rem -= hba->nr_queues[HCTX_TYPE_POLL]; + } + + if (read_queues) { + hba->nr_queues[HCTX_TYPE_READ] = read_queues; + rem -= hba->nr_queues[HCTX_TYPE_READ]; + } + + if (!hba->nr_queues[HCTX_TYPE_DEFAULT]) + hba->nr_queues[HCTX_TYPE_DEFAULT] = min3(rem, rw_queues, + num_possible_cpus()); + + for (i = 0; i < HCTX_MAX_TYPES; i++) + host->nr_hw_queues += hba->nr_queues[i]; + + hba->nr_hw_queues = host->nr_hw_queues + UFS_MCQ_NUM_DEV_CMD_QUEUES; + return 0; +} + +int ufshcd_mcq_init(struct ufs_hba *hba) +{ + int ret; + + ret = ufshcd_mcq_config_nr_queues(hba); + + return ret; +} diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index a9e8e1f..9368ba2 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -61,6 +61,7 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode, int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode, enum flag_idn idn, u8 index, bool *flag_res); void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); +int ufshcd_mcq_init(struct ufs_hba *hba); #define SD_ASCII_STD true #define SD_RAW false diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 869e495..4e2355b 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -8220,6 +8220,11 @@ static int ufshcd_add_lus(struct ufs_hba *hba) return ret; } +static int ufshcd_alloc_mcq(struct ufs_hba *hba) +{ + return ufshcd_mcq_init(hba); +} + /** * ufshcd_probe_hba - probe hba to detect device and initialize it * @hba: per-adapter instance @@ -8269,6 +8274,13 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params) goto out; if (is_mcq_supported(hba)) { + ret = ufshcd_alloc_mcq(hba); + if (ret) { + /* Continue with SDB mode */ + use_mcq_mode = false; + dev_err(hba->dev, "MCQ mode is disabled, err=%d\n", + ret); + } ret = scsi_add_host(host, hba->dev); if (ret) { dev_err(hba->dev, "scsi_add_host failed\n"); diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 70c0f9f..146b613 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -829,6 +829,8 @@ struct ufs_hba_monitor { * ee_ctrl_mask * @luns_avail: number of regular and well known LUNs supported by the UFS * device + * @nr_hw_queues: number of hardware queues configured + * @nr_queues: number of Queues of different queue types * @complete_put: whether or not to call ufshcd_rpm_put() from inside * ufshcd_resume_complete() * @ext_iid_sup: is EXT_IID is supported by UFSHC @@ -981,6 +983,8 @@ struct ufs_hba { u32 debugfs_ee_rate_limit_ms; #endif u32 luns_avail; + unsigned int nr_hw_queues; + unsigned int nr_queues[HCTX_MAX_TYPES]; bool complete_put; bool ext_iid_sup; bool mcq_sup; From patchwork Wed Dec 7 20:46:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 633214 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J.
Bottomley" , "Andy Gross" , Bjorn Andersson , "Konrad Dybcio" , Krzysztof Kozlowski , Jinyoung Choi , Arthur Simchaev , Yoshihiro Shimoda , Keoseong Park , open list Subject: [PATCH v10 06/16] ufs: core: mcq: Configure resource regions Date: Wed, 7 Dec 2022 12:46:17 -0800 Message-ID: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: MDDbGsoyzmbQenUqdoHrckGczqDGPzfP X-Proofpoint-ORIG-GUID: MDDbGsoyzmbQenUqdoHrckGczqDGPzfP X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-12-07_09,2022-12-07_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 mlxlogscore=999 bulkscore=0 clxscore=1011 adultscore=0 mlxscore=0 priorityscore=1501 impostorscore=0 phishscore=0 spamscore=0 malwarescore=0 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2212070174 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Define the mcq resources and add support to ioremap the resource regions. Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Manivannan Sadhasivam Reviewed-by: Bart Van Assche --- drivers/ufs/core/ufs-mcq.c | 3 ++ drivers/ufs/core/ufshcd-priv.h | 8 ++++ drivers/ufs/host/ufs-qcom.c | 101 +++++++++++++++++++++++++++++++++++++++++ include/ufs/ufshcd.h | 30 ++++++++++++ 4 files changed, 142 insertions(+) diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c index 6ed3625..65c0037 100644 --- a/drivers/ufs/core/ufs-mcq.c +++ b/drivers/ufs/core/ufs-mcq.c @@ -119,6 +119,9 @@ int ufshcd_mcq_init(struct ufs_hba *hba) int ret; ret = ufshcd_mcq_config_nr_queues(hba); + if (ret) + return ret; + ret = ufshcd_vops_mcq_config_resource(hba); return ret; } diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index 9368ba2..3e21242 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -227,6 +227,14 @@ static inline void ufshcd_vops_config_scaling_param(struct ufs_hba *hba, hba->vops->config_scaling_param(hba, p, data); } +static inline int ufshcd_vops_mcq_config_resource(struct ufs_hba *hba) +{ + if (hba->vops && hba->vops->mcq_config_resource) + return hba->vops->mcq_config_resource(hba); + + return -EOPNOTSUPP; +} + extern const struct ufs_pm_lvl_states ufs_pm_lvl_states[]; /** diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c index 8ad1415..045f8ab 100644 --- a/drivers/ufs/host/ufs-qcom.c +++ b/drivers/ufs/host/ufs-qcom.c @@ -25,6 +25,12 @@ #define UFS_QCOM_DEFAULT_DBG_PRINT_EN \ (UFS_QCOM_DBG_PRINT_REGS_EN | UFS_QCOM_DBG_PRINT_TEST_BUS_EN) +#define MCQ_QCFGPTR_MASK GENMASK(7, 0) +#define MCQ_QCFGPTR_UNIT 0x200 +#define MCQ_SQATTR_OFFSET(c) \ + ((((c) >> 16) & MCQ_QCFGPTR_MASK) * MCQ_QCFGPTR_UNIT) +#define MCQ_QCFG_SIZE 0x40 + enum { TSTBUS_UAWM, TSTBUS_UARM, @@ -1424,6 +1430,100 @@ static void ufs_qcom_config_scaling_param(struct ufs_hba *hba, } #endif +/* Resources */ +static const struct ufshcd_res_info ufs_res_info[RES_MAX] = { + {.name = "ufs_mem",}, + {.name = "mcq",}, + /* Submission Queue DAO */ + {.name = "mcq_sqd",}, + /* Submission Queue Interrupt Status */ + {.name = "mcq_sqis",}, + /* 
Completion Queue DAO */ + {.name = "mcq_cqd",}, + /* Completion Queue Interrupt Status */ + {.name = "mcq_cqis",}, + /* MCQ vendor specific */ + {.name = "mcq_vs",}, +}; + +static int ufs_qcom_mcq_config_resource(struct ufs_hba *hba) +{ + struct platform_device *pdev = to_platform_device(hba->dev); + struct ufshcd_res_info *res; + struct resource *res_mem, *res_mcq; + int i, ret = 0; + + memcpy(hba->res, ufs_res_info, sizeof(ufs_res_info)); + + for (i = 0; i < RES_MAX; i++) { + res = &hba->res[i]; + res->resource = platform_get_resource_byname(pdev, + IORESOURCE_MEM, + res->name); + if (!res->resource) { + dev_info(hba->dev, "Resource %s not provided\n", res->name); + if (i == RES_UFS) + return -ENOMEM; + continue; + } else if (i == RES_UFS) { + res_mem = res->resource; + res->base = hba->mmio_base; + continue; + } + + res->base = devm_ioremap_resource(hba->dev, res->resource); + if (IS_ERR(res->base)) { + dev_err(hba->dev, "Failed to map res %s, err=%d\n", + res->name, (int)PTR_ERR(res->base)); + res->base = NULL; + ret = PTR_ERR(res->base); + return ret; + } + } + + /* MCQ resource provided in DT */ + res = &hba->res[RES_MCQ]; + /* Bail if MCQ resource is provided */ + if (res->base) + goto out; + + /* Explicitly allocate MCQ resource from ufs_mem */ + res_mcq = devm_kzalloc(hba->dev, sizeof(*res_mcq), GFP_KERNEL); + if (!res_mcq) + return ret; + + res_mcq->start = res_mem->start + + MCQ_SQATTR_OFFSET(hba->mcq_capabilities); + res_mcq->end = res_mcq->start + hba->nr_hw_queues * MCQ_QCFG_SIZE - 1; + res_mcq->flags = res_mem->flags; + res_mcq->name = "mcq"; + + ret = insert_resource(&iomem_resource, res_mcq); + if (ret) { + dev_err(hba->dev, "Failed to insert MCQ resource, err=%d\n", + ret); + goto insert_res_err; + } + + res->base = devm_ioremap_resource(hba->dev, res_mcq); + if (IS_ERR(res->base)) { + dev_err(hba->dev, "MCQ registers mapping failed, err=%d\n", + (int)PTR_ERR(res->base)); + ret = PTR_ERR(res->base); + goto ioremap_err; + } + +out: + hba->mcq_base = res->base; + return 0; +ioremap_err: + res->base = NULL; + remove_resource(res_mcq); +insert_res_err: + devm_kfree(hba->dev, res_mcq); + return ret; +} + /* * struct ufs_hba_qcom_vops - UFS QCOM specific variant operations * @@ -1447,6 +1547,7 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = { .device_reset = ufs_qcom_device_reset, .config_scaling_param = ufs_qcom_config_scaling_param, .program_key = ufs_qcom_ice_program_key, + .mcq_config_resource = ufs_qcom_mcq_config_resource, }; /** diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 146b613..0e21a6a 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -297,6 +297,7 @@ struct ufs_pwr_mode_info { * @config_scaling_param: called to configure clock scaling parameters * @program_key: program or evict an inline encryption key * @event_notify: called to notify important events + * @mcq_config_resource: called to configure MCQ platform resources */ struct ufs_hba_variant_ops { const char *name; @@ -335,6 +336,7 @@ struct ufs_hba_variant_ops { const union ufs_crypto_cfg_entry *cfg, int slot); void (*event_notify)(struct ufs_hba *hba, enum ufs_event_type evt, void *data); + int (*mcq_config_resource)(struct ufs_hba *hba); }; /* clock gating state */ @@ -724,6 +726,30 @@ struct ufs_hba_monitor { }; /** + * struct ufshcd_res_info_t - MCQ related resource regions + * + * @name: resource name + * @resource: pointer to resource region + * @base: register base address + */ +struct ufshcd_res_info { + const char *name; + struct resource *resource; + void 
__iomem *base; +}; + +enum ufshcd_res { + RES_UFS, + RES_MCQ, + RES_MCQ_SQD, + RES_MCQ_SQIS, + RES_MCQ_CQD, + RES_MCQ_CQIS, + RES_MCQ_VS, + RES_MAX, +}; + +/** * struct ufs_hba - per adapter private structure * @mmio_base: UFSHCI base register address * @ucdl_base_addr: UFS Command Descriptor base address @@ -835,6 +861,8 @@ struct ufs_hba_monitor { * ufshcd_resume_complete() * @ext_iid_sup: is EXT_IID is supported by UFSHC * @mcq_sup: is mcq supported by UFSHC + * @res: array of resource info of MCQ registers + * @mcq_base: Multi circular queue registers base address */ struct ufs_hba { void __iomem *mmio_base; @@ -988,6 +1016,8 @@ struct ufs_hba { bool complete_put; bool ext_iid_sup; bool mcq_sup; + struct ufshcd_res_info res[RES_MAX]; + void __iomem *mcq_base; }; /* Returns true if clocks can be gated. Otherwise false */ From patchwork Wed Dec 7 20:46:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 631829 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2B9EAC63708 for ; Wed, 7 Dec 2022 20:48:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230019AbiLGUs0 (ORCPT ); Wed, 7 Dec 2022 15:48:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60588 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229741AbiLGUsA (ORCPT ); Wed, 7 Dec 2022 15:48:00 -0500 Received: from alexa-out-sd-02.qualcomm.com (alexa-out-sd-02.qualcomm.com [199.106.114.39]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9FDB369A93; Wed, 7 Dec 2022 12:47:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1670446072; x=1701982072; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=Q4zhnEaUSNIwgRgYc8JFmnlUC0QDACvBhC+35B4jMXo=; b=hpQfTKakxQE6u3eW+9/vn5FC9ZxkfLMjwUR3o9OJP/CyjNt6OD8COT0N BvGp/rRKrIidvkP/L+eXV9uPRtr9QC7fvVOV77iJhHJXDoVLeNbDKoneF 19RPDSmo8Eti3nn+M1yRlGWIWIFnqbQCr4vZErLJjo6UgzTMDVEPyJaX8 c=; Received: from unknown (HELO ironmsg04-sd.qualcomm.com) ([10.53.140.144]) by alexa-out-sd-02.qualcomm.com with ESMTP; 07 Dec 2022 12:47:52 -0800 X-QCInternal: smtphost Received: from unknown (HELO nasanex01a.na.qualcomm.com) ([10.52.223.231]) by ironmsg04-sd.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Dec 2022 12:47:52 -0800 Received: from asutoshd-linux1.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.36; Wed, 7 Dec 2022 12:47:51 -0800 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. 
Bottomley" , "Andy Gross" , Bjorn Andersson , "Konrad Dybcio" , Jinyoung Choi , Krzysztof Kozlowski , Keoseong Park , Yoshihiro Shimoda , Kiwoong Kim , open list Subject: [PATCH v10 07/16] ufs: core: mcq: Calculate queue depth Date: Wed, 7 Dec 2022 12:46:18 -0800 Message-ID: <789527fa325104a4819af2022d76c51113ca9231.1670445698.git.quic_asutoshd@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org The ufs device defines the supported queuedepth by bqueuedepth which has a max value of 256. The HC defines MAC (Max Active Commands) that define the max number of commands that in flight to the ufs device. Calculate and configure the nutrs based on both these values. Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Manivannan Sadhasivam Reviewed-by: Bart Van Assche --- drivers/ufs/core/ufs-mcq.c | 35 +++++++++++++++++++++++++++++++++++ drivers/ufs/core/ufshcd-priv.h | 9 +++++++++ drivers/ufs/core/ufshcd.c | 17 ++++++++++++++++- drivers/ufs/host/ufs-qcom.c | 7 +++++++ drivers/ufs/host/ufs-qcom.h | 1 + include/ufs/ufs.h | 2 ++ include/ufs/ufshcd.h | 2 ++ include/ufs/ufshci.h | 1 + 8 files changed, 73 insertions(+), 1 deletion(-) diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c index 65c0037..2f680ff 100644 --- a/drivers/ufs/core/ufs-mcq.c +++ b/drivers/ufs/core/ufs-mcq.c @@ -19,6 +19,9 @@ #define UFS_MCQ_NUM_DEV_CMD_QUEUES 1 #define UFS_MCQ_MIN_POLL_QUEUES 0 +#define MAX_DEV_CMD_ENTRIES 2 +#define MCQ_CFG_MAC_MASK GENMASK(16, 8) + static int rw_queue_count_set(const char *val, const struct kernel_param *kp) { return param_set_uint_minmax(val, kp, UFS_MCQ_MIN_RW_QUEUES, @@ -67,6 +70,38 @@ module_param_cb(poll_queues, &poll_queue_count_ops, &poll_queues, 0644); MODULE_PARM_DESC(poll_queues, "Number of poll queues used for r/w. Default value is 1"); +/** + * ufshcd_mcq_decide_queue_depth - decide the queue depth + * @hba - per adapter instance + * + * Returns queue-depth on success, non-zero on error + * + * MAC - Max. Active Command of the Host Controller (HC) + * HC wouldn't send more than this commands to the device. + * It is mandatory to implement get_hba_mac() to enable MCQ mode. + * Calculates and adjusts the queue depth based on the depth + * supported by the HC and ufs device. + */ +int ufshcd_mcq_decide_queue_depth(struct ufs_hba *hba) +{ + int mac; + + /* Mandatory to implement get_hba_mac() */ + mac = ufshcd_mcq_vops_get_hba_mac(hba); + if (mac < 0) { + dev_err(hba->dev, "Failed to get mac, err=%d\n", mac); + return mac; + } + + WARN_ON_ONCE(!hba->dev_info.bqueuedepth); + /* + * max. value of bqueuedepth = 256, mac is host dependent. + * It is mandatory for UFS device to define bQueueDepth if + * shared queuing architecture is enabled. 
+ */ + return min_t(int, mac, hba->dev_info.bqueuedepth); +} + static int ufshcd_mcq_config_nr_queues(struct ufs_hba *hba) { int i; diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index 3e21242..d9b2087 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -62,6 +62,7 @@ int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode, enum flag_idn idn, u8 index, bool *flag_res); void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); int ufshcd_mcq_init(struct ufs_hba *hba); +int ufshcd_mcq_decide_queue_depth(struct ufs_hba *hba); #define SD_ASCII_STD true #define SD_RAW false @@ -235,6 +236,14 @@ static inline int ufshcd_vops_mcq_config_resource(struct ufs_hba *hba) return -EOPNOTSUPP; } +static inline int ufshcd_mcq_vops_get_hba_mac(struct ufs_hba *hba) +{ + if (hba->vops && hba->vops->get_hba_mac) + return hba->vops->get_hba_mac(hba); + + return -EOPNOTSUPP; +} + extern const struct ufs_pm_lvl_states ufs_pm_lvl_states[]; /** diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 4e2355b..8218b2a 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -7807,6 +7807,7 @@ static int ufs_get_device_desc(struct ufs_hba *hba) /* getting Specification Version in big endian format */ dev_info->wspecversion = desc_buf[DEVICE_DESC_PARAM_SPEC_VER] << 8 | desc_buf[DEVICE_DESC_PARAM_SPEC_VER + 1]; + dev_info->bqueuedepth = desc_buf[DEVICE_DESC_PARAM_Q_DPTH]; b_ufs_feature_sup = desc_buf[DEVICE_DESC_PARAM_UFS_FEAT]; model_index = desc_buf[DEVICE_DESC_PARAM_PRDCT_NAME]; @@ -8222,7 +8223,21 @@ static int ufshcd_add_lus(struct ufs_hba *hba) static int ufshcd_alloc_mcq(struct ufs_hba *hba) { - return ufshcd_mcq_init(hba); + int ret; + int old_nutrs = hba->nutrs; + + ret = ufshcd_mcq_decide_queue_depth(hba); + if (ret < 0) + return ret; + + hba->nutrs = ret; + ret = ufshcd_mcq_init(hba); + if (ret) { + hba->nutrs = old_nutrs; + return ret; + } + + return 0; } /** diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c index 045f8ab..a52db10 100644 --- a/drivers/ufs/host/ufs-qcom.c +++ b/drivers/ufs/host/ufs-qcom.c @@ -1524,6 +1524,12 @@ static int ufs_qcom_mcq_config_resource(struct ufs_hba *hba) return ret; } +static int ufs_qcom_get_hba_mac(struct ufs_hba *hba) +{ + /* Qualcomm HC supports up to 64 */ + return MAX_SUPP_MAC; +} + /* * struct ufs_hba_qcom_vops - UFS QCOM specific variant operations * @@ -1548,6 +1554,7 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = { .config_scaling_param = ufs_qcom_config_scaling_param, .program_key = ufs_qcom_ice_program_key, .mcq_config_resource = ufs_qcom_mcq_config_resource, + .get_hba_mac = ufs_qcom_get_hba_mac, }; /** diff --git a/drivers/ufs/host/ufs-qcom.h b/drivers/ufs/host/ufs-qcom.h index 44466a3..f86e532 100644 --- a/drivers/ufs/host/ufs-qcom.h +++ b/drivers/ufs/host/ufs-qcom.h @@ -16,6 +16,7 @@ #define HBRN8_POLL_TOUT_MS 100 #define DEFAULT_CLK_RATE_HZ 1000000 #define BUS_VECTOR_NAME_LEN 32 +#define MAX_SUPP_MAC 64 #define UFS_HW_VER_MAJOR_SHFT (28) #define UFS_HW_VER_MAJOR_MASK (0x000F << UFS_HW_VER_MAJOR_SHFT) diff --git a/include/ufs/ufs.h b/include/ufs/ufs.h index ba2a1d8..5112418 100644 --- a/include/ufs/ufs.h +++ b/include/ufs/ufs.h @@ -591,6 +591,8 @@ struct ufs_dev_info { u8 *model; u16 wspecversion; u32 clk_gating_wait_us; + /* Stores the depth of queue in UFS device */ + u8 bqueuedepth; /* UFS HPB related flag */ bool hpb_enabled; diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 0e21a6a..9d7829a 
100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -298,6 +298,7 @@ struct ufs_pwr_mode_info { * @program_key: program or evict an inline encryption key * @event_notify: called to notify important events * @mcq_config_resource: called to configure MCQ platform resources + * @get_hba_mac: called to get vendor specific mac value, mandatory for mcq mode */ struct ufs_hba_variant_ops { const char *name; @@ -337,6 +338,7 @@ struct ufs_hba_variant_ops { void (*event_notify)(struct ufs_hba *hba, enum ufs_event_type evt, void *data); int (*mcq_config_resource)(struct ufs_hba *hba); + int (*get_hba_mac)(struct ufs_hba *hba); }; /* clock gating state */ diff --git a/include/ufs/ufshci.h b/include/ufs/ufshci.h index 4d4da06..67fcebd 100644 --- a/include/ufs/ufshci.h +++ b/include/ufs/ufshci.h @@ -57,6 +57,7 @@ enum { REG_UFS_CCAP = 0x100, REG_UFS_CRYPTOCAP = 0x104, + REG_UFS_MCQ_CFG = 0x380, UFSHCI_CRYPTO_REG_SPACE_SIZE = 0x400, }; From patchwork Wed Dec 7 20:46:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 631828 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0E8DDC63708 for ; Wed, 7 Dec 2022 20:49:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229895AbiLGUtN (ORCPT ); Wed, 7 Dec 2022 15:49:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60588 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230003AbiLGUsY (ORCPT ); Wed, 7 Dec 2022 15:48:24 -0500 Received: from alexa-out-sd-01.qualcomm.com (alexa-out-sd-01.qualcomm.com [199.106.114.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D48862720; Wed, 7 Dec 2022 12:48:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1670446088; x=1701982088; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=8EEHPJSMhn3hc7LJEQnlAQmMjnTVkHU5n4um4Y9V7Uk=; b=rjf3rqY/lW2GUgAsjHs5MdLsz+gZUEUSoqbROT9LoYY//7Ew+lDPlTW0 gQOL76XgxEbEMBKd61f8G1vqh6V4ONS+0HbZdWJZr5+ftir2YEgGg8DEW A1XB2M/FADUdLPLckVGMe7ReyPGv56Fq99CEjJ9mQW4EkmhlgvqzyOFfa c=; Received: from unknown (HELO ironmsg-SD-alpha.qualcomm.com) ([10.53.140.30]) by alexa-out-sd-01.qualcomm.com with ESMTP; 07 Dec 2022 12:48:08 -0800 X-QCInternal: smtphost Received: from unknown (HELO nasanex01a.na.qualcomm.com) ([10.52.223.231]) by ironmsg-SD-alpha.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Dec 2022 12:48:08 -0800 Received: from asutoshd-linux1.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.36; Wed, 7 Dec 2022 12:48:07 -0800 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. 
Bottomley" , Krzysztof Kozlowski , Jinyoung Choi , Yoshihiro Shimoda , Keoseong Park , Kiwoong Kim , open list Subject: [PATCH v10 08/16] ufs: core: mcq: Allocate memory for mcq mode Date: Wed, 7 Dec 2022 12:46:19 -0800 Message-ID: <496dc1983f52d91b52f95a6c75d2f20d3c68c174.1670445698.git.quic_asutoshd@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org To read the bqueuedepth, the device descriptor is fetched in Single Doorbell Mode. This allocated memory may not be enough for MCQ mode because the number of tags supported in MCQ mode may be larger than in SDB mode. Hence, release the memory allocated in SDB mode and allocate memory for MCQ mode operation. Define the ufs hardware queue and Completion Queue Entry. Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Manivannan Sadhasivam Reviewed-by: Bart Van Assche --- drivers/ufs/core/ufs-mcq.c | 59 ++++++++++++++++++++++++++++++++++++++++-- drivers/ufs/core/ufshcd-priv.h | 1 + drivers/ufs/core/ufshcd.c | 48 +++++++++++++++++++++++++++++++--- include/ufs/ufshcd.h | 20 ++++++++++++++ include/ufs/ufshci.h | 22 ++++++++++++++++ 5 files changed, 145 insertions(+), 5 deletions(-) diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c index 2f680ff..c77bc54 100644 --- a/drivers/ufs/core/ufs-mcq.c +++ b/drivers/ufs/core/ufs-mcq.c @@ -149,14 +149,69 @@ static int ufshcd_mcq_config_nr_queues(struct ufs_hba *hba) return 0; } +int ufshcd_mcq_memory_alloc(struct ufs_hba *hba) +{ + struct ufs_hw_queue *hwq; + size_t utrdl_size, cqe_size; + int i; + + for (i = 0; i < hba->nr_hw_queues; i++) { + hwq = &hba->uhq[i]; + + utrdl_size = sizeof(struct utp_transfer_req_desc) * + hwq->max_entries; + hwq->sqe_base_addr = dmam_alloc_coherent(hba->dev, utrdl_size, + &hwq->sqe_dma_addr, + GFP_KERNEL); + if (!hwq->sqe_dma_addr) { + dev_err(hba->dev, "SQE allocation failed\n"); + return -ENOMEM; + } + + cqe_size = sizeof(struct cq_entry) * hwq->max_entries; + hwq->cqe_base_addr = dmam_alloc_coherent(hba->dev, cqe_size, + &hwq->cqe_dma_addr, + GFP_KERNEL); + if (!hwq->cqe_dma_addr) { + dev_err(hba->dev, "CQE allocation failed\n"); + return -ENOMEM; + } + } + + return 0; +} + + int ufshcd_mcq_init(struct ufs_hba *hba) { - int ret; + struct ufs_hw_queue *hwq; + int ret, i; ret = ufshcd_mcq_config_nr_queues(hba); if (ret) return ret; ret = ufshcd_vops_mcq_config_resource(hba); - return ret; + if (ret) + return ret; + + hba->uhq = devm_kzalloc(hba->dev, + hba->nr_hw_queues * sizeof(struct ufs_hw_queue), + GFP_KERNEL); + if (!hba->uhq) { + dev_err(hba->dev, "ufs hw queue memory allocation failed\n"); + return -ENOMEM; + } + + for (i = 0; i < hba->nr_hw_queues; i++) { + hwq = &hba->uhq[i]; + hwq->max_entries = hba->nutrs; + } + + /* The very first HW queue serves device commands */ + hba->dev_cmd_queue = &hba->uhq[0]; + /* Give dev_cmd_queue the minimal number of entries */ + hba->dev_cmd_queue->max_entries = MAX_DEV_CMD_ENTRIES; + + return 0; } diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index d9b2087..93ebfec 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -63,6 +63,7 @@ int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode, void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); int 
ufshcd_mcq_init(struct ufs_hba *hba); int ufshcd_mcq_decide_queue_depth(struct ufs_hba *hba); +int ufshcd_mcq_memory_alloc(struct ufs_hba *hba); #define SD_ASCII_STD true #define SD_RAW false diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 8218b2a..f2571eb 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -3740,6 +3740,14 @@ static int ufshcd_memory_alloc(struct ufs_hba *hba) } /* + * Skip utmrdl allocation; it may have been + * allocated during first pass and not released during + * MCQ memory allocation. + * See ufshcd_release_sdb_queue() and ufshcd_config_mcq() + */ + if (hba->utmrdl_base_addr) + goto skip_utmrdl; + /* * Allocate memory for UTP Task Management descriptors * UFSHCI requires 1024 byte alignment of UTMRD */ @@ -3755,6 +3763,7 @@ static int ufshcd_memory_alloc(struct ufs_hba *hba) goto out; } +skip_utmrdl: /* Allocate memory for local reference block */ hba->lrb = devm_kcalloc(hba->dev, hba->nutrs, sizeof(struct ufshcd_lrb), @@ -8221,6 +8230,22 @@ static int ufshcd_add_lus(struct ufs_hba *hba) return ret; } +/* SDB - Single Doorbell */ +static void ufshcd_release_sdb_queue(struct ufs_hba *hba, int nutrs) +{ + size_t ucdl_size, utrdl_size; + + ucdl_size = sizeof(struct utp_transfer_cmd_desc) * nutrs; + dmam_free_coherent(hba->dev, ucdl_size, hba->ucdl_base_addr, + hba->ucdl_dma_addr); + + utrdl_size = sizeof(struct utp_transfer_req_desc) * nutrs; + dmam_free_coherent(hba->dev, utrdl_size, hba->utrdl_base_addr, + hba->utrdl_dma_addr); + + devm_kfree(hba->dev, hba->lrb); +} + static int ufshcd_alloc_mcq(struct ufs_hba *hba) { int ret; @@ -8232,12 +8257,29 @@ static int ufshcd_alloc_mcq(struct ufs_hba *hba) hba->nutrs = ret; ret = ufshcd_mcq_init(hba); - if (ret) { - hba->nutrs = old_nutrs; - return ret; + if (ret) + goto err; + + /* + * Previously allocated memory for nutrs may not be enough in MCQ mode. + * Number of supported tags in MCQ mode may be larger than SDB mode. + */ + if (hba->nutrs != old_nutrs) { + ufshcd_release_sdb_queue(hba, old_nutrs); + ret = ufshcd_memory_alloc(hba); + if (ret) + goto err; + ufshcd_host_memory_configure(hba); } + ret = ufshcd_mcq_memory_alloc(hba); + if (ret) + goto err; + return 0; +err: + hba->nutrs = old_nutrs; + return ret; } /** diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 9d7829a..90461f43 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -865,6 +865,8 @@ enum ufshcd_res { * @mcq_sup: is mcq supported by UFSHC * @res: array of resource info of MCQ registers * @mcq_base: Multi circular queue registers base address + * @uhq: array of supported hardware queues + * @dev_cmd_queue: Queue for issuing device management commands */ struct ufs_hba { void __iomem *mmio_base; @@ -1020,6 +1022,24 @@ struct ufs_hba { bool mcq_sup; struct ufshcd_res_info res[RES_MAX]; void __iomem *mcq_base; + struct ufs_hw_queue *uhq; + struct ufs_hw_queue *dev_cmd_queue; +}; + +/** + * struct ufs_hw_queue - per hardware queue structure + * @sqe_base_addr: submission queue entry base address + * @sqe_dma_addr: submission queue dma address + * @cqe_base_addr: completion queue base address + * @cqe_dma_addr: completion queue dma address + * @max_entries: max number of slots in this hardware queue + */ +struct ufs_hw_queue { + void *sqe_base_addr; + dma_addr_t sqe_dma_addr; + struct cq_entry *cqe_base_addr; + dma_addr_t cqe_dma_addr; + u32 max_entries; }; /* Returns true if clocks can be gated. 
Otherwise false */ diff --git a/include/ufs/ufshci.h b/include/ufs/ufshci.h index 67fcebd..15d1ea2 100644 --- a/include/ufs/ufshci.h +++ b/include/ufs/ufshci.h @@ -486,6 +486,28 @@ struct utp_transfer_req_desc { __le16 prd_table_offset; }; +/* MCQ Completion Queue Entry */ +struct cq_entry { + /* DW 0-1 */ + __le64 command_desc_base_addr; + + /* DW 2 */ + __le16 response_upiu_length; + __le16 response_upiu_offset; + + /* DW 3 */ + __le16 prd_table_length; + __le16 prd_table_offset; + + /* DW 4 */ + __le32 status; + + /* DW 5-7 */ + __le32 reserved[3]; +}; + +static_assert(sizeof(struct cq_entry) == 32); + /* * UTMRD structure. */ From patchwork Wed Dec 7 20:46:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 631827 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B8BC9C63709 for ; Wed, 7 Dec 2022 20:49:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229814AbiLGUtg (ORCPT ); Wed, 7 Dec 2022 15:49:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229898AbiLGUtH (ORCPT ); Wed, 7 Dec 2022 15:49:07 -0500 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6DC4073F47; Wed, 7 Dec 2022 12:48:40 -0800 (PST) Received: from pps.filterd (m0279863.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2B7JRQUq009568; Wed, 7 Dec 2022 20:48:25 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=qccsNrb5j9fofD9FI295IKfJhufvTATYhOsqNDxKxjQ=; b=mu5mYLXneILzYIIJLTyM3A8vDcjyJVE/OGbPtvoJdDJP4aDuBVkid068lC6qorvAP05u RcYnmsqk0Rxr0P8EkoRqPf2+pxaQwFWA3zTembWsl40lRPfEwBp6sljN+0snQp5HQCdC 8xJS87eylGvKOq0kgL6LEZzbUOvOYbMsuNb3YaufR4+pJmChSSfsPRY/8nnM2WzPLI76 FilpdEPdqw/tFfFkM5CC8ywo4WMADp5q08FhwLQ38gohJNGdrdIc3FXljLw6E3bBl3VY zB31Kj2qr/y8LHn9Z6M14JtQI/cHxV9sjctJiMnmf7m4AdBaMh1Y5sD4QVrJpHScI6Sr 3A== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3mavtf0xfj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 07 Dec 2022 20:48:24 +0000 Received: from nasanex01a.na.qualcomm.com (corens_vlan604_snip.qualcomm.com [10.53.140.1]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 2B7KmOOV001848 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 7 Dec 2022 20:48:24 GMT Received: from asutoshd-linux1.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.36; Wed, 7 Dec 2022 12:48:23 -0800 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. 
Bottomley" , "Andy Gross" , Bjorn Andersson , "Konrad Dybcio" , Krzysztof Kozlowski , Arthur Simchaev , Jinyoung Choi , Keoseong Park , Yoshihiro Shimoda , Kiwoong Kim , "open list" Subject: [PATCH v10 09/16] ufs: core: mcq: Configure operation and runtime interface Date: Wed, 7 Dec 2022 12:46:20 -0800 Message-ID: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: o6QRZ8JbZxYcliy8z7Er_92YF7ffGIeB X-Proofpoint-ORIG-GUID: o6QRZ8JbZxYcliy8z7Er_92YF7ffGIeB X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-12-07_09,2022-12-07_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 lowpriorityscore=0 suspectscore=0 mlxscore=0 mlxlogscore=999 malwarescore=0 clxscore=1015 adultscore=0 spamscore=0 phishscore=0 bulkscore=0 priorityscore=1501 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2212070174 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Runtime and operation registers are defined per Submission and Completion queue. The location of these registers is not defined in the spec; meaning the offsets and stride may vary for different HC vendors. Establish the stride, base address and doorbell address offsets from vendor host driver and program it. Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Manivannan Sadhasivam Reviewed-by: Bart Van Assche --- drivers/ufs/core/ufs-mcq.c | 102 +++++++++++++++++++++++++++++++++++++++++ drivers/ufs/core/ufshcd-priv.h | 11 +++++ drivers/ufs/core/ufshcd.c | 27 +++++++++++ drivers/ufs/host/ufs-qcom.c | 24 ++++++++++ include/ufs/ufshcd.h | 52 +++++++++++++++++++++ include/ufs/ufshci.h | 31 +++++++++++++ 6 files changed, 247 insertions(+) diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c index c77bc54..496e2b6 100644 --- a/drivers/ufs/core/ufs-mcq.c +++ b/drivers/ufs/core/ufs-mcq.c @@ -18,9 +18,13 @@ #define UFS_MCQ_MIN_READ_QUEUES 0 #define UFS_MCQ_NUM_DEV_CMD_QUEUES 1 #define UFS_MCQ_MIN_POLL_QUEUES 0 +#define QUEUE_EN_OFFSET 31 +#define QUEUE_ID_OFFSET 16 #define MAX_DEV_CMD_ENTRIES 2 #define MCQ_CFG_MAC_MASK GENMASK(16, 8) +#define MCQ_QCFG_SIZE 0x40 +#define MCQ_ENTRY_SIZE_IN_DWORD 8 static int rw_queue_count_set(const char *val, const struct kernel_param *kp) { @@ -71,6 +75,24 @@ MODULE_PARM_DESC(poll_queues, "Number of poll queues used for r/w. Default value is 1"); /** + * ufshcd_mcq_config_mac - Set the #Max Activ Cmds. + * @hba - per adapter instance + * @max_active_cmds - maximum # of active commands to the device at any time. + * + * The controller won't send more than the max_active_cmds to the device at + * any time. 
+ */ +void ufshcd_mcq_config_mac(struct ufs_hba *hba, u32 max_active_cmds) +{ + u32 val; + + val = ufshcd_readl(hba, REG_UFS_MCQ_CFG); + val &= ~MCQ_CFG_MAC_MASK; + val |= FIELD_PREP(MCQ_CFG_MAC_MASK, max_active_cmds); + ufshcd_writel(hba, val, REG_UFS_MCQ_CFG); +} + +/** * ufshcd_mcq_decide_queue_depth - decide the queue depth * @hba - per adapter instance * @@ -182,6 +204,80 @@ int ufshcd_mcq_memory_alloc(struct ufs_hba *hba) } +/* Operation and runtime registers configuration */ +#define MCQ_CFG_n(r, i) ((r) + MCQ_QCFG_SIZE * (i)) +#define MCQ_OPR_OFFSET_n(p, i) \ + (hba->mcq_opr[(p)].offset + hba->mcq_opr[(p)].stride * (i)) + +static void __iomem *mcq_opr_base(struct ufs_hba *hba, + enum ufshcd_mcq_opr n, int i) +{ + struct ufshcd_mcq_opr_info_t *opr = &hba->mcq_opr[n]; + + return opr->base + opr->stride * i; +} + +void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba) +{ + struct ufs_hw_queue *hwq; + u16 qsize; + int i; + + for (i = 0; i < hba->nr_hw_queues; i++) { + hwq = &hba->uhq[i]; + hwq->id = i; + qsize = hwq->max_entries * MCQ_ENTRY_SIZE_IN_DWORD - 1; + + /* Submission Queue Lower Base Address */ + ufsmcq_writelx(hba, lower_32_bits(hwq->sqe_dma_addr), + MCQ_CFG_n(REG_SQLBA, i)); + /* Submission Queue Upper Base Address */ + ufsmcq_writelx(hba, upper_32_bits(hwq->sqe_dma_addr), + MCQ_CFG_n(REG_SQUBA, i)); + /* Submission Queue Doorbell Address Offset */ + ufsmcq_writelx(hba, MCQ_OPR_OFFSET_n(OPR_SQD, i), + MCQ_CFG_n(REG_SQDAO, i)); + /* Submission Queue Interrupt Status Address Offset */ + ufsmcq_writelx(hba, MCQ_OPR_OFFSET_n(OPR_SQIS, i), + MCQ_CFG_n(REG_SQISAO, i)); + + /* Completion Queue Lower Base Address */ + ufsmcq_writelx(hba, lower_32_bits(hwq->cqe_dma_addr), + MCQ_CFG_n(REG_CQLBA, i)); + /* Completion Queue Upper Base Address */ + ufsmcq_writelx(hba, upper_32_bits(hwq->cqe_dma_addr), + MCQ_CFG_n(REG_CQUBA, i)); + /* Completion Queue Doorbell Address Offset */ + ufsmcq_writelx(hba, MCQ_OPR_OFFSET_n(OPR_CQD, i), + MCQ_CFG_n(REG_CQDAO, i)); + /* Completion Queue Interrupt Status Address Offset */ + ufsmcq_writelx(hba, MCQ_OPR_OFFSET_n(OPR_CQIS, i), + MCQ_CFG_n(REG_CQISAO, i)); + + /* Save the base addresses for quicker access */ + hwq->mcq_sq_head = mcq_opr_base(hba, OPR_SQD, i) + REG_SQHP; + hwq->mcq_sq_tail = mcq_opr_base(hba, OPR_SQD, i) + REG_SQTP; + hwq->mcq_cq_head = mcq_opr_base(hba, OPR_CQD, i) + REG_CQHP; + hwq->mcq_cq_tail = mcq_opr_base(hba, OPR_CQD, i) + REG_CQTP; + + /* Enable Tail Entry Push Status interrupt only for non-poll queues */ + if (i < hba->nr_hw_queues - hba->nr_queues[HCTX_TYPE_POLL]) + writel(1, mcq_opr_base(hba, OPR_CQIS, i) + REG_CQIE); + + /* Completion Queue Enable|Size to Completion Queue Attribute */ + ufsmcq_writel(hba, (1 << QUEUE_EN_OFFSET) | qsize, + MCQ_CFG_n(REG_CQATTR, i)); + + /* + * Submission Qeueue Enable|Size|Completion Queue ID to + * Submission Queue Attribute + */ + ufsmcq_writel(hba, (1 << QUEUE_EN_OFFSET) | qsize | + (i << QUEUE_ID_OFFSET), + MCQ_CFG_n(REG_SQATTR, i)); + } +} + int ufshcd_mcq_init(struct ufs_hba *hba) { struct ufs_hw_queue *hwq; @@ -195,6 +291,12 @@ int ufshcd_mcq_init(struct ufs_hba *hba) if (ret) return ret; + ret = ufshcd_mcq_vops_op_runtime_config(hba); + if (ret) { + dev_err(hba->dev, "Operation runtime config failed, ret=%d\n", + ret); + return ret; + } hba->uhq = devm_kzalloc(hba->dev, hba->nr_hw_queues * sizeof(struct ufs_hw_queue), GFP_KERNEL); diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index 93ebfec..0e35cfa 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ 
b/drivers/ufs/core/ufshcd-priv.h @@ -64,6 +64,9 @@ void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); int ufshcd_mcq_init(struct ufs_hba *hba); int ufshcd_mcq_decide_queue_depth(struct ufs_hba *hba); int ufshcd_mcq_memory_alloc(struct ufs_hba *hba); +void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba); +void ufshcd_mcq_config_mac(struct ufs_hba *hba, u32 max_active_cmds); +void ufshcd_mcq_select_mcq_mode(struct ufs_hba *hba); #define SD_ASCII_STD true #define SD_RAW false @@ -245,6 +248,14 @@ static inline int ufshcd_mcq_vops_get_hba_mac(struct ufs_hba *hba) return -EOPNOTSUPP; } +static inline int ufshcd_mcq_vops_op_runtime_config(struct ufs_hba *hba) +{ + if (hba->vops && hba->vops->op_runtime_config) + return hba->vops->op_runtime_config(hba); + + return -EOPNOTSUPP; +} + extern const struct ufs_pm_lvl_states ufs_pm_lvl_states[]; /** diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index f2571eb..5d14d3f 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -43,6 +43,12 @@ #define UFSHCD_ENABLE_INTRS (UTP_TRANSFER_REQ_COMPL |\ UTP_TASK_REQ_COMPL |\ UFSHCD_ERROR_MASK) + +#define UFSHCD_ENABLE_MCQ_INTRS (UTP_TASK_REQ_COMPL |\ + UFSHCD_ERROR_MASK |\ + MCQ_CQ_EVENT_STATUS) + + /* UIC command timeout, unit: ms */ #define UIC_CMD_TIMEOUT 500 @@ -8282,6 +8288,20 @@ static int ufshcd_alloc_mcq(struct ufs_hba *hba) return ret; } +static void ufshcd_config_mcq(struct ufs_hba *hba) +{ + ufshcd_enable_intr(hba, UFSHCD_ENABLE_MCQ_INTRS); + ufshcd_mcq_make_queues_operational(hba); + ufshcd_mcq_config_mac(hba, hba->nutrs); + + hba->host->can_queue = hba->nutrs - UFSHCD_NUM_RESERVED; + hba->reserved_slot = hba->nutrs - UFSHCD_NUM_RESERVED; + dev_info(hba->dev, "MCQ configured, nr_queues=%d, io_queues=%d, read_queue=%d, poll_queues=%d, queue_depth=%d\n", + hba->nr_hw_queues, hba->nr_queues[HCTX_TYPE_DEFAULT], + hba->nr_queues[HCTX_TYPE_READ], hba->nr_queues[HCTX_TYPE_POLL], + hba->nutrs); +} + /** * ufshcd_probe_hba - probe hba to detect device and initialize it * @hba: per-adapter instance @@ -8311,6 +8331,10 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params) /* UniPro link is active now */ ufshcd_set_link_active(hba); + /* Reconfigure MCQ upon reset */ + if (is_mcq_enabled(hba) && !init_dev_params) + ufshcd_config_mcq(hba); + /* Verify device initialization by sending NOP OUT UPIU */ ret = ufshcd_verify_dev_init(hba); if (ret) @@ -8343,6 +8367,9 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params) dev_err(hba->dev, "scsi_add_host failed\n"); goto out; } + /* MCQ may be disabled if ufshcd_alloc_mcq() fails */ + if (use_mcq_mode) + ufshcd_config_mcq(hba); } } diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c index a52db10..61bb1c8 100644 --- a/drivers/ufs/host/ufs-qcom.c +++ b/drivers/ufs/host/ufs-qcom.c @@ -1524,6 +1524,29 @@ static int ufs_qcom_mcq_config_resource(struct ufs_hba *hba) return ret; } +static int ufs_qcom_op_runtime_config(struct ufs_hba *hba) +{ + struct ufshcd_res_info *mem_res, *sqdao_res; + struct ufshcd_mcq_opr_info_t *opr; + int i; + + mem_res = &hba->res[RES_UFS]; + sqdao_res = &hba->res[RES_MCQ_SQD]; + + if (!mem_res->base || !sqdao_res->base) + return -EINVAL; + + for (i = 0; i < OPR_MAX; i++) { + opr = &hba->mcq_opr[i]; + opr->offset = sqdao_res->resource->start - + mem_res->resource->start + 0x40 * i; + opr->stride = 0x100; + opr->base = sqdao_res->base + 0x40 * i; + } + + return 0; +} + static int ufs_qcom_get_hba_mac(struct ufs_hba *hba) { /* Qualcomm 
HC supports up to 64 */ @@ -1555,6 +1578,7 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = { .program_key = ufs_qcom_ice_program_key, .mcq_config_resource = ufs_qcom_mcq_config_resource, .get_hba_mac = ufs_qcom_get_hba_mac, + .op_runtime_config = ufs_qcom_op_runtime_config, }; /** diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 90461f43..ac46d36 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -299,6 +299,7 @@ struct ufs_pwr_mode_info { * @event_notify: called to notify important events * @mcq_config_resource: called to configure MCQ platform resources * @get_hba_mac: called to get vendor specific mac value, mandatory for mcq mode + * @op_runtime_config: called to config Operation and runtime regs Pointers */ struct ufs_hba_variant_ops { const char *name; @@ -339,6 +340,7 @@ struct ufs_hba_variant_ops { enum ufs_event_type evt, void *data); int (*mcq_config_resource)(struct ufs_hba *hba); int (*get_hba_mac)(struct ufs_hba *hba); + int (*op_runtime_config)(struct ufs_hba *hba); }; /* clock gating state */ @@ -752,6 +754,27 @@ enum ufshcd_res { }; /** + * struct ufshcd_mcq_opr_info_t - Operation and Runtime registers + * + * @offset: Doorbell Address Offset + * @stride: Steps proportional to queue [0...31] + * @base: base address + */ +struct ufshcd_mcq_opr_info_t { + unsigned long offset; + unsigned long stride; + void __iomem *base; +}; + +enum ufshcd_mcq_opr { + OPR_SQD, + OPR_SQIS, + OPR_CQD, + OPR_CQIS, + OPR_MAX, +}; + +/** * struct ufs_hba - per adapter private structure * @mmio_base: UFSHCI base register address * @ucdl_base_addr: UFS Command Descriptor base address @@ -863,6 +886,7 @@ enum ufshcd_res { * ufshcd_resume_complete() * @ext_iid_sup: is EXT_IID is supported by UFSHC * @mcq_sup: is mcq supported by UFSHC + * @mcq_enabled: is mcq ready to accept requests * @res: array of resource info of MCQ registers * @mcq_base: Multi circular queue registers base address * @uhq: array of supported hardware queues @@ -1020,28 +1044,46 @@ struct ufs_hba { bool complete_put; bool ext_iid_sup; bool mcq_sup; + bool mcq_enabled; struct ufshcd_res_info res[RES_MAX]; void __iomem *mcq_base; struct ufs_hw_queue *uhq; struct ufs_hw_queue *dev_cmd_queue; + struct ufshcd_mcq_opr_info_t mcq_opr[OPR_MAX]; }; /** * struct ufs_hw_queue - per hardware queue structure + * @mcq_sq_head: base address of submission queue head pointer + * @mcq_sq_tail: base address of submission queue tail pointer + * @mcq_cq_head: base address of completion queue head pointer + * @mcq_cq_tail: base address of completion queue tail pointer * @sqe_base_addr: submission queue entry base address * @sqe_dma_addr: submission queue dma address * @cqe_base_addr: completion queue base address * @cqe_dma_addr: completion queue dma address * @max_entries: max number of slots in this hardware queue + * @id: hardware queue ID */ struct ufs_hw_queue { + void __iomem *mcq_sq_head; + void __iomem *mcq_sq_tail; + void __iomem *mcq_cq_head; + void __iomem *mcq_cq_tail; + void *sqe_base_addr; dma_addr_t sqe_dma_addr; struct cq_entry *cqe_base_addr; dma_addr_t cqe_dma_addr; u32 max_entries; + u32 id; }; +static inline bool is_mcq_enabled(struct ufs_hba *hba) +{ + return hba->mcq_enabled; +} + /* Returns true if clocks can be gated. 
Otherwise false */ static inline bool ufshcd_is_clkgating_allowed(struct ufs_hba *hba) { @@ -1097,6 +1139,16 @@ static inline bool ufshcd_enable_wb_if_scaling_up(struct ufs_hba *hba) return hba->caps & UFSHCD_CAP_WB_WITH_CLK_SCALING; } +#define ufsmcq_writel(hba, val, reg) \ + writel((val), (hba)->mcq_base + (reg)) +#define ufsmcq_readl(hba, reg) \ + readl((hba)->mcq_base + (reg)) + +#define ufsmcq_writelx(hba, val, reg) \ + writel_relaxed((val), (hba)->mcq_base + (reg)) +#define ufsmcq_readlx(hba, reg) \ + readl_relaxed((hba)->mcq_base + (reg)) + #define ufshcd_writel(hba, val, reg) \ writel((val), (hba)->mmio_base + (reg)) #define ufshcd_readl(hba, reg) \ diff --git a/include/ufs/ufshci.h b/include/ufs/ufshci.h index 15d1ea2..8784b88 100644 --- a/include/ufs/ufshci.h +++ b/include/ufs/ufshci.h @@ -57,6 +57,7 @@ enum { REG_UFS_CCAP = 0x100, REG_UFS_CRYPTOCAP = 0x104, + REG_UFS_MEM_CFG = 0x300, REG_UFS_MCQ_CFG = 0x380, UFSHCI_CRYPTO_REG_SPACE_SIZE = 0x400, }; @@ -78,6 +79,35 @@ enum { MASK_EXT_IID_SUPPORT = 0x00000400, }; +enum { + REG_SQATTR = 0x0, + REG_SQLBA = 0x4, + REG_SQUBA = 0x8, + REG_SQDAO = 0xC, + REG_SQISAO = 0x10, + + REG_CQATTR = 0x20, + REG_CQLBA = 0x24, + REG_CQUBA = 0x28, + REG_CQDAO = 0x2C, + REG_CQISAO = 0x30, +}; + +enum { + REG_SQHP = 0x0, + REG_SQTP = 0x4, +}; + +enum { + REG_CQHP = 0x0, + REG_CQTP = 0x4, +}; + +enum { + REG_CQIS = 0x0, + REG_CQIE = 0x4, +}; + #define UFS_MASK(mask, offset) ((mask) << (offset)) /* UFS Version 08h */ @@ -134,6 +164,7 @@ static inline u32 ufshci_version(u32 major, u32 minor) #define CONTROLLER_FATAL_ERROR 0x10000 #define SYSTEM_BUS_FATAL_ERROR 0x20000 #define CRYPTO_ENGINE_FATAL_ERROR 0x40000 +#define MCQ_CQ_EVENT_STATUS 0x100000 #define UFSHCD_UIC_HIBERN8_MASK (UIC_HIBERNATE_ENTER |\ UIC_HIBERNATE_EXIT) From patchwork Wed Dec 7 20:46:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 633213 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7840CC63708 for ; Wed, 7 Dec 2022 20:49:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229938AbiLGUtd (ORCPT ); Wed, 7 Dec 2022 15:49:33 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60880 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229928AbiLGUtE (ORCPT ); Wed, 7 Dec 2022 15:49:04 -0500 Received: from alexa-out-sd-02.qualcomm.com (alexa-out-sd-02.qualcomm.com [199.106.114.39]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C801D6E55F; Wed, 7 Dec 2022 12:48:29 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1670446109; x=1701982109; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=v/zRX5brnhFn2VsvDnEdtw/cPH4pi8o8AfKrkRUJDVY=; b=eidgwvQ1AcPaaFCdG0qRnx+I++rmqJDjgeKg3GlT6pCDMMdPA3UvLECj rYrGF6OuaMRrH+HNm40jIPlugV+EWm3UDVaVJp57Zdwr6BTxhD/JCZIuD 4rPEk4C2WIwoSbPCeDlYh78+g9WdLFhq1n9GMwLVTguMxd+9BxaXJADy5 g=; Received: from unknown (HELO ironmsg04-sd.qualcomm.com) ([10.53.140.144]) by alexa-out-sd-02.qualcomm.com with ESMTP; 07 Dec 2022 12:48:29 -0800 X-QCInternal: smtphost Received: from unknown (HELO nasanex01a.na.qualcomm.com) ([10.52.223.231]) by ironmsg04-sd.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 
Dec 2022 12:48:29 -0800 Received: from asutoshd-linux1.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.36; Wed, 7 Dec 2022 12:48:29 -0800 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. Bottomley" , Jinyoung Choi , open list Subject: [PATCH v10 10/16] ufs: core: mcq: Use shared tags for MCQ mode Date: Wed, 7 Dec 2022 12:46:21 -0800 Message-ID: <96084be6bc6259fb62ab61cfbc31159bba235bba.1670445699.git.quic_asutoshd@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Enable shared tags for MCQ. For UFS, this should not have a huge performance impact. It however simplifies the MCQ implementation and reuses most of the existing code in the issue and completion path. Also add multiple queue mapping to map_queue(). Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufs-mcq.c | 2 ++ drivers/ufs/core/ufshcd.c | 28 ++++++++++++++++------------ 2 files changed, 18 insertions(+), 12 deletions(-) diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c index 496e2b6..8bf222f 100644 --- a/drivers/ufs/core/ufs-mcq.c +++ b/drivers/ufs/core/ufs-mcq.c @@ -280,6 +280,7 @@ void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba) int ufshcd_mcq_init(struct ufs_hba *hba) { + struct Scsi_Host *host = hba->host; struct ufs_hw_queue *hwq; int ret, i; @@ -315,5 +316,6 @@ int ufshcd_mcq_init(struct ufs_hba *hba) /* Give dev_cmd_queue the minimal number of entries */ hba->dev_cmd_queue->max_entries = MAX_DEV_CMD_ENTRIES; + host->host_tagset = 1; return 0; } diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 5d14d3f..21cfa37 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -2763,24 +2763,28 @@ static inline bool is_device_wlun(struct scsi_device *sdev) */ static void ufshcd_map_queues(struct Scsi_Host *shost) { - int i; + struct ufs_hba *hba = shost_priv(shost); + int i, queue_offset = 0; + + if (!is_mcq_supported(hba)) { + hba->nr_queues[HCTX_TYPE_DEFAULT] = 1; + hba->nr_queues[HCTX_TYPE_READ] = 0; + hba->nr_queues[HCTX_TYPE_POLL] = 1; + hba->nr_hw_queues = 1; + } for (i = 0; i < shost->nr_maps; i++) { struct blk_mq_queue_map *map = &shost->tag_set.map[i]; - switch (i) { - case HCTX_TYPE_DEFAULT: - case HCTX_TYPE_POLL: - map->nr_queues = 1; - break; - case HCTX_TYPE_READ: - map->nr_queues = 0; + map->nr_queues = hba->nr_queues[i]; + if (!map->nr_queues) continue; - default: - WARN_ON_ONCE(true); - } - map->queue_offset = 0; + map->queue_offset = queue_offset; + if (i == HCTX_TYPE_POLL && !is_mcq_supported(hba)) + map->queue_offset = 0; + blk_mq_map_queues(map); + queue_offset += map->nr_queues; } } From patchwork Wed Dec 7 20:46:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 631826 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3361CC63708 for ; Wed, 7 Dec 2022 
20:49:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230001AbiLGUtm (ORCPT ); Wed, 7 Dec 2022 15:49:42 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60618 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229995AbiLGUtM (ORCPT ); Wed, 7 Dec 2022 15:49:12 -0500 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4A6FB7E431; Wed, 7 Dec 2022 12:48:57 -0800 (PST) Received: from pps.filterd (m0279866.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2B7KeBIH018632; Wed, 7 Dec 2022 20:48:45 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=TtypK3xzSz0AV9H3azyDMJxhM0hKsRMwVxsRSfIsESE=; b=a6OrPpBoOXORv+K1aUJx2ejyhzELMv70BzCJ82pLBzlaZumAHmfgGgbFqQIYkvrm84hR Hp5dCVph6X5IXo62YwJ7TIpYl2FIpa3HDrB5MWpZTiy1M6Vhdscg3Q9+NTu0Zm5smzJP ofbMpYh2OjBijMaU2jCDbPCAMgAD3SjGgORe9fVRMiukySLIUCItePy565rPHoGpkYn1 NWgpZcherfRsbEJx9/xx3N8z+QuyPybREiUsYtUCdkHYTF5+oGbkTlvfd/2Jo0A9M777 WhyBpv7IJPqpXK07rYMuyBwnRcjl8T3F4Q8aJyD+ou7bW1ebSLfWT+WtckLnkmixhziJ xA== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3maj5w243t-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 07 Dec 2022 20:48:45 +0000 Received: from nasanex01a.na.qualcomm.com ([10.52.223.231]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 2B7KmiY2014074 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 7 Dec 2022 20:48:44 GMT Received: from asutoshd-linux1.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.36; Wed, 7 Dec 2022 12:48:43 -0800 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. 
Bottomley" , Arthur Simchaev , Krzysztof Kozlowski , Jinyoung Choi , Keoseong Park , Yoshihiro Shimoda , open list Subject: [PATCH v10 11/16] ufs: core: Prepare ufshcd_send_command for mcq Date: Wed, 7 Dec 2022 12:46:22 -0800 Message-ID: <8f8c75294df2a216960b57a837abfb420bc1b171.1670445699.git.quic_asutoshd@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: Q1XoBE-AptrN0IEBvn1RJKKI2J5ySouk X-Proofpoint-ORIG-GUID: Q1XoBE-AptrN0IEBvn1RJKKI2J5ySouk X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-12-07_09,2022-12-07_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 mlxlogscore=999 spamscore=0 lowpriorityscore=0 impostorscore=0 suspectscore=0 phishscore=0 priorityscore=1501 adultscore=0 malwarescore=0 clxscore=1015 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2212070174 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Add support to send commands using multiple submission queues in MCQ mode. Modify the functions that use ufshcd_send_command(). Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufs-mcq.c | 1 + drivers/ufs/core/ufshcd-priv.h | 10 ++++++++++ drivers/ufs/core/ufshcd.c | 36 ++++++++++++++++++++++++++---------- include/ufs/ufshcd.h | 5 +++++ 4 files changed, 42 insertions(+), 10 deletions(-) diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c index 8bf222f..6815825 100644 --- a/drivers/ufs/core/ufs-mcq.c +++ b/drivers/ufs/core/ufs-mcq.c @@ -309,6 +309,7 @@ int ufshcd_mcq_init(struct ufs_hba *hba) for (i = 0; i < hba->nr_hw_queues; i++) { hwq = &hba->uhq[i]; hwq->max_entries = hba->nutrs; + spin_lock_init(&hwq->sq_lock); } /* The very first HW queue serves device commands */ diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index 0e35cfa..d295be4 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -332,4 +332,14 @@ static inline bool ufs_is_valid_unit_desc_lun(struct ufs_dev_info *dev_info, u8 return lun == UFS_UPIU_RPMB_WLUN || (lun < dev_info->max_lu_supported); } +static inline void ufshcd_inc_sq_tail(struct ufs_hw_queue *q) +{ + u32 mask = q->max_entries - 1; + u32 val; + + q->sq_tail_slot = (q->sq_tail_slot + 1) & mask; + val = q->sq_tail_slot * sizeof(struct utp_transfer_req_desc); + writel(val, q->mcq_sq_tail); +} + #endif /* _UFSHCD_PRIV_H_ */ diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 21cfa37..64fc95e 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -2182,9 +2182,11 @@ static void ufshcd_update_monitor(struct ufs_hba *hba, const struct ufshcd_lrb * * ufshcd_send_command - Send SCSI or device management commands * @hba: per adapter instance * @task_tag: Task tag of the command + * @hwq: pointer to hardware queue instance */ static inline -void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag) +void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag, + struct ufs_hw_queue *hwq) { struct ufshcd_lrb *lrbp = 
&hba->lrb[task_tag]; unsigned long flags; @@ -2198,12 +2200,24 @@ void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag) if (unlikely(ufshcd_should_inform_monitor(hba, lrbp))) ufshcd_start_monitor(hba, lrbp); - spin_lock_irqsave(&hba->outstanding_lock, flags); - if (hba->vops && hba->vops->setup_xfer_req) - hba->vops->setup_xfer_req(hba, task_tag, !!lrbp->cmd); - __set_bit(task_tag, &hba->outstanding_reqs); - ufshcd_writel(hba, 1 << task_tag, REG_UTP_TRANSFER_REQ_DOOR_BELL); - spin_unlock_irqrestore(&hba->outstanding_lock, flags); + if (is_mcq_enabled(hba)) { + int utrd_size = sizeof(struct utp_transfer_req_desc); + + spin_lock(&hwq->sq_lock); + memcpy(hwq->sqe_base_addr + (hwq->sq_tail_slot * utrd_size), + lrbp->utr_descriptor_ptr, utrd_size); + ufshcd_inc_sq_tail(hwq); + spin_unlock(&hwq->sq_lock); + } else { + spin_lock_irqsave(&hba->outstanding_lock, flags); + if (hba->vops && hba->vops->setup_xfer_req) + hba->vops->setup_xfer_req(hba, lrbp->task_tag, + !!lrbp->cmd); + __set_bit(lrbp->task_tag, &hba->outstanding_reqs); + ufshcd_writel(hba, 1 << lrbp->task_tag, + REG_UTP_TRANSFER_REQ_DOOR_BELL); + spin_unlock_irqrestore(&hba->outstanding_lock, flags); + } } /** @@ -2822,6 +2836,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) int tag = scsi_cmd_to_rq(cmd)->tag; struct ufshcd_lrb *lrbp; int err = 0; + struct ufs_hw_queue *hwq = NULL; WARN_ONCE(tag < 0 || tag >= hba->nutrs, "Invalid tag %d\n", tag); @@ -2906,7 +2921,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) goto out; } - ufshcd_send_command(hba, tag); + ufshcd_send_command(hba, tag, hwq); out: rcu_read_unlock(); @@ -3101,10 +3116,11 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba, goto out; hba->dev_cmd.complete = &wait; + hba->dev_cmd.cqe = NULL; ufshcd_add_query_upiu_trace(hba, UFS_QUERY_SEND, lrbp->ucd_req_ptr); - ufshcd_send_command(hba, tag); + ufshcd_send_command(hba, tag, hba->dev_cmd_queue); err = ufshcd_wait_for_dev_cmd(hba, lrbp, timeout); ufshcd_add_query_upiu_trace(hba, err ? UFS_QUERY_ERR : UFS_QUERY_COMP, (struct utp_upiu_req *)lrbp->ucd_rsp_ptr); @@ -6952,7 +6968,7 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba, ufshcd_add_query_upiu_trace(hba, UFS_QUERY_SEND, lrbp->ucd_req_ptr); - ufshcd_send_command(hba, tag); + ufshcd_send_command(hba, tag, hba->dev_cmd_queue); /* * ignore the returning value here - ufshcd_check_query_response is * bound to fail since dev_cmd.query and dev_cmd.type were left empty. 
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index ac46d36..ae20697 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -222,6 +222,7 @@ struct ufs_dev_cmd { struct mutex lock; struct completion *complete; struct ufs_query query; + struct cq_entry *cqe; }; /** @@ -1064,6 +1065,8 @@ struct ufs_hba { * @cqe_dma_addr: completion queue dma address * @max_entries: max number of slots in this hardware queue * @id: hardware queue ID + * @sq_tp_slot: current slot to which SQ tail pointer is pointing + * @sq_lock: serialize submission queue access */ struct ufs_hw_queue { void __iomem *mcq_sq_head; @@ -1077,6 +1080,8 @@ struct ufs_hw_queue { dma_addr_t cqe_dma_addr; u32 max_entries; u32 id; + u32 sq_tail_slot; + spinlock_t sq_lock; }; static inline bool is_mcq_enabled(struct ufs_hba *hba) From patchwork Wed Dec 7 20:46:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 633212 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 77FBBC63705 for ; Wed, 7 Dec 2022 20:49:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230093AbiLGUtk (ORCPT ); Wed, 7 Dec 2022 15:49:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33208 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229908AbiLGUtK (ORCPT ); Wed, 7 Dec 2022 15:49:10 -0500 Received: from alexa-out-sd-02.qualcomm.com (alexa-out-sd-02.qualcomm.com [199.106.114.39]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 63E497E425; Wed, 7 Dec 2022 12:48:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1670446135; x=1701982135; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=yizNSbKBP5y6dEAI547lFI/knj727aVkgCZ/6hf5lM0=; b=jDAgDqzQZqcLv/UGGNBCQWtAGHydvGQxB5GqV57lV0i7INcIRCYTCP/+ zyZBzjnfPAtcgqzh4n8qJzPA7xM/1BEbdTcSaOFJwDJfS9o61NBQ+hdsP GuJJ1f68opDgqZfX7TTbJy1xaatdXK8g3TR5xrU/JxMDlBwNJO6u1qrOs 0=; Received: from unknown (HELO ironmsg-SD-alpha.qualcomm.com) ([10.53.140.30]) by alexa-out-sd-02.qualcomm.com with ESMTP; 07 Dec 2022 12:48:54 -0800 X-QCInternal: smtphost Received: from unknown (HELO nasanex01a.na.qualcomm.com) ([10.52.223.231]) by ironmsg-SD-alpha.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Dec 2022 12:48:54 -0800 Received: from asutoshd-linux1.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.36; Wed, 7 Dec 2022 12:48:54 -0800 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. 
Bottomley" , Arthur Simchaev , Jinyoung Choi , Krzysztof Kozlowski , open list Subject: [PATCH v10 12/16] ufs: core: mcq: Find hardware queue to queue request Date: Wed, 7 Dec 2022 12:46:23 -0800 Message-ID: <717b85383e037daddc5ad231eb288d35860afcab.1670445699.git.quic_asutoshd@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Adds support to find the hardware queue on which the request would be queued. Since the very first queue is to serve device commands, an offset of 1 is added to the index of the hardware queue. Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufs-mcq.c | 19 +++++++++++++++++++ drivers/ufs/core/ufshcd-priv.h | 3 +++ drivers/ufs/core/ufshcd.c | 3 +++ 3 files changed, 25 insertions(+) diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c index 6815825..be84bce 100644 --- a/drivers/ufs/core/ufs-mcq.c +++ b/drivers/ufs/core/ufs-mcq.c @@ -93,6 +93,25 @@ void ufshcd_mcq_config_mac(struct ufs_hba *hba, u32 max_active_cmds) } /** + * ufshcd_mcq_req_to_hwq - find the hardware queue on which the + * request would be issued. + * @hba - per adapter instance + * @req - pointer to the request to be issued + * + * Returns the hardware queue instance on which the request would + * be queued. + */ +struct ufs_hw_queue *ufshcd_mcq_req_to_hwq(struct ufs_hba *hba, + struct request *req) +{ + u32 utag = blk_mq_unique_tag(req); + u32 hwq = blk_mq_unique_tag_to_hwq(utag); + + /* uhq[0] is used to serve device commands */ + return &hba->uhq[hwq + UFSHCD_MCQ_IO_QUEUE_OFFSET]; +} + +/** * ufshcd_mcq_decide_queue_depth - decide the queue depth * @hba - per adapter instance * diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index d295be4..a07f85e 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -67,7 +67,10 @@ int ufshcd_mcq_memory_alloc(struct ufs_hba *hba); void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba); void ufshcd_mcq_config_mac(struct ufs_hba *hba, u32 max_active_cmds); void ufshcd_mcq_select_mcq_mode(struct ufs_hba *hba); +struct ufs_hw_queue *ufshcd_mcq_req_to_hwq(struct ufs_hba *hba, + struct request *req); +#define UFSHCD_MCQ_IO_QUEUE_OFFSET 1 #define SD_ASCII_STD true #define SD_RAW false int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index, diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 64fc95e..a7e613d 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -2921,6 +2921,9 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) goto out; } + if (is_mcq_enabled(hba)) + hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); + ufshcd_send_command(hba, tag, hwq); out: From patchwork Wed Dec 7 20:46:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 633211 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DEDB2C63705 for ; Wed, 7 Dec 2022 20:50:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) 
by vger.kernel.org via listexpand id S229967AbiLGUuN (ORCPT ); Wed, 7 Dec 2022 15:50:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33256 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230081AbiLGUtP (ORCPT ); Wed, 7 Dec 2022 15:49:15 -0500 Received: from alexa-out-sd-02.qualcomm.com (alexa-out-sd-02.qualcomm.com [199.106.114.39]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4DA2A81D9F; Wed, 7 Dec 2022 12:49:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1670446143; x=1701982143; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=ZWh0XtmWAZIO7YQMk78LiilkuRluQTz7fx7cik9AWAc=; b=LTYOeyE0zzYTu7EK+Bvsuy4TEGPA8xvOXd9xxWWn4mz7/BWAVtAQcPwz kWrWVMri0pTH6xFq1hikx6H6V5J/pQLL/HFFzSoAqegRWo6WIsIsS0ocu 4IOSyBdDVCyUE7otkilFkUUTjXgpf+YZYyIaiOl4nectxg/1fpUcIfNSP w=; Received: from unknown (HELO ironmsg-SD-alpha.qualcomm.com) ([10.53.140.30]) by alexa-out-sd-02.qualcomm.com with ESMTP; 07 Dec 2022 12:49:03 -0800 X-QCInternal: smtphost Received: from unknown (HELO nasanex01a.na.qualcomm.com) ([10.52.223.231]) by ironmsg-SD-alpha.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Dec 2022 12:49:03 -0800 Received: from asutoshd-linux1.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.36; Wed, 7 Dec 2022 12:49:02 -0800 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. Bottomley" , Krzysztof Kozlowski , Jinyoung Choi , open list Subject: [PATCH v10 13/16] ufs: core: Prepare for completion in mcq Date: Wed, 7 Dec 2022 12:46:24 -0800 Message-ID: <73b4e33ce04f8ae3d909d4a32fbc5851c4987bd5.1670445699.git.quic_asutoshd@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Modify completion path APIs and add completion queue entry. 
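
For context only (a sketch, not part of the diff below): after this change the legacy doorbell path and the MCQ path, added later in this series, funnel into the same per-tag helper; a NULL cqe means the OCS is still read from the UTRD as before.

/*
 * Sketch for context, not part of this patch.  Both completion paths call
 * the same helper; the cqe argument decides where the OCS comes from
 * (NULL: the UTRD as before, non-NULL: the cq_entry status word).
 */
static void compl_sdb_sketch(struct ufs_hba *hba, unsigned long completed_reqs)
{
	int tag;

	for_each_set_bit(tag, &completed_reqs, hba->nutrs)
		ufshcd_compl_one_cqe(hba, tag, NULL);
}

static void compl_mcq_sketch(struct ufs_hba *hba, int tag, struct cq_entry *cqe)
{
	ufshcd_compl_one_cqe(hba, tag, cqe);
}
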
Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufshcd-priv.h | 2 ++ drivers/ufs/core/ufshcd.c | 80 ++++++++++++++++++++++++++---------------- 2 files changed, 51 insertions(+), 31 deletions(-) diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index a07f85e..a7af6aa 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -61,6 +61,8 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode, int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode, enum flag_idn idn, u8 index, bool *flag_res); void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); +void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag, + struct cq_entry *cqe); int ufshcd_mcq_init(struct ufs_hba *hba); int ufshcd_mcq_decide_queue_depth(struct ufs_hba *hba); int ufshcd_mcq_memory_alloc(struct ufs_hba *hba); diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index a7e613d..831da4f 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -784,12 +784,17 @@ static inline bool ufshcd_is_device_present(struct ufs_hba *hba) /** * ufshcd_get_tr_ocs - Get the UTRD Overall Command Status * @lrbp: pointer to local command reference block + * @cqe: pointer to the completion queue entry * * This function is used to get the OCS field from UTRD * Returns the OCS field in the UTRD */ -static enum utp_ocs ufshcd_get_tr_ocs(struct ufshcd_lrb *lrbp) +static enum utp_ocs ufshcd_get_tr_ocs(struct ufshcd_lrb *lrbp, + struct cq_entry *cqe) { + if (cqe) + return le32_to_cpu(cqe->status) & MASK_OCS; + return le32_to_cpu(lrbp->utr_descriptor_ptr->header.dword_2) & MASK_OCS; } @@ -3048,7 +3053,7 @@ static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba, * not trigger any race conditions. 
*/ hba->dev_cmd.complete = NULL; - err = ufshcd_get_tr_ocs(lrbp); + err = ufshcd_get_tr_ocs(lrbp, hba->dev_cmd.cqe); if (!err) err = ufshcd_dev_cmd_completion(hba, lrbp); } else { @@ -5216,18 +5221,20 @@ ufshcd_scsi_cmd_status(struct ufshcd_lrb *lrbp, int scsi_status) * ufshcd_transfer_rsp_status - Get overall status of the response * @hba: per adapter instance * @lrbp: pointer to local reference block of completed command + * @cqe: pointer to the completion queue entry * * Returns result of the command to notify SCSI midlayer */ static inline int -ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) +ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp, + struct cq_entry *cqe) { int result = 0; int scsi_status; enum utp_ocs ocs; /* overall command status of utrd */ - ocs = ufshcd_get_tr_ocs(lrbp); + ocs = ufshcd_get_tr_ocs(lrbp, cqe); if (hba->quirks & UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR) { if (be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_1) & @@ -5392,42 +5399,53 @@ static void ufshcd_release_scsi_cmd(struct ufs_hba *hba, } /** - * __ufshcd_transfer_req_compl - handle SCSI and query command completion + * ufshcd_compl_one_cqe - handle a completion queue entry * @hba: per adapter instance - * @completed_reqs: bitmask that indicates which requests to complete + * @task_tag: the task tag of the request to be completed + * @cqe: pointer to the completion queue entry */ -static void __ufshcd_transfer_req_compl(struct ufs_hba *hba, - unsigned long completed_reqs) +void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag, + struct cq_entry *cqe) { struct ufshcd_lrb *lrbp; struct scsi_cmnd *cmd; - int index; - - for_each_set_bit(index, &completed_reqs, hba->nutrs) { - lrbp = &hba->lrb[index]; - lrbp->compl_time_stamp = ktime_get(); - lrbp->compl_time_stamp_local_clock = local_clock(); - cmd = lrbp->cmd; - if (cmd) { - if (unlikely(ufshcd_should_inform_monitor(hba, lrbp))) - ufshcd_update_monitor(hba, lrbp); - ufshcd_add_command_trace(hba, index, UFS_CMD_COMP); - cmd->result = ufshcd_transfer_rsp_status(hba, lrbp); - ufshcd_release_scsi_cmd(hba, lrbp); - /* Do not touch lrbp after scsi done */ - scsi_done(cmd); - } else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE || - lrbp->command_type == UTP_CMD_TYPE_UFS_STORAGE) { - if (hba->dev_cmd.complete) { - ufshcd_add_command_trace(hba, index, - UFS_DEV_COMP); - complete(hba->dev_cmd.complete); - ufshcd_clk_scaling_update_busy(hba); - } + + lrbp = &hba->lrb[task_tag]; + lrbp->compl_time_stamp = ktime_get(); + cmd = lrbp->cmd; + if (cmd) { + if (unlikely(ufshcd_should_inform_monitor(hba, lrbp))) + ufshcd_update_monitor(hba, lrbp); + ufshcd_add_command_trace(hba, task_tag, UFS_CMD_COMP); + cmd->result = ufshcd_transfer_rsp_status(hba, lrbp, cqe); + ufshcd_release_scsi_cmd(hba, lrbp); + /* Do not touch lrbp after scsi done */ + scsi_done(cmd); + } else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE || + lrbp->command_type == UTP_CMD_TYPE_UFS_STORAGE) { + if (hba->dev_cmd.complete) { + hba->dev_cmd.cqe = cqe; + ufshcd_add_command_trace(hba, task_tag, UFS_DEV_COMP); + complete(hba->dev_cmd.complete); + ufshcd_clk_scaling_update_busy(hba); } } } +/** + * __ufshcd_transfer_req_compl - handle SCSI and query command completion + * @hba: per adapter instance + * @completed_reqs: bitmask that indicates which requests to complete + */ +static void __ufshcd_transfer_req_compl(struct ufs_hba *hba, + unsigned long completed_reqs) +{ + int tag; + + for_each_set_bit(tag, &completed_reqs, hba->nutrs) + 
ufshcd_compl_one_cqe(hba, tag, NULL); +} + /* Any value that is not an existing queue number is fine for this constant. */ enum { UFSHCD_POLL_FROM_INTERRUPT_CONTEXT = -1 From patchwork Wed Dec 7 20:46:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 631825 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B9DDFC4708D for ; Wed, 7 Dec 2022 20:50:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230119AbiLGUua (ORCPT ); Wed, 7 Dec 2022 15:50:30 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60138 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229939AbiLGUtd (ORCPT ); Wed, 7 Dec 2022 15:49:33 -0500 Received: from alexa-out-sd-02.qualcomm.com (alexa-out-sd-02.qualcomm.com [199.106.114.39]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 58D0481D83; Wed, 7 Dec 2022 12:49:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1670446159; x=1701982159; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=YfLoq5vozXSD5hJSR9wdVTvKdruVf/IGIPjt3UGljSc=; b=S17CiXrFxZpB0KvyWI/NBHRNEdDMRNQV2800+hmDLtFQhrGrHXyOE3Oy QjTN1H5g8uk7yXm2eHOV4gIeGZY5GUoMJjsKFojh8XcmMrf5IU2gznoIy uL2Mt5rpQpMNg4RCwzXRlT78uH5+ULsJqMIL8/QBJSuD9e1m0WArEFgZt 8=; Received: from unknown (HELO ironmsg-SD-alpha.qualcomm.com) ([10.53.140.30]) by alexa-out-sd-02.qualcomm.com with ESMTP; 07 Dec 2022 12:49:19 -0800 X-QCInternal: smtphost Received: from unknown (HELO nasanex01a.na.qualcomm.com) ([10.52.223.231]) by ironmsg-SD-alpha.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Dec 2022 12:49:19 -0800 Received: from asutoshd-linux1.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.36; Wed, 7 Dec 2022 12:49:18 -0800 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. Bottomley" , "Andy Gross" , Bjorn Andersson , "Konrad Dybcio" , Jinyoung Choi , Krzysztof Kozlowski , Arthur Simchaev , Keoseong Park , Yoshihiro Shimoda , Kiwoong Kim , "open list" Subject: [PATCH v10 14/16] ufs: mcq: Add completion support of a cqe Date: Wed, 7 Dec 2022 12:46:25 -0800 Message-ID: <3720ae1ff79d4b93ab688ab7bab2d4935c92e85f.1670445699.git.quic_asutoshd@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Add support for completing requests from Completion Queue. Some host controllers support vendor specific registers that provide a bitmap of all CQ's which have at least one completed CQE. Add this support. The MCQ specification doesn't provide the Task Tag or its equivalent in the Completion Queue Entry. So use an indirect method to find the Task Tag from the Completion Queue Entry. 
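
For illustration (this is not part of the diff that follows), the indirect lookup amounts to pointer arithmetic on the UTP Command Descriptor address that the controller copies into the CQE. UCD_SIZE_EXAMPLE below is an assumed size used only for this sketch; the driver itself divides by sizeof(struct utp_transfer_cmd_desc).

/*
 * Illustrative sketch only; the real helper is ufshcd_mcq_get_tag() in the
 * diff below.  The task tag is recovered as the index of the UCD whose DMA
 * address the controller placed in the completion queue entry.
 */
#define UCD_SIZE_EXAMPLE	1024U	/* assumption for this sketch; must be a multiple of 128 */

static int cqe_to_tag_example(u64 cqe_ucd_addr, u64 ucdl_base_dma)
{
	/* bits 6:5 are reserved and 4:0 carry the SQ ID, so mask off bits 6:0 */
	u64 offset = (cqe_ucd_addr & GENMASK_ULL(63, 7)) - ucdl_base_dma;

	/* UCDs are laid out back to back, so offset / size is the task tag */
	return div_u64(offset, UCD_SIZE_EXAMPLE);
}

With the assumed 1 KiB UCD size, for example, a CQE address of ucdl_base_dma + 0x2C00 maps to task tag 11.
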
Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufs-mcq.c | 61 ++++++++++++++++++++++++++++++++++++++++++ drivers/ufs/core/ufshcd-priv.h | 43 +++++++++++++++++++++++++++++ drivers/ufs/core/ufshcd.c | 37 +++++++++++++++++++++++++ drivers/ufs/host/ufs-qcom.c | 14 ++++++++++ drivers/ufs/host/ufs-qcom.h | 4 +++ include/ufs/ufshcd.h | 7 +++++ include/ufs/ufshci.h | 3 +++ 7 files changed, 169 insertions(+) diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c index be84bce..cd10d59 100644 --- a/drivers/ufs/core/ufs-mcq.c +++ b/drivers/ufs/core/ufs-mcq.c @@ -25,6 +25,7 @@ #define MCQ_CFG_MAC_MASK GENMASK(16, 8) #define MCQ_QCFG_SIZE 0x40 #define MCQ_ENTRY_SIZE_IN_DWORD 8 +#define CQE_UCD_BA GENMASK_ULL(63, 7) static int rw_queue_count_set(const char *val, const struct kernel_param *kp) { @@ -236,6 +237,63 @@ static void __iomem *mcq_opr_base(struct ufs_hba *hba, return opr->base + opr->stride * i; } +u32 ufshcd_mcq_read_cqis(struct ufs_hba *hba, int i) +{ + return readl(mcq_opr_base(hba, OPR_CQIS, i) + REG_CQIS); +} + +void ufshcd_mcq_write_cqis(struct ufs_hba *hba, u32 val, int i) +{ + writel(val, mcq_opr_base(hba, OPR_CQIS, i) + REG_CQIS); +} + +/* + * Current MCQ specification doesn't provide a Task Tag or its equivalent in + * the Completion Queue Entry. Find the Task Tag using an indirect method. + */ +static int ufshcd_mcq_get_tag(struct ufs_hba *hba, + struct ufs_hw_queue *hwq, + struct cq_entry *cqe) +{ + u64 addr; + + /* sizeof(struct utp_transfer_cmd_desc) must be a multiple of 128 */ + BUILD_BUG_ON(sizeof(struct utp_transfer_cmd_desc) & GENMASK(6, 0)); + + /* Bits 63:7 UCD base address, 6:5 are reserved, 4:0 is SQ ID */ + addr = (le64_to_cpu(cqe->command_desc_base_addr) & CQE_UCD_BA) - + hba->ucdl_dma_addr; + + return div_u64(addr, sizeof(struct utp_transfer_cmd_desc)); +} + +static void ufshcd_mcq_process_cqe(struct ufs_hba *hba, + struct ufs_hw_queue *hwq) +{ + struct cq_entry *cqe = ufshcd_mcq_cur_cqe(hwq); + int tag = ufshcd_mcq_get_tag(hba, hwq, cqe); + + ufshcd_compl_one_cqe(hba, tag, cqe); +} + +unsigned long ufshcd_mcq_poll_cqe_nolock(struct ufs_hba *hba, + struct ufs_hw_queue *hwq) +{ + unsigned long completed_reqs = 0; + + ufshcd_mcq_update_cq_tail_slot(hwq); + while (!ufshcd_mcq_is_cq_empty(hwq)) { + ufshcd_mcq_process_cqe(hba, hwq); + ufshcd_mcq_inc_cq_head_slot(hwq); + completed_reqs++; + } + + if (completed_reqs) + ufshcd_mcq_update_cq_head(hwq); + + return completed_reqs; +} + void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba) { struct ufs_hw_queue *hwq; @@ -279,6 +337,9 @@ void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba) hwq->mcq_cq_head = mcq_opr_base(hba, OPR_CQD, i) + REG_CQHP; hwq->mcq_cq_tail = mcq_opr_base(hba, OPR_CQD, i) + REG_CQTP; + /* Reinitializing is needed upon HC reset */ + hwq->sq_tail_slot = hwq->cq_tail_slot = hwq->cq_head_slot = 0; + /* Enable Tail Entry Push Status interrupt only for non-poll queues */ if (i < hba->nr_hw_queues - hba->nr_queues[HCTX_TYPE_POLL]) writel(1, mcq_opr_base(hba, OPR_CQIS, i) + REG_CQIE); diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index a7af6aa..70e3416 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -69,6 +69,10 @@ int ufshcd_mcq_memory_alloc(struct ufs_hba *hba); void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba); void ufshcd_mcq_config_mac(struct ufs_hba *hba, u32 max_active_cmds); void 
ufshcd_mcq_select_mcq_mode(struct ufs_hba *hba); +u32 ufshcd_mcq_read_cqis(struct ufs_hba *hba, int i); +void ufshcd_mcq_write_cqis(struct ufs_hba *hba, u32 val, int i); +unsigned long ufshcd_mcq_poll_cqe_nolock(struct ufs_hba *hba, + struct ufs_hw_queue *hwq); struct ufs_hw_queue *ufshcd_mcq_req_to_hwq(struct ufs_hba *hba, struct request *req); @@ -261,6 +265,15 @@ static inline int ufshcd_mcq_vops_op_runtime_config(struct ufs_hba *hba) return -EOPNOTSUPP; } +static inline int ufshcd_vops_get_outstanding_cqs(struct ufs_hba *hba, + unsigned long *ocqs) +{ + if (hba->vops && hba->vops->get_outstanding_cqs) + return hba->vops->get_outstanding_cqs(hba, ocqs); + + return -EOPNOTSUPP; +} + extern const struct ufs_pm_lvl_states ufs_pm_lvl_states[]; /** @@ -347,4 +360,34 @@ static inline void ufshcd_inc_sq_tail(struct ufs_hw_queue *q) writel(val, q->mcq_sq_tail); } +static inline void ufshcd_mcq_update_cq_tail_slot(struct ufs_hw_queue *q) +{ + u32 val = readl(q->mcq_cq_tail); + + q->cq_tail_slot = val / sizeof(struct cq_entry); +} + +static inline bool ufshcd_mcq_is_cq_empty(struct ufs_hw_queue *q) +{ + return q->cq_head_slot == q->cq_tail_slot; +} + +static inline void ufshcd_mcq_inc_cq_head_slot(struct ufs_hw_queue *q) +{ + q->cq_head_slot++; + if (q->cq_head_slot == q->max_entries) + q->cq_head_slot = 0; +} + +static inline void ufshcd_mcq_update_cq_head(struct ufs_hw_queue *q) +{ + writel(q->cq_head_slot * sizeof(struct cq_entry), q->mcq_cq_head); +} + +static inline struct cq_entry *ufshcd_mcq_cur_cqe(struct ufs_hw_queue *q) +{ + struct cq_entry *cqe = q->cqe_base_addr; + + return cqe + q->cq_head_slot; +} #endif /* _UFSHCD_PRIV_H_ */ diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 831da4f..884dabb 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -6698,6 +6698,40 @@ static irqreturn_t ufshcd_tmc_handler(struct ufs_hba *hba) } /** + * ufshcd_handle_mcq_cq_events - handle MCQ completion queue events + * @hba: per adapter instance + * + * Returns IRQ_HANDLED if interrupt is handled + */ +static irqreturn_t ufshcd_handle_mcq_cq_events(struct ufs_hba *hba) +{ + struct ufs_hw_queue *hwq; + unsigned long outstanding_cqs; + unsigned int nr_queues; + int i, ret; + u32 events; + + ret = ufshcd_vops_get_outstanding_cqs(hba, &outstanding_cqs); + if (ret) + outstanding_cqs = (1U << hba->nr_hw_queues) - 1; + + /* Exclude the poll queues */ + nr_queues = hba->nr_hw_queues - hba->nr_queues[HCTX_TYPE_POLL]; + for_each_set_bit(i, &outstanding_cqs, nr_queues) { + hwq = &hba->uhq[i]; + + events = ufshcd_mcq_read_cqis(hba, i); + if (events) + ufshcd_mcq_write_cqis(hba, events, i); + + if (events & UFSHCD_MCQ_CQIS_TAIL_ENT_PUSH_STS) + ufshcd_mcq_poll_cqe_nolock(hba, hwq); + } + + return IRQ_HANDLED; +} + +/** * ufshcd_sl_intr - Interrupt service routine * @hba: per adapter instance * @intr_status: contains interrupts generated by the controller @@ -6722,6 +6756,9 @@ static irqreturn_t ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status) if (intr_status & UTP_TRANSFER_REQ_COMPL) retval |= ufshcd_transfer_req_compl(hba); + if (intr_status & MCQ_CQ_EVENT_STATUS) + retval |= ufshcd_handle_mcq_cq_events(hba); + return retval; } diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c index 61bb1c8..379b213 100644 --- a/drivers/ufs/host/ufs-qcom.c +++ b/drivers/ufs/host/ufs-qcom.c @@ -1553,6 +1553,19 @@ static int ufs_qcom_get_hba_mac(struct ufs_hba *hba) return MAX_SUPP_MAC; } +static int ufs_qcom_get_outstanding_cqs(struct ufs_hba *hba, + unsigned long 
*ocqs) +{ + struct ufshcd_res_info *mcq_vs_res = &hba->res[RES_MCQ_VS]; + + if (!mcq_vs_res->base) + return -EINVAL; + + *ocqs = readl(mcq_vs_res->base + UFS_MEM_CQIS_VS); + + return 0; +} + /* * struct ufs_hba_qcom_vops - UFS QCOM specific variant operations * @@ -1579,6 +1592,7 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = { .mcq_config_resource = ufs_qcom_mcq_config_resource, .get_hba_mac = ufs_qcom_get_hba_mac, .op_runtime_config = ufs_qcom_op_runtime_config, + .get_outstanding_cqs = ufs_qcom_get_outstanding_cqs, }; /** diff --git a/drivers/ufs/host/ufs-qcom.h b/drivers/ufs/host/ufs-qcom.h index f86e532..6912bdf 100644 --- a/drivers/ufs/host/ufs-qcom.h +++ b/drivers/ufs/host/ufs-qcom.h @@ -73,6 +73,10 @@ enum { UFS_UFS_DBG_RD_EDTL_RAM = 0x1900, }; +enum { + UFS_MEM_CQIS_VS = 0x8, +}; + #define UFS_CNTLR_2_x_x_VEN_REGS_OFFSET(x) (0x000 + x) #define UFS_CNTLR_3_x_x_VEN_REGS_OFFSET(x) (0x400 + x) diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index ae20697..8441c46 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -301,6 +301,7 @@ struct ufs_pwr_mode_info { * @mcq_config_resource: called to configure MCQ platform resources * @get_hba_mac: called to get vendor specific mac value, mandatory for mcq mode * @op_runtime_config: called to config Operation and runtime regs Pointers + * @get_outstanding_cqs: called to get outstanding completion queues */ struct ufs_hba_variant_ops { const char *name; @@ -342,6 +343,8 @@ struct ufs_hba_variant_ops { int (*mcq_config_resource)(struct ufs_hba *hba); int (*get_hba_mac)(struct ufs_hba *hba); int (*op_runtime_config)(struct ufs_hba *hba); + int (*get_outstanding_cqs)(struct ufs_hba *hba, + unsigned long *ocqs); }; /* clock gating state */ @@ -1067,6 +1070,8 @@ struct ufs_hba { * @id: hardware queue ID * @sq_tp_slot: current slot to which SQ tail pointer is pointing * @sq_lock: serialize submission queue access + * @cq_tail_slot: current slot to which CQ tail pointer is pointing + * @cq_head_slot: current slot to which CQ head pointer is pointing */ struct ufs_hw_queue { void __iomem *mcq_sq_head; @@ -1082,6 +1087,8 @@ struct ufs_hw_queue { u32 id; u32 sq_tail_slot; spinlock_t sq_lock; + u32 cq_tail_slot; + u32 cq_head_slot; }; static inline bool is_mcq_enabled(struct ufs_hba *hba) diff --git a/include/ufs/ufshci.h b/include/ufs/ufshci.h index 8784b88..1df8425 100644 --- a/include/ufs/ufshci.h +++ b/include/ufs/ufshci.h @@ -262,6 +262,9 @@ enum { /* UTMRLRSR - UTP Task Management Request Run-Stop Register 80h */ #define UTP_TASK_REQ_LIST_RUN_STOP_BIT 0x1 +/* CQISy - CQ y Interrupt Status Register */ +#define UFSHCD_MCQ_CQIS_TAIL_ENT_PUSH_STS 0x1 + /* UICCMD - UIC Command */ #define COMMAND_OPCODE_MASK 0xFF #define GEN_SELECTOR_INDEX_MASK 0xFFFF From patchwork Wed Dec 7 20:46:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 633210 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9A2BC4708D for ; Wed, 7 Dec 2022 20:50:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230096AbiLGUut (ORCPT ); Wed, 7 Dec 2022 15:50:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33394 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230107AbiLGUuN (ORCPT 
); Wed, 7 Dec 2022 15:50:13 -0500 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. Bottomley" , Krzysztof Kozlowski , Jinyoung Choi , Arthur Simchaev , Yoshihiro Shimoda , Keoseong Park , open list Subject: [PATCH v10 15/16] ufs: core: mcq: Add completion support in poll Date: Wed, 7 Dec 2022 12:46:26 -0800 Message-ID: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Complete CQE requests in poll. The assumption is that several poll completions may happen on different CPUs for the same completion queue, so spin lock protection is added.
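As a rough illustration of the race this guards against, here is a minimal user-space sketch; it is not the driver code, and every model_* name is invented for illustration. The lockless drain advances cq_head_slot toward cq_tail_slot, so two CPUs polling the same hardware queue could complete the same slot twice (or run the head past the tail) unless the drain is serialized per queue.

/*
 * Minimal user-space model of per-queue poll locking (illustration only).
 * All names below are made up; the real driver uses a spinlock_t.
 */
#include <pthread.h>
#include <stdio.h>

struct model_hw_queue {
	unsigned int cq_head_slot;	/* next slot the consumer will drain */
	unsigned int cq_tail_slot;	/* next slot the producer will fill */
	unsigned int max_entries;
	pthread_mutex_t cq_lock;	/* stands in for the driver's spinlock */
};

/* Lockless drain: safe only when a single context can run it at a time. */
static unsigned long model_poll_nolock(struct model_hw_queue *q)
{
	unsigned long completed = 0;

	while (q->cq_head_slot != q->cq_tail_slot) {
		/* a real driver would complete the request in this slot */
		q->cq_head_slot = (q->cq_head_slot + 1) % q->max_entries;
		completed++;
	}
	return completed;
}

/* Locked wrapper: what a poll path would call when several CPUs may race. */
static unsigned long model_poll_locked(struct model_hw_queue *q)
{
	unsigned long completed;

	pthread_mutex_lock(&q->cq_lock);
	completed = model_poll_nolock(q);
	pthread_mutex_unlock(&q->cq_lock);
	return completed;
}

int main(void)
{
	struct model_hw_queue q = { .cq_head_slot = 0, .cq_tail_slot = 5,
				    .max_entries = 32 };

	pthread_mutex_init(&q.cq_lock, NULL);
	printf("completed %lu requests\n", model_poll_locked(&q));
	pthread_mutex_destroy(&q.cq_lock);
	return 0;
}

In the patch itself the serialization is a per-queue spinlock taken in ufshcd_mcq_poll_cqe_lock(), as shown in the diff below.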
Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufs-mcq.c | 13 +++++++++++++ drivers/ufs/core/ufshcd-priv.h | 2 ++ drivers/ufs/core/ufshcd.c | 7 +++++++ include/ufs/ufshcd.h | 2 ++ 4 files changed, 24 insertions(+) diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c index cd10d59..e710d19 100644 --- a/drivers/ufs/core/ufs-mcq.c +++ b/drivers/ufs/core/ufs-mcq.c @@ -294,6 +294,18 @@ unsigned long ufshcd_mcq_poll_cqe_nolock(struct ufs_hba *hba, return completed_reqs; } +unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba, + struct ufs_hw_queue *hwq) +{ + unsigned long completed_reqs; + + spin_lock(&hwq->cq_lock); + completed_reqs = ufshcd_mcq_poll_cqe_nolock(hba, hwq); + spin_unlock(&hwq->cq_lock); + + return completed_reqs; +} + void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba) { struct ufs_hw_queue *hwq; @@ -390,6 +402,7 @@ int ufshcd_mcq_init(struct ufs_hba *hba) hwq = &hba->uhq[i]; hwq->max_entries = hba->nutrs; spin_lock_init(&hwq->sq_lock); + spin_lock_init(&hwq->cq_lock); } /* The very first HW queue serves device commands */ diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h index 70e3416..ff03aa5 100644 --- a/drivers/ufs/core/ufshcd-priv.h +++ b/drivers/ufs/core/ufshcd-priv.h @@ -75,6 +75,8 @@ unsigned long ufshcd_mcq_poll_cqe_nolock(struct ufs_hba *hba, struct ufs_hw_queue *hwq); struct ufs_hw_queue *ufshcd_mcq_req_to_hwq(struct ufs_hba *hba, struct request *req); +unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba, + struct ufs_hw_queue *hwq); #define UFSHCD_MCQ_IO_QUEUE_OFFSET 1 #define SD_ASCII_STD true diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index 884dabb..e42d642 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -5475,6 +5475,13 @@ static int ufshcd_poll(struct Scsi_Host *shost, unsigned int queue_num) struct ufs_hba *hba = shost_priv(shost); unsigned long completed_reqs, flags; u32 tr_doorbell; + struct ufs_hw_queue *hwq; + + if (is_mcq_enabled(hba)) { + hwq = &hba->uhq[queue_num + UFSHCD_MCQ_IO_QUEUE_OFFSET]; + + return ufshcd_mcq_poll_cqe_lock(hba, hwq); + } spin_lock_irqsave(&hba->outstanding_lock, flags); tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL); diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h index 8441c46..f20557b 100644 --- a/include/ufs/ufshcd.h +++ b/include/ufs/ufshcd.h @@ -1072,6 +1072,7 @@ struct ufs_hba { * @sq_lock: serialize submission queue access * @cq_tail_slot: current slot to which CQ tail pointer is pointing * @cq_head_slot: current slot to which CQ head pointer is pointing + * @cq_lock: Synchronize between multiple polling instances */ struct ufs_hw_queue { void __iomem *mcq_sq_head; @@ -1089,6 +1090,7 @@ struct ufs_hw_queue { spinlock_t sq_lock; u32 cq_tail_slot; u32 cq_head_slot; + spinlock_t cq_lock; }; static inline bool is_mcq_enabled(struct ufs_hba *hba) From patchwork Wed Dec 7 20:46:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Asutosh Das X-Patchwork-Id: 631824 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1FD17C63708 for ; Wed, 7 Dec 2022 20:51:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand 
id S230156AbiLGUvf (ORCPT ); Wed, 7 Dec 2022 15:51:35 -0500 From: Asutosh Das To: , , CC: , , , , , , , , , Asutosh Das , , Alim Akhtar , "James E.J. Bottomley" , Jinyoung Choi , open list Subject: [PATCH v10 16/16] ufs: core: mcq: Enable Multi Circular Queue Date: Wed, 7 Dec 2022 12:46:27 -0800 Message-ID: <70f8760735786ff9397c5e38144c72aa5482de7f.1670445699.git.quic_asutoshd@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Enable MCQ in the Host Controller.
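As a hedged illustration of what the enable step amounts to, the sketch below models it as a read-modify-write of a 32-bit configuration register that sets bit 0. The MODEL_*/model_* names are made up; only the "read REG_UFS_MEM_CFG, OR in 0x1, write it back" behavior is taken from the diff that follows.

/*
 * Illustrative user-space model of the MCQ enable step (not driver code).
 */
#include <stdint.h>
#include <stdio.h>

#define MODEL_MCQ_MODE_SELECT	(1u << 0)	/* bit 0 selects MCQ mode */

static uint32_t model_mem_cfg;			/* stands in for the mmio register */

static uint32_t model_readl(void)
{
	return model_mem_cfg;
}

static void model_writel(uint32_t val)
{
	model_mem_cfg = val;
}

/* Read-modify-write: preserve the other config bits, set only the MCQ bit. */
static void model_enable_mcq(void)
{
	model_writel(model_readl() | MODEL_MCQ_MODE_SELECT);
}

int main(void)
{
	model_enable_mcq();
	printf("MEM_CFG after enable: 0x%x\n", model_readl());
	return 0;
}

The read-modify-write keeps whatever other bits are already programmed in the register intact while switching the controller into MCQ mode, which is why the diff ORs 0x1 into the current value rather than writing 0x1 outright.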
Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/ufshcd.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c index e42d642..7f0bc10 100644 --- a/drivers/ufs/core/ufshcd.c +++ b/drivers/ufs/core/ufshcd.c @@ -8381,6 +8381,12 @@ static void ufshcd_config_mcq(struct ufs_hba *hba) hba->host->can_queue = hba->nutrs - UFSHCD_NUM_RESERVED; hba->reserved_slot = hba->nutrs - UFSHCD_NUM_RESERVED; + + /* Select MCQ mode */ + ufshcd_writel(hba, ufshcd_readl(hba, REG_UFS_MEM_CFG) | 0x1, + REG_UFS_MEM_CFG); + hba->mcq_enabled = true; + dev_info(hba->dev, "MCQ configured, nr_queues=%d, io_queues=%d, read_queue=%d, poll_queues=%d, queue_depth=%d\n", hba->nr_hw_queues, hba->nr_queues[HCTX_TYPE_DEFAULT], hba->nr_queues[HCTX_TYPE_READ], hba->nr_queues[HCTX_TYPE_POLL],