From patchwork Thu Jan 11 13:17:30 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 761876
Subject: [RFC PATCH v5 01/12] cxl/mbox: Add GET_SUPPORTED_FEATURES mailbox command
Date: Thu, 11 Jan 2024 21:17:30 +0800
Message-ID: <20240111131741.1356-2-shiju.jose@huawei.com>
In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com>
References: <20240111131741.1356-1-shiju.jose@huawei.com>
X-Mailing-List: linux-acpi@vger.kernel.org

From: Shiju Jose

Add support for the GET_SUPPORTED_FEATURES mailbox command.

CXL spec 3.0 section 8.2.9.6 describes optional device-specific features.
CXL devices support features with changeable attributes. Get Supported
Features retrieves the list of supported device-specific features. The
settings of a feature can be retrieved using Get Feature and optionally
modified using Set Feature.
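For orientation, a minimal caller sketch follows; it is not part of the
patch, the helper name example_dump_features and the four-entry buffer
size are illustrative, and the endian conversions assume the payload
fields are little-endian as the structures below declare:

  /*
   * Hypothetical caller sketch: fetch the first batch of supported
   * feature entries from a CXL memdev and print their UUIDs.
   */
  static int example_dump_features(struct cxl_memdev_state *mds)
  {
  	struct cxl_mbox_get_supp_feats_in pi;
  	struct cxl_mbox_get_supp_feats_out *out;
  	size_t sz = sizeof(*out) + 4 * sizeof(struct cxl_mbox_supp_feat_entry);
  	int i, ret;

  	out = kvzalloc(sz, GFP_KERNEL);
  	if (!out)
  		return -ENOMEM;

  	pi.count = cpu_to_le32(sz);	/* output buffer size in bytes */
  	pi.start_index = cpu_to_le16(0);	/* first feature entry */

  	ret = cxl_get_supported_features(mds, &pi, out);
  	if (ret)
  		goto free;

  	for (i = 0; i < le16_to_cpu(out->entries); i++)
  		pr_info("feature %pUb\n", &out->feat_entries[i].uuid);
  free:
  	kvfree(out);
  	return ret;
  }

Per the input payload layout, count bounds the size of the returned
payload, so it also bounds how many feature entries one call can return.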
Signed-off-by: Shiju Jose --- drivers/cxl/core/mbox.c | 23 ++++++++++++++++ drivers/cxl/cxlmem.h | 59 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 82 insertions(+) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 36270dcfb42e..6960ce935e73 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -1303,6 +1303,29 @@ int cxl_set_timestamp(struct cxl_memdev_state *mds) } EXPORT_SYMBOL_NS_GPL(cxl_set_timestamp, CXL); +int cxl_get_supported_features(struct cxl_memdev_state *mds, + struct cxl_mbox_get_supp_feats_in *pi, + void *feats_out) +{ + struct cxl_mbox_cmd mbox_cmd; + int rc; + + mbox_cmd = (struct cxl_mbox_cmd) { + .opcode = CXL_MBOX_OP_GET_SUPPORTED_FEATURES, + .size_in = sizeof(*pi), + .payload_in = pi, + .size_out = le32_to_cpu(pi->count), + .payload_out = feats_out, + .min_out = sizeof(struct cxl_mbox_get_supp_feats_out), + }; + rc = cxl_internal_send_cmd(mds, &mbox_cmd); + if (rc < 0) + return rc; + + return 0; +} +EXPORT_SYMBOL_NS_GPL(cxl_get_supported_features, CXL); + int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr) { diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index a2fcbca253f3..d831dad748f5 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -506,6 +506,7 @@ enum cxl_opcode { CXL_MBOX_OP_SET_TIMESTAMP = 0x0301, CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, CXL_MBOX_OP_GET_LOG = 0x0401, + CXL_MBOX_OP_GET_SUPPORTED_FEATURES = 0x0500, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, @@ -740,6 +741,61 @@ struct cxl_mbox_set_timestamp_in { } __packed; +/* Get Supported Features CXL 3.0 Spec 8.2.9.6.1 */ +/* + * Get Supported Features input payload + * CXL rev 3.0 section 8.2.9.6.1; Table 8-75 + */ +struct cxl_mbox_get_supp_feats_in { + __le32 count; + __le16 start_index; + u16 reserved; +} __packed; + +/* + * Get Supported Features Supported Feature Entry + * CXL rev 3.0 section 8.2.9.6.1; Table 8-77 + */ +/* Supported Feature Entry : Payload out attribute flags */ +#define CXL_FEAT_ENTRY_FLAG_CHANGABLE BIT(0) +#define CXL_FEAT_ENTRY_FLAG_DEEPEST_RESET_PERSISTENCE_MASK GENMASK(3, 1) + +enum cxl_feat_attrib_value_persistence { + CXL_FEAT_ATTRIB_VALUE_PERSISTENCE_NONE = 0x0, + CXL_FEAT_ATTRIB_VALUE_PERSISTENCE_CXL_RESET = 0x1, + CXL_FEAT_ATTRIB_VALUE_PERSISTENCE_HOT_RESET = 0x2, + CXL_FEAT_ATTRIB_VALUE_PERSISTENCE_WARM_RESET = 0x3, + CXL_FEAT_ATTRIB_VALUE_PERSISTENCE_COLD_RESET = 0x4, + CXL_FEAT_ATTRIB_VALUE_PERSISTENCE_MAX +}; + +#define CXL_FEAT_ENTRY_FLAG_PERSISTENCE_ACROSS_FW_UPDATE_MASK BIT(4) +#define CXL_FEAT_ENTRY_FLAG_PERSISTENCE_DEFAULT_SEL_SUPPORT_MASK BIT(5) +#define CXL_FEAT_ENTRY_FLAG_PERSISTENCE_SAVED_SEL_SUPPORT_MASK BIT(6) + +struct cxl_mbox_supp_feat_entry { + uuid_t uuid; + __le16 feat_index; + __le16 get_feat_size; + __le16 set_feat_size; + __le32 attrb_flags; + u8 get_feat_version; + u8 set_feat_version; + __le16 set_feat_effects; + u8 rsvd[18]; +} __packed; + +/* + * Get Supported Features output payload + * CXL rev 3.0 section 8.2.9.6.1; Table 8-76 + */ +struct cxl_mbox_get_supp_feats_out { + __le16 entries; + __le16 nsuppfeats_dev; + u32 reserved; + struct cxl_mbox_supp_feat_entry feat_entries[]; +} __packed; + /* Get Poison List CXL 3.0 Spec 8.2.9.8.4.1 */ struct cxl_mbox_poison_in { __le64 offset; @@ -867,6 +923,9 @@ void clear_exclusive_cxl_commands(struct cxl_memdev_state *mds, unsigned long *cmds); void cxl_mem_get_event_records(struct cxl_memdev_state *mds, u32 status); int 
cxl_set_timestamp(struct cxl_memdev_state *mds);
+int cxl_get_supported_features(struct cxl_memdev_state *mds,
+			       struct cxl_mbox_get_supp_feats_in *pi,
+			       void *feats_out);
 int cxl_poison_state_init(struct cxl_memdev_state *mds);
 int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
 		       struct cxl_region *cxlr);

From patchwork Thu Jan 11 13:17:31 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 761875
Subject: [RFC PATCH v5 02/12] cxl/mbox: Add GET_FEATURE mailbox command
Date: Thu, 11 Jan 2024 21:17:31 +0800
Message-ID: <20240111131741.1356-3-shiju.jose@huawei.com>
In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com>
References: <20240111131741.1356-1-shiju.jose@huawei.com>
X-Mailing-List: linux-acpi@vger.kernel.org

From: Shiju Jose

Add support for the GET_FEATURE mailbox command.

CXL spec 3.0 section 8.2.9.6 describes optional device-specific features.
The settings of a feature can be retrieved using the Get Feature command.
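As an illustration of the input payload, a minimal caller sketch (the
helper name example_read_feature is hypothetical, and the feature UUID is
assumed to have been discovered via Get Supported Features):

  /*
   * Hypothetical caller sketch: read the current value of the feature
   * identified by "uuid" into a caller-supplied buffer of "len" bytes.
   */
  static int example_read_feature(struct cxl_memdev_state *mds,
  				const uuid_t *uuid, void *buf, u16 len)
  {
  	struct cxl_mbox_get_feat_in pi = {
  		.offset = cpu_to_le16(0),		/* start of feature data */
  		.count = cpu_to_le16(len),		/* bytes to return */
  		.selection = CXL_GET_FEAT_SEL_CURRENT_VALUE,
  	};

  	uuid_copy(&pi.uuid, uuid);
  	return cxl_get_feature(mds, &pi, buf);
  }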
Signed-off-by: Shiju Jose --- drivers/cxl/core/mbox.c | 22 ++++++++++++++++++++++ drivers/cxl/cxlmem.h | 23 +++++++++++++++++++++++ 2 files changed, 45 insertions(+) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 6960ce935e73..a50c874edebc 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -1326,6 +1326,28 @@ int cxl_get_supported_features(struct cxl_memdev_state *mds, } EXPORT_SYMBOL_NS_GPL(cxl_get_supported_features, CXL); +int cxl_get_feature(struct cxl_memdev_state *mds, + struct cxl_mbox_get_feat_in *pi, void *feat_out) +{ + struct cxl_mbox_cmd mbox_cmd; + int rc; + + mbox_cmd = (struct cxl_mbox_cmd) { + .opcode = CXL_MBOX_OP_GET_FEATURE, + .size_in = sizeof(*pi), + .payload_in = pi, + .size_out = le16_to_cpu(pi->count), + .payload_out = feat_out, + .min_out = le16_to_cpu(pi->count), + }; + rc = cxl_internal_send_cmd(mds, &mbox_cmd); + if (rc < 0) + return rc; + + return 0; +} +EXPORT_SYMBOL_NS_GPL(cxl_get_feature, CXL); + int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr) { diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index d831dad748f5..92c1f2a44713 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -507,6 +507,7 @@ enum cxl_opcode { CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, CXL_MBOX_OP_GET_LOG = 0x0401, CXL_MBOX_OP_GET_SUPPORTED_FEATURES = 0x0500, + CXL_MBOX_OP_GET_FEATURE = 0x0501, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, @@ -796,6 +797,26 @@ struct cxl_mbox_get_supp_feats_out { struct cxl_mbox_supp_feat_entry feat_entries[]; } __packed; +/* Get Feature CXL 3.0 Spec 8.2.9.6.2 */ +/* + * Get Feature input payload + * CXL rev 3.0 section 8.2.9.6.2; Table 8-79 + */ +/* Get Feature : Payload in selection */ +enum cxl_get_feat_selection { + CXL_GET_FEAT_SEL_CURRENT_VALUE = 0x0, + CXL_GET_FEAT_SEL_DEFAULT_VALUE = 0x1, + CXL_GET_FEAT_SEL_SAVED_VALUE = 0x2, + CXL_GET_FEAT_SEL_MAX +}; + +struct cxl_mbox_get_feat_in { + uuid_t uuid; + __le16 offset; + __le16 count; + u8 selection; +} __packed; + /* Get Poison List CXL 3.0 Spec 8.2.9.8.4.1 */ struct cxl_mbox_poison_in { __le64 offset; @@ -926,6 +947,8 @@ int cxl_set_timestamp(struct cxl_memdev_state *mds); int cxl_get_supported_features(struct cxl_memdev_state *mds, struct cxl_mbox_get_supp_feats_in *pi, void *feats_out); +int cxl_get_feature(struct cxl_memdev_state *mds, + struct cxl_mbox_get_feat_in *pi, void *feat_out); int cxl_poison_state_init(struct cxl_memdev_state *mds); int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr); From patchwork Thu Jan 11 13:17:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 762730 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6BCB433062; Thu, 11 Jan 2024 13:18:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.231]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4T9lXH00rWz67LmL; Thu, 11 Jan 2024 21:15:27 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) 
by mail.maildlp.com (Postfix) with ESMTPS id 50DC41400D7; Thu, 11 Jan 2024 21:18:04 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Thu, 11 Jan 2024 13:18:03 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v5 03/12] cxl/mbox: Add SET_FEATURE mailbox command Date: Thu, 11 Jan 2024 21:17:32 +0800 Message-ID: <20240111131741.1356-4-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com> References: <20240111131741.1356-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500004.china.huawei.com (7.191.163.9) To lhrpeml500006.china.huawei.com (7.191.161.198) From: Shiju Jose Add support for SET_FEATURE mailbox command. CXL spec 3.0 section 8.2.9.6 describes optional device specific features. CXL devices supports features with changeable attributes. The settings of a feature can be optionally modified using Set Feature command. Signed-off-by: Shiju Jose --- drivers/cxl/core/mbox.c | 14 ++++++++++++++ drivers/cxl/cxlmem.h | 27 +++++++++++++++++++++++++++ 2 files changed, 41 insertions(+) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index a50c874edebc..7f379cbb71a0 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -1348,6 +1348,20 @@ int cxl_get_feature(struct cxl_memdev_state *mds, } EXPORT_SYMBOL_NS_GPL(cxl_get_feature, CXL); +int cxl_set_feature(struct cxl_memdev_state *mds, void *feat_in, size_t size) +{ + struct cxl_mbox_cmd mbox_cmd; + + mbox_cmd = (struct cxl_mbox_cmd) { + .opcode = CXL_MBOX_OP_SET_FEATURE, + .size_in = size, + .payload_in = feat_in, + }; + + return cxl_internal_send_cmd(mds, &mbox_cmd); +} +EXPORT_SYMBOL_NS_GPL(cxl_set_feature, CXL); + int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr) { diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 92c1f2a44713..46131dcd0900 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -508,6 +508,7 @@ enum cxl_opcode { CXL_MBOX_OP_GET_LOG = 0x0401, CXL_MBOX_OP_GET_SUPPORTED_FEATURES = 0x0500, CXL_MBOX_OP_GET_FEATURE = 0x0501, + CXL_MBOX_OP_SET_FEATURE = 0x0502, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, @@ -817,6 +818,31 @@ struct cxl_mbox_get_feat_in { u8 selection; } __packed; +/* Set Feature CXL 3.0 Spec 8.2.9.6.3 */ +/* + * Set Feature input payload + * CXL rev 3.0 section 8.2.9.6.3; Table 8-81 + */ +/* Set Feature : Payload in flags */ +#define CXL_SET_FEAT_FLAG_DATA_TRANSFER_MASK GENMASK(2, 0) +enum cxl_set_feat_flag_data_transfer { + CXL_SET_FEAT_FLAG_FULL_DATA_TRANSFER = 0x0, + CXL_SET_FEAT_FLAG_INITIATE_DATA_TRANSFER = 0x1, + CXL_SET_FEAT_FLAG_CONTINUE_DATA_TRANSFER = 0x2, + CXL_SET_FEAT_FLAG_FINISH_DATA_TRANSFER = 0x3, + CXL_SET_FEAT_FLAG_ABORT_DATA_TRANSFER = 0x4, + CXL_SET_FEAT_FLAG_DATA_TRANSFER_MAX +}; +#define CXL_SET_FEAT_FLAG_MOD_VALUE_SAVED_ACROSS_RESET BIT(3) + +struct cxl_mbox_set_feat_in { + uuid_t uuid; + __le32 flags; + __le16 offset; + u8 version; + u8 rsvd[9]; +} __packed; + /* Get Poison List CXL 3.0 Spec 8.2.9.8.4.1 */ struct cxl_mbox_poison_in { __le64 offset; @@ -949,6 +975,7 @@ int 
cxl_get_supported_features(struct cxl_memdev_state *mds,
 			   void *feats_out);
 int cxl_get_feature(struct cxl_memdev_state *mds,
 		    struct cxl_mbox_get_feat_in *pi, void *feat_out);
+int cxl_set_feature(struct cxl_memdev_state *mds, void *feat_in, size_t size);
 int cxl_poison_state_init(struct cxl_memdev_state *mds);
 int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
 		       struct cxl_region *cxlr);

From patchwork Thu Jan 11 13:17:33 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 762729
Subject: [RFC PATCH v5 04/12] cxl/memscrub: Add CXL device patrol scrub control feature
Date: Thu, 11 Jan 2024 21:17:33 +0800
Message-ID: <20240111131741.1356-5-shiju.jose@huawei.com>
In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com>
References: <20240111131741.1356-1-shiju.jose@huawei.com>
X-Mailing-List: linux-acpi@vger.kernel.org

From: Shiju Jose

CXL spec 3.1 section 8.2.9.9.11.1 describes the device patrol scrub
control feature. Device patrol scrub proactively locates and corrects
errors on a regular cycle.

The patrol scrub control allows the requester to configure the patrol
scrub input parameters. The requester may specify the number of hours in
which a patrol scrub cycle must complete, provided it is not less than
the minimum cycle length the device is capable of. In addition, the
patrol scrub control allows the host to enable and disable the feature,
for example when the feature must be turned off for performance-aware
operations that require background operations to be disabled.
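The Set Feature call this patch issues boils down to the condensed
sketch below (illustrative only: example_enable_patrol_scrub is not in
the patch, the 12-hour cycle is an arbitrary value, and a real caller
must first validate the cycle against the device minimum, as
cxl_mem_ps_set_attrbs() in the patch does):

  /*
   * Hypothetical sketch: enable patrol scrub with a 12-hour cycle using
   * the payload layout this patch defines.
   */
  static int example_enable_patrol_scrub(struct cxl_memdev_state *mds)
  {
  	struct cxl_memdev_ps_set_feat_pi set_pi = {
  		.pi.uuid = cxl_patrol_scrub_uuid,
  		.pi.flags = CXL_SET_FEAT_FLAG_MOD_VALUE_SAVED_ACROSS_RESET |
  			    CXL_SET_FEAT_FLAG_FULL_DATA_TRANSFER,
  		.pi.version = CXL_MEMDEV_PS_SET_FEAT_VERSION,
  		/* byte 0: requested scrub cycle in hours */
  		.scrub_cycle_hr = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, 12),
  		/* flags bit 0: enable scrubbing */
  		.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, 1),
  	};

  	return cxl_set_feature(mds, &set_pi, sizeof(set_pi));
  }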
Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 drivers/cxl/Kconfig         |  17 +++
 drivers/cxl/core/Makefile   |   1 +
 drivers/cxl/core/memscrub.c | 266 ++++++++++++
 drivers/cxl/cxlmem.h        |   8 ++
 drivers/cxl/pci.c           |   5 +
 5 files changed, 297 insertions(+)
 create mode 100644 drivers/cxl/core/memscrub.c

diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 8ea1d340e438..67d88f9bf52b 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -154,4 +154,21 @@ config CXL_PMU
 	  monitoring units and provide standard perf based interfaces.
 
 	  If unsure say 'm'.
+
+config CXL_SCRUB
+	bool "CXL: Memory scrub feature"
+	depends on CXL_PCI
+	depends on CXL_MEM
+	help
+	  The CXL memory scrub control is an optional feature that allows the
+	  host to control the scrub configurations of CXL Type 3 devices, which
+	  support patrol scrub and/or DDR5 ECS (Error Check Scrub).
+
+	  Say 'y/n' to enable/disable the CXL memory scrub driver that will
+	  attach to CXL.mem devices for the memory scrub control feature. See
+	  sections 8.2.9.9.11.1 and 8.2.9.9.11.2 in the CXL 3.1 specification
+	  for a detailed description of CXL memory scrub control features.
+
+	  If unsure say 'n'.
+
 endif
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 1f66b5d4d935..99e3202f868f 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -15,3 +15,4 @@ cxl_core-y += hdm.o
 cxl_core-y += pmu.o
 cxl_core-$(CONFIG_TRACING) += trace.o
 cxl_core-$(CONFIG_CXL_REGION) += region.o
+cxl_core-$(CONFIG_CXL_SCRUB) += memscrub.o
diff --git a/drivers/cxl/core/memscrub.c b/drivers/cxl/core/memscrub.c
new file mode 100644
index 000000000000..e0d482b0bf3a
--- /dev/null
+++ b/drivers/cxl/core/memscrub.c
@@ -0,0 +1,266 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * cxl_memscrub.c - CXL memory scrub driver
+ *
+ * Copyright (c) 2023 HiSilicon Limited.
+ *
+ * - Provides functions to configure patrol scrub
+ *   feature of the CXL memory devices.
+ */ + +#define pr_fmt(fmt) "CXL_MEM_SCRUB: " fmt + +#include + +/* CXL memory scrub feature common definitions */ +#define CXL_SCRUB_MAX_ATTRB_RANGE_LENGTH 128 + +static int cxl_mem_get_supported_feature_entry(struct cxl_memdev *cxlmd, const uuid_t *feat_uuid, + struct cxl_mbox_supp_feat_entry *feat_entry_out) +{ + struct cxl_mbox_get_supp_feats_out *feats_out __free(kvfree) = NULL; + struct cxl_mbox_supp_feat_entry *feat_entry; + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + struct cxl_mbox_get_supp_feats_in pi; + int feat_index, count; + int nentries; + int ret; + + feat_index = 0; + pi.count = sizeof(struct cxl_mbox_get_supp_feats_out) + + sizeof(struct cxl_mbox_supp_feat_entry); + feats_out = kvmalloc(pi.count, GFP_KERNEL); + if (!feats_out) + return -ENOMEM; + + do { + pi.start_index = feat_index; + memset(feats_out, 0, pi.count); + ret = cxl_get_supported_features(mds, &pi, feats_out); + if (ret) + return ret; + + nentries = feats_out->entries; + if (!nentries) + break; + + /* Check CXL memdev supports the feature */ + feat_entry = (void *)feats_out->feat_entries; + for (count = 0; count < nentries; count++, feat_entry++) { + if (uuid_equal(&feat_entry->uuid, feat_uuid)) { + memcpy(feat_entry_out, feat_entry, sizeof(*feat_entry_out)); + return 0; + } + } + feat_index += nentries; + } while (nentries); + + return -ENOTSUPP; +} + +/* CXL memory patrol scrub control definitions */ +#define CXL_MEMDEV_PS_GET_FEAT_VERSION 0x01 +#define CXL_MEMDEV_PS_SET_FEAT_VERSION 0x01 + +static const uuid_t cxl_patrol_scrub_uuid = + UUID_INIT(0x96dad7d6, 0xfde8, 0x482b, 0xa7, 0x33, 0x75, 0x77, 0x4e, \ + 0x06, 0xdb, 0x8a); + +/* CXL memory patrol scrub control functions */ +struct cxl_patrol_scrub_context { + struct device *dev; + u16 get_feat_size; + u16 set_feat_size; + bool scrub_cycle_changeable; +}; + +/** + * struct cxl_memdev_ps_params - CXL memory patrol scrub parameter data structure. + * @enable: [IN] enable(1)/disable(0) patrol scrub. + * @scrub_cycle_changeable: [OUT] scrub cycle attribute of patrol scrub is changeable. + * @rate: [IN] Requested patrol scrub cycle in hours. + * [OUT] Current patrol scrub cycle in hours. + * @min_rate:[OUT] minimum patrol scrub cycle, in hours, supported. + * @rate_avail:[OUT] Supported patrol scrub cycle in hours. 
+ */ +struct cxl_memdev_ps_params { + bool enable; + bool scrub_cycle_changeable; + u16 rate; + u16 min_rate; + char rate_avail[CXL_SCRUB_MAX_ATTRB_RANGE_LENGTH]; +}; + +enum { + CXL_MEMDEV_PS_PARAM_ENABLE = 0, + CXL_MEMDEV_PS_PARAM_RATE, +}; + +#define CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK BIT(0) +#define CXL_MEMDEV_PS_SCRUB_CYCLE_REALTIME_REPORT_CAP_MASK BIT(1) +#define CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK GENMASK(7, 0) +#define CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK GENMASK(15, 8) +#define CXL_MEMDEV_PS_FLAG_ENABLED_MASK BIT(0) + +struct cxl_memdev_ps_feat_read_attrbs { + u8 scrub_cycle_cap; + __le16 scrub_cycle; + u8 scrub_flags; +} __packed; + +struct cxl_memdev_ps_set_feat_pi { + struct cxl_mbox_set_feat_in pi; + u8 scrub_cycle_hr; + u8 scrub_flags; +} __packed; + +static int cxl_mem_ps_get_attrbs(struct device *dev, + struct cxl_memdev_ps_params *params) +{ + struct cxl_memdev_ps_feat_read_attrbs *rd_attrbs __free(kvfree) = NULL; + struct cxl_mbox_get_feat_in pi = { + .uuid = cxl_patrol_scrub_uuid, + .offset = 0, + .count = sizeof(struct cxl_memdev_ps_feat_read_attrbs), + .selection = CXL_GET_FEAT_SEL_CURRENT_VALUE, + }; + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + int ret; + + if (!mds) + return -EFAULT; + + rd_attrbs = kvmalloc(pi.count, GFP_KERNEL); + if (!rd_attrbs) + return -ENOMEM; + + ret = cxl_get_feature(mds, &pi, rd_attrbs); + if (ret) { + params->enable = 0; + params->rate = 0; + snprintf(params->rate_avail, CXL_SCRUB_MAX_ATTRB_RANGE_LENGTH, + "Unavailable"); + return ret; + } + params->scrub_cycle_changeable = FIELD_GET(CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK, + rd_attrbs->scrub_cycle_cap); + params->enable = FIELD_GET(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, + rd_attrbs->scrub_flags); + params->rate = FIELD_GET(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, + rd_attrbs->scrub_cycle); + params->min_rate = FIELD_GET(CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK, + rd_attrbs->scrub_cycle); + snprintf(params->rate_avail, CXL_SCRUB_MAX_ATTRB_RANGE_LENGTH, + "Minimum scrub cycle = %d hour", params->min_rate); + + return 0; +} + +static int __maybe_unused +cxl_mem_ps_set_attrbs(struct device *dev, struct cxl_memdev_ps_params *params, + u8 param_type) +{ + struct cxl_memdev_ps_set_feat_pi set_pi = { + .pi.uuid = cxl_patrol_scrub_uuid, + .pi.flags = CXL_SET_FEAT_FLAG_MOD_VALUE_SAVED_ACROSS_RESET | + CXL_SET_FEAT_FLAG_FULL_DATA_TRANSFER, + .pi.offset = 0, + .pi.version = CXL_MEMDEV_PS_SET_FEAT_VERSION, + }; + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + struct cxl_memdev_ps_params rd_params; + int ret; + + if (!mds) + return -EFAULT; + + ret = cxl_mem_ps_get_attrbs(dev, &rd_params); + if (ret) { + dev_err(dev, "Get cxlmemdev patrol scrub params fail ret=%d\n", + ret); + return ret; + } + + switch (param_type) { + case CXL_MEMDEV_PS_PARAM_ENABLE: + set_pi.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, + params->enable); + set_pi.scrub_cycle_hr = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, + rd_params.rate); + break; + case CXL_MEMDEV_PS_PARAM_RATE: + if (params->rate < rd_params.min_rate) { + dev_err(dev, "Invalid CXL patrol scrub cycle(%d) to set\n", + params->rate); + dev_err(dev, "Minimum supported CXL patrol scrub cycle in hour %d\n", + params->min_rate); + return -EINVAL; + } + set_pi.scrub_cycle_hr = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, + params->rate); + 
set_pi.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, + rd_params.enable); + break; + default: + dev_err(dev, "Invalid CXL patrol scrub parameter to set\n"); + return -EINVAL; + } + + ret = cxl_set_feature(mds, &set_pi, sizeof(set_pi)); + if (ret) { + dev_err(dev, "CXL patrol scrub set feature fail ret=%d\n", + ret); + return ret; + } + + /* Verify attribute set successfully */ + if (param_type == CXL_MEMDEV_PS_PARAM_RATE) { + ret = cxl_mem_ps_get_attrbs(dev, &rd_params); + if (ret) { + dev_err(dev, "Get cxlmemdev patrol scrub params fail ret=%d\n", ret); + return ret; + } + if (rd_params.rate != params->rate) + return -EFAULT; + } + + return 0; +} + +int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) +{ + struct cxl_patrol_scrub_context *cxl_ps_ctx; + struct cxl_mbox_supp_feat_entry feat_entry; + struct cxl_memdev_ps_params params; + int ret; + + ret = cxl_mem_get_supported_feature_entry(cxlmd, &cxl_patrol_scrub_uuid, + &feat_entry); + if (ret < 0) + return ret; + + if (!(feat_entry.attrb_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) + return -ENOTSUPP; + + cxl_ps_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ps_ctx), GFP_KERNEL); + if (!cxl_ps_ctx) + return -ENOMEM; + + cxl_ps_ctx->get_feat_size = feat_entry.get_feat_size; + cxl_ps_ctx->set_feat_size = feat_entry.set_feat_size; + ret = cxl_mem_ps_get_attrbs(&cxlmd->dev, ¶ms); + if (ret) { + dev_err(&cxlmd->dev, "Get CXL patrol scrub params fail ret=%d\n", + ret); + return ret; + } + cxl_ps_ctx->scrub_cycle_changeable = params.scrub_cycle_changeable; + + return 0; +} +EXPORT_SYMBOL_NS_GPL(cxl_mem_patrol_scrub_init, CXL); diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 46131dcd0900..25c46e72af16 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -983,6 +983,14 @@ int cxl_trigger_poison_list(struct cxl_memdev *cxlmd); int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa); int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa); +/* cxl memory scrub functions */ +#ifdef CONFIG_CXL_SCRUB +int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd); +#else +static inline int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) +{ return -ENOTSUPP; } +#endif + #ifdef CONFIG_CXL_SUSPEND void cxl_mem_active_inc(void); void cxl_mem_active_dec(void); diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 0155fb66b580..acc337b8c365 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -881,6 +881,11 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; + /* + * Initialize optional CXL scrub features + */ + cxl_mem_patrol_scrub_init(cxlmd); + rc = devm_cxl_sanitize_setup_notifier(&pdev->dev, cxlmd); if (rc) return rc; From patchwork Thu Jan 11 13:17:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 761874 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A1C043A8DB; Thu, 11 Jan 2024 13:18:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.31]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4T9lXb5TZYz6D8gS; Thu, 11 Jan 2024 21:15:43 +0800 (CST) Received: from 
lhrpeml500006.china.huawei.com
Subject: [RFC PATCH v5 05/12] cxl/memscrub: Add CXL device ECS control feature
Date: Thu, 11 Jan 2024 21:17:34 +0800
Message-ID: <20240111131741.1356-6-shiju.jose@huawei.com>
In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com>
References: <20240111131741.1356-1-shiju.jose@huawei.com>
X-Mailing-List: linux-acpi@vger.kernel.org

From: Shiju Jose

CXL spec 3.1 section 8.2.9.9.11.2 describes the DDR5 Error Check Scrub
(ECS) control feature.

Error Check Scrub (ECS) is a feature defined in the JEDEC DDR5 SDRAM
specification (JESD79-5). It allows the DRAM to internally read, correct
single-bit errors, and write back corrected data bits to the DRAM array,
while providing transparency to error counts.

The ECS control feature allows the requester to configure the ECS input
configurations during system boot or at run-time. It allows the requester
to change the log entry type, to change the ECS threshold count (provided
the request is within the limits specified in the DDR5 mode registers),
to switch between codeword mode and row count mode, and to reset the ECS
counter.

Open Question: Is cxl_mem_ecs_init() invoked in the right function in
cxl/core/region.c?

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 drivers/cxl/core/memscrub.c | 303 +++++++++++++++++++++++++++++++++++-
 drivers/cxl/core/region.c   |   1 +
 drivers/cxl/cxlmem.h        |   3 +
 3 files changed, 306 insertions(+), 1 deletion(-)

diff --git a/drivers/cxl/core/memscrub.c b/drivers/cxl/core/memscrub.c
index e0d482b0bf3a..e7741e2fdbdb 100644
--- a/drivers/cxl/core/memscrub.c
+++ b/drivers/cxl/core/memscrub.c
@@ -5,7 +5,7 @@
  * Copyright (c) 2023 HiSilicon Limited.
  *
  * - Provides functions to configure patrol scrub
- *   feature of the CXL memory devices.
+ *   and DDR5 ECS features of the CXL memory devices.
  */
 
 #define pr_fmt(fmt)	"CXL_MEM_SCRUB: " fmt
@@ -264,3 +264,304 @@
 	return 0;
 }
 EXPORT_SYMBOL_NS_GPL(cxl_mem_patrol_scrub_init, CXL);
+
+/* CXL DDR5 ECS control definitions */
+#define CXL_MEMDEV_ECS_GET_FEAT_VERSION	0x01
+#define CXL_MEMDEV_ECS_SET_FEAT_VERSION	0x01
+
+static const uuid_t cxl_ecs_uuid =
+	UUID_INIT(0xe5b13f22, 0x2328, 0x4a14, 0xb8, 0xba, 0xb9, 0x69, 0x1e, \
+		  0x89, 0x33, 0x86);
+
+struct cxl_ecs_context {
+	struct device *dev;
+	u16 nregions;
+	int region_id;
+	u16 get_feat_size;
+	u16 set_feat_size;
+};
+
+/**
+ * struct cxl_memdev_ecs_params - CXL memory DDR5 ECS parameter data structure.
+ * @log_entry_type:	ECS log entry type, per DRAM or per memory media FRU.
+ * @threshold:		ECS threshold count per GB of memory cells.
+ * @mode:		codeword/row count mode
+ *			0 : ECS counts rows with errors
+ *			1 : ECS counts codewords with errors
+ * @reset_counter:	[IN] reset ECS counter to default value.
+ */ +struct cxl_memdev_ecs_params { + u8 log_entry_type; + u16 threshold; + u8 mode; + bool reset_counter; +}; + +enum { + CXL_MEMDEV_ECS_PARAM_LOG_ENTRY_TYPE = 0, + CXL_MEMDEV_ECS_PARAM_THRESHOLD, + CXL_MEMDEV_ECS_PARAM_MODE, + CXL_MEMDEV_ECS_PARAM_RESET_COUNTER, +}; + +#define CXL_MEMDEV_ECS_LOG_ENTRY_TYPE_MASK GENMASK(1, 0) +#define CXL_MEMDEV_ECS_REALTIME_REPORT_CAP_MASK BIT(0) +#define CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK GENMASK(2, 0) +#define CXL_MEMDEV_ECS_MODE_MASK BIT(3) +#define CXL_MEMDEV_ECS_RESET_COUNTER_MASK BIT(4) + +static const u16 ecs_supp_threshold[] = { 0, 0, 0, 256, 1024, 4096 }; + +enum { + ECS_LOG_ENTRY_TYPE_DRAM = 0x0, + ECS_LOG_ENTRY_TYPE_MEM_MEDIA_FRU = 0x1, +}; + +enum { + ECS_THRESHOLD_256 = 3, + ECS_THRESHOLD_1024 = 4, + ECS_THRESHOLD_4096 = 5, +}; + +enum { + ECS_MODE_COUNTS_ROWS = 0, + ECS_MODE_COUNTS_CODEWORDS = 1, +}; + +struct cxl_memdev_ecs_feat_read_attrbs { + u8 ecs_log_cap; + u8 ecs_cap; + __le16 ecs_config; + u8 ecs_flags; +} __packed; + +struct cxl_memdev_ecs_set_feat_pi { + struct cxl_mbox_set_feat_in pi; + struct cxl_memdev_ecs_feat_wr_attrbs { + u8 ecs_log_cap; + __le16 ecs_config; + } __packed wr_attrbs[]; +} __packed; + +/* CXL DDR5 ECS control functions */ +static int cxl_mem_ecs_get_attrbs(struct device *dev, int fru_id, + struct cxl_memdev_ecs_params *params) +{ + struct cxl_memdev_ecs_feat_read_attrbs *rd_attrbs __free(kvfree) = NULL; + struct cxl_memdev *cxlmd = to_cxl_memdev(dev->parent); + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + struct cxl_mbox_get_feat_in pi = { + .uuid = cxl_ecs_uuid, + .offset = 0, + .selection = CXL_GET_FEAT_SEL_CURRENT_VALUE, + }; + struct cxl_ecs_context *cxl_ecs_ctx; + u8 threshold_index; + int ret; + + if (!mds) + return -EFAULT; + cxl_ecs_ctx = dev_get_drvdata(dev); + + pi.count = cxl_ecs_ctx->get_feat_size; + rd_attrbs = kvmalloc(pi.count, GFP_KERNEL); + if (!rd_attrbs) + return -ENOMEM; + + ret = cxl_get_feature(mds, &pi, rd_attrbs); + if (ret) { + params->log_entry_type = 0; + params->threshold = 0; + params->mode = 0; + return ret; + } + params->log_entry_type = FIELD_GET(CXL_MEMDEV_ECS_LOG_ENTRY_TYPE_MASK, + rd_attrbs[fru_id].ecs_log_cap); + threshold_index = FIELD_GET(CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK, + rd_attrbs[fru_id].ecs_config); + params->threshold = ecs_supp_threshold[threshold_index]; + params->mode = FIELD_GET(CXL_MEMDEV_ECS_MODE_MASK, + rd_attrbs[fru_id].ecs_config); + + return 0; +} + +static int __maybe_unused +cxl_mem_ecs_set_attrbs(struct device *dev, int fru_id, + struct cxl_memdev_ecs_params *params, u8 param_type) +{ + struct cxl_memdev_ecs_feat_read_attrbs *rd_attrbs __free(kvfree) = NULL; + struct cxl_memdev_ecs_set_feat_pi *set_pi __free(kvfree) = NULL; + struct cxl_memdev *cxlmd = to_cxl_memdev(dev->parent); + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + struct cxl_mbox_get_feat_in pi = { + .uuid = cxl_ecs_uuid, + .offset = 0, + .selection = CXL_GET_FEAT_SEL_CURRENT_VALUE, + }; + struct cxl_memdev_ecs_feat_wr_attrbs *wr_attrbs; + struct cxl_memdev_ecs_params rd_params; + struct cxl_ecs_context *cxl_ecs_ctx; + u16 nmedia_frus, count; + u32 set_pi_size; + int ret; + + if (!mds) + return -EFAULT; + + cxl_ecs_ctx = dev_get_drvdata(dev); + nmedia_frus = cxl_ecs_ctx->nregions; + + rd_attrbs = kvmalloc(cxl_ecs_ctx->get_feat_size, GFP_KERNEL); + if (!rd_attrbs) + return -ENOMEM; + + pi.count = cxl_ecs_ctx->get_feat_size; + ret = cxl_get_feature(mds, &pi, 
rd_attrbs); + if (ret) + return ret; + set_pi_size = sizeof(struct cxl_mbox_set_feat_in) + + cxl_ecs_ctx->set_feat_size; + set_pi = kvmalloc(set_pi_size, GFP_KERNEL); + if (!set_pi) + return -ENOMEM; + + set_pi->pi.uuid = cxl_ecs_uuid; + set_pi->pi.flags = CXL_SET_FEAT_FLAG_MOD_VALUE_SAVED_ACROSS_RESET | + CXL_SET_FEAT_FLAG_FULL_DATA_TRANSFER; + set_pi->pi.offset = 0; + set_pi->pi.version = CXL_MEMDEV_ECS_SET_FEAT_VERSION; + /* Fill writable attributes from the current attributes read for all the media FRUs */ + wr_attrbs = set_pi->wr_attrbs; + for (count = 0; count < nmedia_frus; count++) { + wr_attrbs[count].ecs_log_cap = rd_attrbs[count].ecs_log_cap; + wr_attrbs[count].ecs_config = rd_attrbs[count].ecs_config; + } + + /* Fill attribute to be set for the media FRU */ + switch (param_type) { + case CXL_MEMDEV_ECS_PARAM_LOG_ENTRY_TYPE: + if (params->log_entry_type != ECS_LOG_ENTRY_TYPE_DRAM && + params->log_entry_type != ECS_LOG_ENTRY_TYPE_MEM_MEDIA_FRU) { + dev_err(dev->parent, + "Invalid CXL ECS scrub log entry type(%d) to set\n", + params->log_entry_type); + dev_err(dev->parent, + "Log Entry Type 0: per DRAM 1: per Memory Media FRU\n"); + return -EINVAL; + } + wr_attrbs[fru_id].ecs_log_cap = FIELD_PREP(CXL_MEMDEV_ECS_LOG_ENTRY_TYPE_MASK, + params->log_entry_type); + break; + case CXL_MEMDEV_ECS_PARAM_THRESHOLD: + wr_attrbs[fru_id].ecs_config &= ~CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK; + switch (params->threshold) { + case 256: + wr_attrbs[fru_id].ecs_config |= FIELD_PREP( + CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK, + ECS_THRESHOLD_256); + break; + case 1024: + wr_attrbs[fru_id].ecs_config |= FIELD_PREP( + CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK, + ECS_THRESHOLD_1024); + break; + case 4096: + wr_attrbs[fru_id].ecs_config |= FIELD_PREP( + CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK, + ECS_THRESHOLD_4096); + break; + default: + dev_err(dev->parent, + "Invalid CXL ECS scrub threshold count(%d) to set\n", + params->threshold); + dev_err(dev->parent, + "Supported scrub threshold count: 256,1024,4096\n"); + return -EINVAL; + } + break; + case CXL_MEMDEV_ECS_PARAM_MODE: + if (params->mode != ECS_MODE_COUNTS_ROWS && + params->mode != ECS_MODE_COUNTS_CODEWORDS) { + dev_err(dev->parent, + "Invalid CXL ECS scrub mode(%d) to set\n", + params->mode); + dev_err(dev->parent, + "Mode 0: ECS counts rows with errors" + " 1: ECS counts codewords with errors\n"); + return -EINVAL; + } + wr_attrbs[fru_id].ecs_config &= ~CXL_MEMDEV_ECS_MODE_MASK; + wr_attrbs[fru_id].ecs_config |= FIELD_PREP(CXL_MEMDEV_ECS_MODE_MASK, + params->mode); + break; + case CXL_MEMDEV_ECS_PARAM_RESET_COUNTER: + wr_attrbs[fru_id].ecs_config &= ~CXL_MEMDEV_ECS_RESET_COUNTER_MASK; + wr_attrbs[fru_id].ecs_config |= FIELD_PREP(CXL_MEMDEV_ECS_RESET_COUNTER_MASK, + params->reset_counter); + break; + default: + dev_err(dev->parent, "Invalid CXL ECS parameter to set\n"); + return -EINVAL; + } + ret = cxl_set_feature(mds, set_pi, set_pi_size); + if (ret) { + dev_err(dev->parent, "CXL ECS set feature fail ret=%d\n", ret); + return ret; + } + + /* Verify attribute is set successfully */ + ret = cxl_mem_ecs_get_attrbs(dev, fru_id, &rd_params); + if (ret) { + dev_err(dev->parent, "Get cxlmemdev ECS params fail ret=%d\n", ret); + return ret; + } + switch (param_type) { + case CXL_MEMDEV_ECS_PARAM_LOG_ENTRY_TYPE: + if (rd_params.log_entry_type != params->log_entry_type) + return -EFAULT; + break; + case CXL_MEMDEV_ECS_PARAM_THRESHOLD: + if (rd_params.threshold != params->threshold) + return -EFAULT; + break; + case CXL_MEMDEV_ECS_PARAM_MODE: + if (rd_params.mode != 
params->mode) + return -EFAULT; + break; + } + + return 0; +} + +int cxl_mem_ecs_init(struct cxl_memdev *cxlmd, int region_id) +{ + struct cxl_mbox_supp_feat_entry feat_entry; + struct cxl_ecs_context *cxl_ecs_ctx; + int nmedia_frus; + int ret; + + ret = cxl_mem_get_supported_feature_entry(cxlmd, &cxl_ecs_uuid, &feat_entry); + if (ret < 0) + return ret; + + if (!(feat_entry.attrb_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) + return -ENOTSUPP; + nmedia_frus = feat_entry.get_feat_size/ + sizeof(struct cxl_memdev_ecs_feat_read_attrbs); + if (nmedia_frus) { + cxl_ecs_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ecs_ctx), GFP_KERNEL); + if (!cxl_ecs_ctx) + return -ENOMEM; + + cxl_ecs_ctx->nregions = nmedia_frus; + cxl_ecs_ctx->get_feat_size = feat_entry.get_feat_size; + cxl_ecs_ctx->set_feat_size = feat_entry.set_feat_size; + cxl_ecs_ctx->region_id = region_id; + } + + return 0; +} +EXPORT_SYMBOL_NS_GPL(cxl_mem_ecs_init, CXL); diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index 3e817a6f94c6..ca71ad403d62 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -2912,6 +2912,7 @@ int cxl_add_to_region(struct cxl_port *root, struct cxl_endpoint_decoder *cxled) dev_err(&cxlr->dev, "failed to enable, range: %pr\n", p->res); } + cxl_mem_ecs_init(cxlmd, atomic_read(&cxlrd->region_id)); put_device(region_dev); out: diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 25c46e72af16..c255063dd795 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -986,9 +986,12 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa); /* cxl memory scrub functions */ #ifdef CONFIG_CXL_SCRUB int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd); +int cxl_mem_ecs_init(struct cxl_memdev *cxlmd, int region_id); #else static inline int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) { return -ENOTSUPP; } +static inline int cxl_mem_ecs_init(struct cxl_memdev *cxlmd, int region_id) +{ return -ENOTSUPP; } #endif #ifdef CONFIG_CXL_SUSPEND From patchwork Thu Jan 11 13:17:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 762728 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B53C13C49B; Thu, 11 Jan 2024 13:18:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.31]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4T9lXc5j7Xz6D8hH; Thu, 11 Jan 2024 21:15:44 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) by mail.maildlp.com (Postfix) with ESMTPS id 6A04A140B33; Thu, 11 Jan 2024 21:18:07 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Thu, 11 Jan 2024 13:18:06 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v5 06/12] memory: scrub: Add scrub subsystem driver supports configuring memory scrubs in the system Date: Thu, 11 Jan 2024 21:17:35 +0800 Message-ID: 
<20240111131741.1356-7-shiju.jose@huawei.com>
In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com>
References: <20240111131741.1356-1-shiju.jose@huawei.com>
X-Mailing-List: linux-acpi@vger.kernel.org

From: Shiju Jose

Add a scrub driver that supports configuring the memory scrubbers in the
system. The scrub driver provides the interface for registering scrub
devices and exposes the sysfs scrub control attributes to user space in
/sys/class/scrub/scrubX/regionN/.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 .../ABI/testing/sysfs-class-scrub-configure |  91 +++++
 drivers/memory/Kconfig                      |   1 +
 drivers/memory/Makefile                     |   1 +
 drivers/memory/scrub/Kconfig                |  11 +
 drivers/memory/scrub/Makefile               |   6 +
 drivers/memory/scrub/memory-scrub.c         | 367 ++++++++++++++++++
 include/memory/memory-scrub.h               |  78 ++++
 7 files changed, 555 insertions(+)
 create mode 100644 Documentation/ABI/testing/sysfs-class-scrub-configure
 create mode 100644 drivers/memory/scrub/Kconfig
 create mode 100644 drivers/memory/scrub/Makefile
 create mode 100755 drivers/memory/scrub/memory-scrub.c
 create mode 100755 include/memory/memory-scrub.h

diff --git a/Documentation/ABI/testing/sysfs-class-scrub-configure b/Documentation/ABI/testing/sysfs-class-scrub-configure
new file mode 100644
index 000000000000..d2d422b667cf
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-class-scrub-configure
@@ -0,0 +1,91 @@
+What:		/sys/class/scrub/
+Date:		January 2024
+KernelVersion:	6.8
+Contact:	linux-kernel@vger.kernel.org
+Description:
+		The scrub/ class subdirectory belongs to the
+		scrubber subsystem.
+
+What:		/sys/class/scrub/scrubX/
+Date:		January 2024
+KernelVersion:	6.8
+Contact:	linux-kernel@vger.kernel.org
+Description:
+		The /sys/class/scrub/scrub{0,1,2,3,...} directories
+		correspond to each scrub device.
+
+What:		/sys/class/scrub/scrubX/name
+Date:		January 2024
+KernelVersion:	6.8
+Contact:	linux-kernel@vger.kernel.org
+Description:
+		(RO) name of the memory scrub device
+
+What:		/sys/class/scrub/scrubX/regionN/
+Date:		January 2024
+KernelVersion:	6.8
+Contact:	linux-kernel@vger.kernel.org
+Description:
+		The /sys/class/scrub/scrubX/region{0,1,2,3,...}
+		directories correspond to each scrub region under a
+		scrub device. A scrub region is a physical address range
+		for which scrubbing may be separately controlled. Regions
+		may overlap, in which case the scrubbing rate of the
+		overlapped memory will be at least that expected from
+		each overlapping region.
+
+What:		/sys/class/scrub/scrubX/regionN/addr_base
+Date:		January 2024
+KernelVersion:	6.8
+Contact:	linux-kernel@vger.kernel.org
+Description:
+		(RW) The base of the address range of the memory region
+		to be scrubbed.
+		On reading, returns the base of the memory region for
+		the actual address range (the platform calculates the
+		nearest patrol scrub boundary address from which it can
+		start scrubbing).
+
+What:		/sys/class/scrub/scrubX/regionN/addr_size
+Date:		January 2024
+KernelVersion:	6.8
+Contact:	linux-kernel@vger.kernel.org
+Description:
+		(RW) The size of the address range to be scrubbed.
+		On reading, returns the size of the memory region for
+		the actual address range.
+
+What:		/sys/class/scrub/scrubX/regionN/enable
+Date:		January 2024
+KernelVersion:	6.8
+Contact:	linux-kernel@vger.kernel.org
+Description:
+		(WO) Enable/disable scrubbing of the memory region.
+		1 - enable the memory scrub.
+		0 - disable the memory scrub.
+
+What:		/sys/class/scrub/scrubX/regionN/enable_background_scrub
+Date:		January 2024
+KernelVersion:	6.8
+Contact:	linux-kernel@vger.kernel.org
+Description:
+		(WO) Enable/disable background scrubbing if supported.
+		1 - enable background scrubbing.
+		0 - disable background scrubbing.
+
+What:		/sys/class/scrub/scrubX/regionN/rate_available
+Date:		January 2024
+KernelVersion:	6.8
+Contact:	linux-kernel@vger.kernel.org
+Description:
+		(RO) Supported range for the scrub rate by the scrubber
+		for a memory region.
+		The unit of the scrub rate varies depending on the
+		scrubber.
+
+What:		/sys/class/scrub/scrubX/regionN/rate
+Date:		January 2024
+KernelVersion:	6.8
+Contact:	linux-kernel@vger.kernel.org
+Description:
+		(RW) The scrub rate for the specified memory region;
+		it must be within the range supported by the scrubber.
+		The unit of the scrub rate varies depending on the
+		scrubber.
diff --git a/drivers/memory/Kconfig b/drivers/memory/Kconfig
index 8efdd1f97139..d2e015c09d83 100644
--- a/drivers/memory/Kconfig
+++ b/drivers/memory/Kconfig
@@ -227,5 +227,6 @@ config STM32_FMC2_EBI
 
 source "drivers/memory/samsung/Kconfig"
 source "drivers/memory/tegra/Kconfig"
+source "drivers/memory/scrub/Kconfig"
 
 endif
diff --git a/drivers/memory/Makefile b/drivers/memory/Makefile
index d2e6ca9abbe0..4b37312cb342 100644
--- a/drivers/memory/Makefile
+++ b/drivers/memory/Makefile
@@ -27,6 +27,7 @@ obj-$(CONFIG_STM32_FMC2_EBI)	+= stm32-fmc2-ebi.o
 
 obj-$(CONFIG_SAMSUNG_MC)	+= samsung/
 obj-$(CONFIG_TEGRA_MC)		+= tegra/
+obj-$(CONFIG_SCRUB)		+= scrub/
 
 obj-$(CONFIG_TI_EMIF_SRAM)	+= ti-emif-sram.o
 obj-$(CONFIG_FPGA_DFL_EMIF)	+= dfl-emif.o
diff --git a/drivers/memory/scrub/Kconfig b/drivers/memory/scrub/Kconfig
new file mode 100644
index 000000000000..fa7d68f53a69
--- /dev/null
+++ b/drivers/memory/scrub/Kconfig
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Memory scrub driver configurations
+#
+
+config SCRUB
+	bool "Memory scrub driver"
+	help
+	  This option selects the memory scrub subsystem, which supports
+	  configuring the parameters of the underlying scrubbers in the
+	  system for DRAM memories.
diff --git a/drivers/memory/scrub/Makefile b/drivers/memory/scrub/Makefile
new file mode 100644
index 000000000000..1b677132ca13
--- /dev/null
+++ b/drivers/memory/scrub/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for memory scrub drivers
+#
+
+obj-$(CONFIG_SCRUB)	+= memory-scrub.o
diff --git a/drivers/memory/scrub/memory-scrub.c b/drivers/memory/scrub/memory-scrub.c
new file mode 100755
index 000000000000..02488bbf6392
--- /dev/null
+++ b/drivers/memory/scrub/memory-scrub.c
@@ -0,0 +1,367 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Memory scrub driver supports configuring
+ * the memory scrubbers.
+ *
+ * Copyright (c) 2023 HiSilicon Limited.
+ */ + +#define pr_fmt(fmt) "MEM SCRUB: " fmt + +#include +#include +#include +#include +#include +#include +#include + +/* memory scrubber config definitions */ +#define SCRUB_ID_PREFIX "scrub" +#define SCRUB_ID_FORMAT SCRUB_ID_PREFIX "%d" +#define SCRUB_DEV_MAX_NAME_LENGTH 128 +#define SCRUB_MAX_SYSFS_ATTR_NAME_LENGTH 64 + +static DEFINE_IDA(scrub_ida); + +struct scrub_device { + char name[SCRUB_DEV_MAX_NAME_LENGTH]; + int id; + struct device dev; + char region_name[SCRUB_MAX_SYSFS_ATTR_NAME_LENGTH]; + int region_id; + struct attribute_group group; + const struct attribute_group *groups[2]; + const struct scrub_ops *ops; +}; + +#define to_scrub_device(d) container_of(d, struct scrub_device, dev) + +static ssize_t name_show(struct device *dev, struct device_attribute *attr, char *buf) +{ + return sprintf(buf, "%s\n", to_scrub_device(dev)->name); +} +static DEVICE_ATTR_RO(name); + +static struct attribute *scrub_dev_attrs[] = { + &dev_attr_name.attr, + NULL +}; + +static umode_t scrub_dev_attr_is_visible(struct kobject *kobj, + struct attribute *attr, int n) +{ + if (attr != &dev_attr_name.attr) + return 0; + + return attr->mode; +} + +static const struct attribute_group scrub_dev_attr_group = { + .attrs = scrub_dev_attrs, + .is_visible = scrub_dev_attr_is_visible, +}; + +static const struct attribute_group *scrub_dev_attr_groups[] = { + &scrub_dev_attr_group, + NULL +}; + +static void scrub_dev_release(struct device *dev) +{ + struct scrub_device *scrub_dev = to_scrub_device(dev); + + ida_free(&scrub_ida, scrub_dev->id); + kfree(scrub_dev); +} + +static struct class scrub_class = { + .name = "scrub", + .dev_groups = scrub_dev_attr_groups, + .dev_release = scrub_dev_release, +}; + +static umode_t scrub_attr_visible(struct kobject *kobj, + struct attribute *a, int attr_id) +{ + struct device *dev = kobj_to_dev(kobj); + struct scrub_device *scrub_dev = to_scrub_device(dev); + int region_id = scrub_dev->region_id; + + if (!scrub_dev->ops) + return 0; + + return scrub_dev->ops->is_visible(dev, attr_id, region_id); +} + +static ssize_t scrub_attr_show(struct device *dev, int attr_id, + char *buf) +{ + struct scrub_device *scrub_dev = to_scrub_device(dev); + int region_id = scrub_dev->region_id; + int ret; + u64 val; + + ret = scrub_dev->ops->read(dev, attr_id, region_id, &val); + if (ret < 0) + return ret; + + return sprintf(buf, "%lld\n", val); +} + +static ssize_t scrub_attr_show_hex(struct device *dev, int attr_id, + char *buf) +{ + struct scrub_device *scrub_dev = to_scrub_device(dev); + int region_id = scrub_dev->region_id; + int ret; + u64 val; + + ret = scrub_dev->ops->read(dev, attr_id, region_id, &val); + if (ret < 0) + return ret; + + return sprintf(buf, "0x%llx\n", val); +} + +static ssize_t scrub_attr_show_string(struct device *dev, int attr_id, + char *buf) +{ + struct scrub_device *scrub_dev = to_scrub_device(dev); + int region_id = scrub_dev->region_id; + int ret; + + ret = scrub_dev->ops->read_string(dev, attr_id, region_id, buf); + if (ret < 0) + return ret; + + return strlen(buf); +} + +static ssize_t scrub_attr_store(struct device *dev, int attr_id, + const char *buf, size_t count) +{ + struct scrub_device *scrub_dev = to_scrub_device(dev); + int region_id = scrub_dev->region_id; + long val; + int ret; + + ret = kstrtol(buf, 10, &val); + if (ret < 0) + return ret; + + ret = scrub_dev->ops->write(dev, attr_id, region_id, val); + if (ret < 0) + return ret; + + return count; +} + +static ssize_t scrub_attr_store_hex(struct device *dev, int attr_id, + const char *buf, size_t 
count) +{ + struct scrub_device *scrub_dev = to_scrub_device(dev); + int region_id = scrub_dev->region_id; + int ret; + u64 val; + + ret = kstrtou64(buf, 16, &val); + if (ret < 0) + return ret; + + ret = scrub_dev->ops->write(dev, attr_id, region_id, val); + if (ret < 0) + return ret; + + return count; +} + +static ssize_t show_scrub_attr(struct device *dev, char *buf, int attr_id) +{ + switch (attr_id) { + case scrub_addr_base: + case scrub_addr_size: + return scrub_attr_show_hex(dev, attr_id, buf); + case scrub_enable: + case scrub_rate: + return scrub_attr_show(dev, attr_id, buf); + case scrub_rate_available: + return scrub_attr_show_string(dev, attr_id, buf); + } + + return -ENOTSUPP; +} + +static ssize_t store_scrub_attr(struct device *dev, const char *buf, + size_t count, int attr_id) +{ + switch (attr_id) { + case scrub_addr_base: + case scrub_addr_size: + return scrub_attr_store_hex(dev, attr_id, buf, count); + case scrub_enable: + case scrub_enable_background_scrub: + case scrub_rate: + return scrub_attr_store(dev, attr_id, buf, count); + } + + return -ENOTSUPP; +} + +#define SCRUB_ATTR_RW(attrb) \ +static ssize_t attrb##_show(struct device *dev, \ + struct device_attribute *attr, char *buf) \ +{ \ + return show_scrub_attr(dev, buf, (scrub_##attrb)); \ +} \ +static ssize_t attrb##_store(struct device *dev, \ + struct device_attribute *attr, \ + const char *buf, size_t count) \ +{ \ + return store_scrub_attr(dev, buf, count, (scrub_##attrb));\ +} \ +static DEVICE_ATTR_RW(attrb) + +#define SCRUB_ATTR_RO(attrb) \ +static ssize_t attrb##_show(struct device *dev, \ + struct device_attribute *attr, char *buf) \ +{ \ + return show_scrub_attr(dev, buf, (scrub_##attrb)); \ +} \ +static DEVICE_ATTR_RO(attrb) + +#define SCRUB_ATTR_WO(attrb) \ +static ssize_t attrb##_store(struct device *dev, \ + struct device_attribute *attr, \ + const char *buf, size_t count) \ +{ \ + return store_scrub_attr(dev, buf, count, (scrub_##attrb));\ +} \ +static DEVICE_ATTR_WO(attrb) + +SCRUB_ATTR_RW(addr_base); +SCRUB_ATTR_RW(addr_size); +SCRUB_ATTR_RW(enable); +SCRUB_ATTR_RW(enable_background_scrub); +SCRUB_ATTR_RW(rate); +SCRUB_ATTR_RO(rate_available); + +static struct attribute *scrub_attrs[] = { + &dev_attr_addr_base.attr, + &dev_attr_addr_size.attr, + &dev_attr_enable.attr, + &dev_attr_enable_background_scrub.attr, + &dev_attr_rate.attr, + &dev_attr_rate_available.attr, + NULL, +}; + +static struct device * +scrub_device_register(struct device *dev, const char *name, void *drvdata, + const struct scrub_ops *ops, + int region_id, + struct attribute_group *attr_group) +{ + struct scrub_device *scrub_dev; + struct device *hdev; + int err; + + scrub_dev = kzalloc(sizeof(*scrub_dev), GFP_KERNEL); + if (!scrub_dev) + return ERR_PTR(-ENOMEM); + hdev = &scrub_dev->dev; + + scrub_dev->id = ida_alloc(&scrub_ida, GFP_KERNEL); + if (scrub_dev->id < 0) { + kfree(scrub_dev); + return ERR_PTR(-ENOMEM); + } + + snprintf((char *)scrub_dev->region_name, SCRUB_MAX_SYSFS_ATTR_NAME_LENGTH, + "region%d", region_id); + if (attr_group) { + attr_group->name = (char *)scrub_dev->region_name; + scrub_dev->groups[0] = attr_group; + scrub_dev->region_id = region_id; + } else { + scrub_dev->group.name = (char *)scrub_dev->region_name; + scrub_dev->group.attrs = scrub_attrs; + scrub_dev->group.is_visible = scrub_attr_visible; + scrub_dev->groups[0] = &scrub_dev->group; + scrub_dev->ops = ops; + scrub_dev->region_id = region_id; + } + + hdev->groups = scrub_dev->groups; + hdev->class = &scrub_class; + hdev->parent = dev; + 
dev_set_drvdata(hdev, drvdata); + dev_set_name(hdev, SCRUB_ID_FORMAT, scrub_dev->id); + snprintf(scrub_dev->name, SCRUB_DEV_MAX_NAME_LENGTH, "%s", name); + err = device_register(hdev); + if (err) { + put_device(hdev); + return ERR_PTR(err); + } + + return hdev; +} + +static void devm_scrub_release(void *dev) +{ + struct device *hdev = dev; + + device_unregister(hdev); +} + +/** + * devm_scrub_device_register - register hw scrubber device + * @dev: the parent device + * @name: hw scrubber name attribute + * @drvdata: driver data to attach to created device + * @ops: pointer to scrub_ops structure (optional) + * @region_id: region ID + * @attr_group: input attribute group (optional) + * + * Returns the pointer to the new device. The new device is automatically + * unregistered with the parent device. + */ +struct device * +devm_scrub_device_register(struct device *dev, const char *name, + void *drvdata, + const struct scrub_ops *ops, + int region_id, + struct attribute_group *attr_group) +{ + struct device *hdev; + int ret; + + if (!dev || !name) + return ERR_PTR(-EINVAL); + + hdev = scrub_device_register(dev, name, drvdata, ops, + region_id, attr_group); + if (IS_ERR(hdev)) + return hdev; + + ret = devm_add_action_or_reset(dev, devm_scrub_release, hdev); + if (ret) + return ERR_PTR(ret); + + return hdev; +} +EXPORT_SYMBOL_GPL(devm_scrub_device_register); + +static int __init memory_scrub_control_init(void) +{ + int err; + + err = class_register(&scrub_class); + if (err) { + pr_err("couldn't register memory scrub control sysfs class\n"); + return err; + } + + return 0; +} +subsys_initcall(memory_scrub_control_init); diff --git a/include/memory/memory-scrub.h b/include/memory/memory-scrub.h new file mode 100755 index 000000000000..3d7054e98b9a --- /dev/null +++ b/include/memory/memory-scrub.h @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Memory scrub controller driver support to configure + * the controls of the memory scrub and enable it. + * + * Copyright (c) 2023 HiSilicon Limited. + */ + +#ifndef __MEMORY_SCRUB_H +#define __MEMORY_SCRUB_H + +#include <linux/device.h> + +enum scrub_types { + scrub_common, + scrub_max, +}; + +enum scrub_attributes { + scrub_addr_base, + scrub_addr_size, + scrub_enable, + scrub_enable_background_scrub, + scrub_rate, + scrub_rate_available, + max_attrs, +}; + +/** + * struct scrub_ops - scrub device operations + * @is_visible: Callback to return attribute visibility. Mandatory. + * Parameters are: + * @dev: pointer to hardware scrub device + * @attr: scrub attribute + * @region_id: memory region id + * The function returns the file permissions. + * If the return value is 0, no attribute will be created. + * @read: Read callback for data attributes. Mandatory if readable + * data attributes are present. + * Parameters are: + * @dev: pointer to hardware scrub device + * @attr: scrub attribute + * @region_id: + * memory region id + * @val: pointer to returned value + * The function returns 0 on success or a negative error number. + * @read_string: Read callback for string attributes. Mandatory if string + * attributes are present. + * Parameters are: + * @dev: pointer to hardware scrub device + * @attr: scrub attribute + * @region_id: + * memory region id + * @buf: pointer to buffer to copy string + * The function returns 0 on success or a negative error number. + * @write: Write callback for data attributes. Mandatory if writeable + * data attributes are present. 
+ * Parameters are: + * @dev: pointer to hardware scrub device + * @attr: scrub attribute + * @region_id: + * memory region id + * @val: value to write + * The function returns 0 on success or a negative error number. + */ +struct scrub_ops { + umode_t (*is_visible)(struct device *dev, u32 attr, int region_id); + int (*read)(struct device *dev, u32 attr, int region_id, u64 *val); + int (*read_string)(struct device *dev, u32 attr, int region_id, char *buf); + int (*write)(struct device *dev, u32 attr, int region_id, u64 val); +}; + +struct device * +devm_scrub_device_register(struct device *dev, const char *name, + void *drvdata, const struct scrub_ops *ops, + int region_id, + struct attribute_group *attr_group); +#endif /* __MEMORY_SCRUB_H */ From patchwork Thu Jan 11 13:17:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 761873 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A12033D55E; Thu, 11 Jan 2024 13:18:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.231]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4T9lYC5HYcz6J6Zg; Thu, 11 Jan 2024 21:16:15 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) by mail.maildlp.com (Postfix) with ESMTPS id 5CD0F140CB1; Thu, 11 Jan 2024 21:18:08 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Thu, 11 Jan 2024 13:18:07 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v5 07/12] cxl/memscrub: Register CXL device patrol scrub with scrub configure driver Date: Thu, 11 Jan 2024 21:17:36 +0800 Message-ID: <20240111131741.1356-8-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com> References: <20240111131741.1356-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500004.china.huawei.com (7.191.163.9) To lhrpeml500006.china.huawei.com (7.191.161.198) From: Shiju Jose Register with the scrub configure driver to expose the sysfs attributes to the user for configuring the CXL device memory patrol scrub. Add the callback functions to support configuring the CXL memory device patrol scrub. 
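To make the expected usage concrete: a minimal, hypothetical client of the scrub interface could look like the sketch below. It is illustrative only and not part of this patch; the my_* names are invented, while struct scrub_ops, the scrub_* attribute IDs and devm_scrub_device_register() are the API added by the earlier memory-scrub patch.

#include <linux/device.h>
#include <linux/err.h>
#include <memory/memory-scrub.h>

/* Hypothetical hardware accessors, assumed to exist in the client driver. */
int my_hw_scrub_get(struct device *dev, u32 attr, u64 *val);
int my_hw_scrub_set(struct device *dev, u32 attr, u64 val);

static umode_t my_scrub_is_visible(struct device *dev, u32 attr, int region_id)
{
	switch (attr) {
	case scrub_enable:
	case scrub_rate:
		return 0644;
	default:
		return 0;
	}
}

static int my_scrub_read(struct device *dev, u32 attr, int region_id, u64 *val)
{
	/* @dev is the scrub child device; its parent is the real device. */
	return my_hw_scrub_get(dev->parent, attr, val);
}

static int my_scrub_write(struct device *dev, u32 attr, int region_id, u64 val)
{
	return my_hw_scrub_set(dev->parent, attr, val);
}

static const struct scrub_ops my_scrub_ops = {
	.is_visible = my_scrub_is_visible,
	.read = my_scrub_read,
	.write = my_scrub_write,
};

static int my_scrub_probe(struct device *dev)
{
	struct device *scrub_dev;

	scrub_dev = devm_scrub_device_register(dev, "my_scrub", NULL,
					       &my_scrub_ops, 0, NULL);
	return PTR_ERR_OR_ZERO(scrub_dev);
}

Passing a non-NULL ops with a NULL attr_group selects the generic per-region attribute group filtered through the client's is_visible() callback, which is exactly the path this patch takes for the CXL patrol scrub.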
Signed-off-by: Shiju Jose --- drivers/cxl/Kconfig | 6 ++ drivers/cxl/core/memscrub.c | 201 +++++++++++++++++++++++++++++++++++- 2 files changed, 204 insertions(+), 3 deletions(-) diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index 67d88f9bf52b..964b5f789770 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -159,11 +159,17 @@ config CXL_SCRUB bool "CXL: Memory scrub feature" depends on CXL_PCI depends on CXL_MEM + depends on SCRUB help The CXL memory scrub control is an optional feature that allows the host to control the scrub configurations of CXL Type 3 devices, which support patrol scrub and/or DDR5 ECS (Error Check Scrub). + Register with the scrub configure driver to expose sysfs attributes + to the user for configuring the CXL device memory patrol and DDR5 ECS + scrubs. Provides the interface functions to support configuring the + CXL memory device patrol and ECS scrubs. + Say 'y/n' to enable/disable the CXL memory scrub driver that will attach to CXL.mem devices for the memory scrub control feature. See sections 8.2.9.9.11.1 and 8.2.9.9.11.2 in the CXL 3.1 specification diff --git a/drivers/cxl/core/memscrub.c b/drivers/cxl/core/memscrub.c index e7741e2fdbdb..48fd02a4bfaf 100644 --- a/drivers/cxl/core/memscrub.c +++ b/drivers/cxl/core/memscrub.c @@ -6,14 +6,19 @@ * * - Provides functions to configure patrol scrub * and DDR5 ECS features of the CXL memory devices. + * - Registers with the scrub driver to expose + * the sysfs attributes to the user for configuring + * the memory patrol scrub and DDR5 ECS features. */ #define pr_fmt(fmt) "CXL_MEM_SCRUB: " fmt #include +#include <memory/memory-scrub.h> /* CXL memory scrub feature common definitions */ #define CXL_SCRUB_MAX_ATTRB_RANGE_LENGTH 128 +#define CXL_MEMDEV_MAX_NAME_LENGTH 128 static int cxl_mem_get_supported_feature_entry(struct cxl_memdev *cxlmd, const uuid_t *feat_uuid, struct cxl_mbox_supp_feat_entry *feat_entry_out) @@ -63,6 +68,8 @@ static int cxl_mem_get_supported_feature_entry(struct cxl_memdev *cxlmd, const u #define CXL_MEMDEV_PS_GET_FEAT_VERSION 0x01 #define CXL_MEMDEV_PS_SET_FEAT_VERSION 0x01 +#define CXL_PATROL_SCRUB "cxl_patrol_scrub" + static const uuid_t cxl_patrol_scrub_uuid = UUID_INIT(0x96dad7d6, 0xfde8, 0x482b, 0xa7, 0x33, 0x75, 0x77, 0x4e, \ 0x06, 0xdb, 0x8a); @@ -159,9 +166,8 @@ static int cxl_mem_ps_get_attrbs(struct device *dev, return 0; } -static int __maybe_unused -cxl_mem_ps_set_attrbs(struct device *dev, struct cxl_memdev_ps_params *params, - u8 param_type) +static int cxl_mem_ps_set_attrbs(struct device *dev, + struct cxl_memdev_ps_params *params, u8 param_type) { struct cxl_memdev_ps_set_feat_pi set_pi = { .pi.uuid = cxl_patrol_scrub_uuid, @@ -232,11 +238,192 @@ cxl_mem_ps_set_attrbs(struct device *dev, struct cxl_memdev_ps_params *params, return 0; } +static int cxl_mem_ps_enable_read(struct device *dev, u64 *val) +{ + struct cxl_memdev_ps_params params; + int ret; + + ret = cxl_mem_ps_get_attrbs(dev, &params); + if (ret) { + dev_err(dev, "Get CXL patrol scrub params fail ret=%d\n", ret); + return ret; + } + *val = params.enable; + + return 0; +} + +static int cxl_mem_ps_enable_write(struct device *dev, long val) +{ + struct cxl_memdev_ps_params params; + int ret; + + params.enable = val; + ret = cxl_mem_ps_set_attrbs(dev, &params, CXL_MEMDEV_PS_PARAM_ENABLE); + if (ret) { + dev_err(dev, "CXL patrol scrub enable fail, enable=%d ret=%d\n", + params.enable, ret); + return ret; + } + + return 0; +} + +static int cxl_mem_ps_rate_read(struct device *dev, u64 *val) +{ + struct cxl_memdev_ps_params params; + int ret; + + ret = 
cxl_mem_ps_get_attrbs(dev, &params); + if (ret) { + dev_err(dev, "Get CXL patrol scrub params fail ret=%d\n", ret); + return ret; + } + *val = params.rate; + + return 0; +} + +static int cxl_mem_ps_rate_write(struct device *dev, long val) +{ + struct cxl_memdev_ps_params params; + int ret; + + params.rate = val; + ret = cxl_mem_ps_set_attrbs(dev, &params, CXL_MEMDEV_PS_PARAM_RATE); + if (ret) { + dev_err(dev, "Set CXL patrol scrub params for rate fail ret=%d\n", ret); + return ret; + } + + return 0; +} + +static int cxl_mem_ps_rate_available_read(struct device *dev, char *buf) +{ + struct cxl_memdev_ps_params params; + int ret; + + ret = cxl_mem_ps_get_attrbs(dev, &params); + if (ret) { + dev_err(dev, "Get CXL patrol scrub params fail ret=%d\n", ret); + return ret; + } + + sysfs_emit(buf, "%s\n", params.rate_avail); + + return 0; +} + +/** + * cxl_mem_patrol_scrub_is_visible() - Callback to return attribute visibility + * @dev: Pointer to scrub device + * @attr_id: Scrub attribute + * @region_id: ID of the memory region + * + * Returns: file mode of the attribute, or 0 to hide it + */ +static umode_t cxl_mem_patrol_scrub_is_visible(struct device *dev, + u32 attr_id, int region_id) +{ + const struct cxl_patrol_scrub_context *cxl_ps_ctx = dev_get_drvdata(dev); + + if (attr_id == scrub_rate_available || + attr_id == scrub_rate) { + if (!cxl_ps_ctx->scrub_cycle_changeable) + return 0; + } + + switch (attr_id) { + case scrub_rate_available: + return 0444; + case scrub_enable: + case scrub_rate: + return 0644; + default: + return 0; + } +} + +/** + * cxl_mem_patrol_scrub_read() - Read callback for data attributes + * @dev: Pointer to scrub device + * @attr: Scrub attribute + * @region_id: ID of the memory region + * @val: Pointer to the returned data + * + * Returns: 0 on success, an error otherwise + */ +static int cxl_mem_patrol_scrub_read(struct device *dev, u32 attr, + int region_id, u64 *val) +{ + switch (attr) { + case scrub_enable: + return cxl_mem_ps_enable_read(dev->parent, val); + case scrub_rate: + return cxl_mem_ps_rate_read(dev->parent, val); + default: + return -ENOTSUPP; + } +} + +/** + * cxl_mem_patrol_scrub_write() - Write callback for data attributes + * @dev: Pointer to scrub device + * @attr: Scrub attribute + * @region_id: ID of the memory region + * @val: Value to write + * + * Returns: 0 on success, an error otherwise + */ +static int cxl_mem_patrol_scrub_write(struct device *dev, u32 attr, + int region_id, u64 val) +{ + switch (attr) { + case scrub_enable: + return cxl_mem_ps_enable_write(dev->parent, val); + case scrub_rate: + return cxl_mem_ps_rate_write(dev->parent, val); + default: + return -ENOTSUPP; + } +} + +/** + * cxl_mem_patrol_scrub_read_strings() - Read callback for string attributes + * @dev: Pointer to scrub device + * @attr: Scrub attribute + * @region_id: ID of the memory region + * @buf: Pointer to the buffer for copying returned string + * + * Returns: 0 on success, an error otherwise + */ +static int cxl_mem_patrol_scrub_read_strings(struct device *dev, u32 attr, + int region_id, char *buf) +{ + switch (attr) { + case scrub_rate_available: + return cxl_mem_ps_rate_available_read(dev->parent, buf); + default: + return -ENOTSUPP; + } +} + +static const struct scrub_ops cxl_ps_scrub_ops = { + .is_visible = cxl_mem_patrol_scrub_is_visible, + .read = cxl_mem_patrol_scrub_read, + .write = cxl_mem_patrol_scrub_write, + .read_string = cxl_mem_patrol_scrub_read_strings, +}; + int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) { + char scrub_name[CXL_MEMDEV_MAX_NAME_LENGTH]; struct 
cxl_patrol_scrub_context *cxl_ps_ctx; struct cxl_mbox_supp_feat_entry feat_entry; struct cxl_memdev_ps_params params; + struct device *cxl_scrub_dev; int ret; ret = cxl_mem_get_supported_feature_entry(cxlmd, &cxl_patrol_scrub_uuid, @@ -261,6 +448,14 @@ int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) } cxl_ps_ctx->scrub_cycle_changeable = params.scrub_cycle_changeable; + snprintf(scrub_name, sizeof(scrub_name), "%s_%s", + CXL_PATROL_SCRUB, dev_name(&cxlmd->dev)); + cxl_scrub_dev = devm_scrub_device_register(&cxlmd->dev, scrub_name, + cxl_ps_ctx, &cxl_ps_scrub_ops, + 0, NULL); + if (IS_ERR(cxl_scrub_dev)) + return PTR_ERR(cxl_scrub_dev); + return 0; } EXPORT_SYMBOL_NS_GPL(cxl_mem_patrol_scrub_init, CXL); From patchwork Thu Jan 11 13:17:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 762727 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9466F3D56F; Thu, 11 Jan 2024 13:18:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.31]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4T9lXN0bLXz6K8xn; Thu, 11 Jan 2024 21:15:32 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) by mail.maildlp.com (Postfix) with ESMTPS id 5C6D2140D37; Thu, 11 Jan 2024 21:18:09 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Thu, 11 Jan 2024 13:18:08 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v5 08/12] cxl/memscrub: Register CXL device ECS with scrub configure driver Date: Thu, 11 Jan 2024 21:17:37 +0800 Message-ID: <20240111131741.1356-9-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com> References: <20240111131741.1356-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500004.china.huawei.com (7.191.163.9) To lhrpeml500006.china.huawei.com (7.191.161.198) From: Shiju Jose Register with the scrub configure driver to expose the sysfs attributes to the user for configuring the CXL memory device's ECS feature. Add the static CXL ECS specific attributes to support configuring the CXL memory device ECS feature. 
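One note on the registration path used here, before the diff: unlike the patrol scrub patch, ECS passes its own attribute group and a NULL ops. A stripped-down, hypothetical sketch of that path follows; the my_* names and the placeholder value are invented, and only the register call and the group handling come from the scrub API.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/sysfs.h>
#include <memory/memory-scrub.h>

static ssize_t threshold_show(struct device *dev,
			      struct device_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%d\n", 256);	/* placeholder value */
}
static DEVICE_ATTR_RO(threshold);

static struct attribute *my_ecs_attrs[] = {
	&dev_attr_threshold.attr,
	NULL,
};

/* Must not be const: the scrub core writes the "region%d" group name. */
static struct attribute_group my_ecs_attr_group = {
	.attrs = my_ecs_attrs,
};

static int my_ecs_register(struct device *parent, void *ctx, int region_id)
{
	struct device *scrub_dev;

	scrub_dev = devm_scrub_device_register(parent, "my_ecs", ctx,
					       NULL, region_id,
					       &my_ecs_attr_group);
	return PTR_ERR_OR_ZERO(scrub_dev);
}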
Signed-off-by: Shiju Jose --- drivers/cxl/core/memscrub.c | 253 +++++++++++++++++++++++++++++++++++- 1 file changed, 250 insertions(+), 3 deletions(-) diff --git a/drivers/cxl/core/memscrub.c b/drivers/cxl/core/memscrub.c index 48fd02a4bfaf..e76b1be3876d 100644 --- a/drivers/cxl/core/memscrub.c +++ b/drivers/cxl/core/memscrub.c @@ -464,6 +464,8 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_patrol_scrub_init, CXL); #define CXL_MEMDEV_ECS_GET_FEAT_VERSION 0x01 #define CXL_MEMDEV_ECS_SET_FEAT_VERSION 0x01 +#define CXL_DDR5_ECS "cxl_ecs" + static const uuid_t cxl_ecs_uuid = UUID_INIT(0xe5b13f22, 0x2328, 0x4a14, 0xb8, 0xba, 0xb9, 0x69, 0x1e, \ 0x89, 0x33, 0x86); @@ -582,9 +584,8 @@ static int cxl_mem_ecs_get_attrbs(struct device *dev, int fru_id, return 0; } -static int __maybe_unused -cxl_mem_ecs_set_attrbs(struct device *dev, int fru_id, - struct cxl_memdev_ecs_params *params, u8 param_type) +static int cxl_mem_ecs_set_attrbs(struct device *dev, int fru_id, + struct cxl_memdev_ecs_params *params, u8 param_type) { struct cxl_memdev_ecs_feat_read_attrbs *rd_attrbs __free(kvfree) = NULL; struct cxl_memdev_ecs_set_feat_pi *set_pi __free(kvfree) = NULL; @@ -731,10 +732,247 @@ cxl_mem_ecs_set_attrbs(struct device *dev, int fru_id, return 0; } +static int cxl_mem_ecs_log_entry_type_write(struct device *dev, int region_id, long val) +{ + struct cxl_memdev_ecs_params params; + int ret; + + params.log_entry_type = val; + ret = cxl_mem_ecs_set_attrbs(dev, region_id, &params, + CXL_MEMDEV_ECS_PARAM_LOG_ENTRY_TYPE); + if (ret) { + dev_err(dev->parent, "Set CXL ECS params for log entry type fail ret=%d\n", + ret); + return ret; + } + + return 0; +} + +static int cxl_mem_ecs_threshold_write(struct device *dev, int region_id, long val) +{ + struct cxl_memdev_ecs_params params; + int ret; + + params.threshold = val; + ret = cxl_mem_ecs_set_attrbs(dev, region_id, &params, + CXL_MEMDEV_ECS_PARAM_THRESHOLD); + if (ret) { + dev_err(dev->parent, "Set CXL ECS params for threshold fail ret=%d\n", + ret); + return ret; + } + + return 0; +} + +static int cxl_mem_ecs_mode_write(struct device *dev, int region_id, long val) +{ + struct cxl_memdev_ecs_params params; + int ret; + + params.mode = val; + ret = cxl_mem_ecs_set_attrbs(dev, region_id, &params, + CXL_MEMDEV_ECS_PARAM_MODE); + if (ret) { + dev_err(dev->parent, "Set CXL ECS params for mode fail ret=%d\n", + ret); + return ret; + } + + return 0; +} + +static int cxl_mem_ecs_reset_counter_write(struct device *dev, int region_id, long val) +{ + struct cxl_memdev_ecs_params params; + int ret; + + params.reset_counter = val; + ret = cxl_mem_ecs_set_attrbs(dev, region_id, &params, + CXL_MEMDEV_ECS_PARAM_RESET_COUNTER); + if (ret) { + dev_err(dev->parent, "Set CXL ECS params for reset ECC counter fail ret=%d\n", + ret); + return ret; + } + + return 0; +} + +enum cxl_mem_ecs_scrub_attributes { + cxl_ecs_log_entry_type, + cxl_ecs_log_entry_type_per_dram, + cxl_ecs_log_entry_type_per_memory_media, + cxl_ecs_mode, + cxl_ecs_mode_counts_codewords, + cxl_ecs_mode_counts_rows, + cxl_ecs_reset, + cxl_ecs_threshold, + cxl_ecs_threshold_available, + cxl_ecs_max_attrs, +}; + +static ssize_t cxl_mem_ecs_show_scrub_attr(struct device *dev, char *buf, + int attr_id) +{ + struct cxl_ecs_context *cxl_ecs_ctx = dev_get_drvdata(dev); + int region_id = cxl_ecs_ctx->region_id; + struct cxl_memdev_ecs_params params; + int ret; + + if (attr_id == cxl_ecs_log_entry_type || + attr_id == cxl_ecs_log_entry_type_per_dram || + attr_id == cxl_ecs_log_entry_type_per_memory_media || + attr_id == cxl_ecs_mode || + attr_id == 
cxl_ecs_mode_counts_codewords || + attr_id == cxl_ecs_mode_counts_rows || + attr_id == cxl_ecs_threshold) { + ret = cxl_mem_ecs_get_attrbs(dev, region_id, &params); + if (ret) { + dev_err(dev->parent, "Get CXL ECS params fail ret=%d\n", ret); + return ret; + } + } + + switch (attr_id) { + case cxl_ecs_log_entry_type: + return sysfs_emit(buf, "%d\n", params.log_entry_type); + case cxl_ecs_log_entry_type_per_dram: + if (params.log_entry_type == ECS_LOG_ENTRY_TYPE_DRAM) + return sysfs_emit(buf, "1\n"); + else + return sysfs_emit(buf, "0\n"); + case cxl_ecs_log_entry_type_per_memory_media: + if (params.log_entry_type == ECS_LOG_ENTRY_TYPE_MEM_MEDIA_FRU) + return sysfs_emit(buf, "1\n"); + else + return sysfs_emit(buf, "0\n"); + case cxl_ecs_mode: + return sysfs_emit(buf, "%d\n", params.mode); + case cxl_ecs_mode_counts_codewords: + if (params.mode == ECS_MODE_COUNTS_CODEWORDS) + return sysfs_emit(buf, "1\n"); + else + return sysfs_emit(buf, "0\n"); + case cxl_ecs_mode_counts_rows: + if (params.mode == ECS_MODE_COUNTS_ROWS) + return sysfs_emit(buf, "1\n"); + else + return sysfs_emit(buf, "0\n"); + case cxl_ecs_threshold: + return sysfs_emit(buf, "%d\n", params.threshold); + case cxl_ecs_threshold_available: + return sysfs_emit(buf, "256,1024,4096\n"); + } + + return -ENOTSUPP; +} + +static ssize_t cxl_mem_ecs_store_scrub_attr(struct device *dev, const char *buf, + size_t count, int attr_id) +{ + struct cxl_ecs_context *cxl_ecs_ctx = dev_get_drvdata(dev); + int region_id = cxl_ecs_ctx->region_id; + long val; + int ret; + + ret = kstrtol(buf, 10, &val); + if (ret < 0) + return ret; + + switch (attr_id) { + case cxl_ecs_log_entry_type: + ret = cxl_mem_ecs_log_entry_type_write(dev, region_id, val); + if (ret) + return ret; + break; + case cxl_ecs_mode: + ret = cxl_mem_ecs_mode_write(dev, region_id, val); + if (ret) + return ret; + break; + case cxl_ecs_reset: + ret = cxl_mem_ecs_reset_counter_write(dev, region_id, val); + if (ret) + return ret; + break; + case cxl_ecs_threshold: + ret = cxl_mem_ecs_threshold_write(dev, region_id, val); + if (ret) + return ret; + break; + default: + return -ENOTSUPP; + } + + return count; +} + +#define CXL_ECS_SCRUB_ATTR_RW(attrb) \ +static ssize_t attrb##_show(struct device *dev, \ + struct device_attribute *attr, char *buf) \ +{ \ + return cxl_mem_ecs_show_scrub_attr(dev, buf, (cxl_ecs_##attrb)); \ +} \ +static ssize_t attrb##_store(struct device *dev, \ + struct device_attribute *attr, \ + const char *buf, size_t count) \ +{ \ + return cxl_mem_ecs_store_scrub_attr(dev, buf, count, (cxl_ecs_##attrb));\ +} \ +static DEVICE_ATTR_RW(attrb) + +#define CXL_ECS_SCRUB_ATTR_RO(attrb) \ +static ssize_t attrb##_show(struct device *dev, \ + struct device_attribute *attr, char *buf) \ +{ \ + return cxl_mem_ecs_show_scrub_attr(dev, buf, (cxl_ecs_##attrb)); \ +} \ +static DEVICE_ATTR_RO(attrb) + +#define CXL_ECS_SCRUB_ATTR_WO(attrb) \ +static ssize_t attrb##_store(struct device *dev, \ + struct device_attribute *attr, \ + const char *buf, size_t count) \ +{ \ + return cxl_mem_ecs_store_scrub_attr(dev, buf, count, (cxl_ecs_##attrb));\ +} \ +static DEVICE_ATTR_WO(attrb) + +CXL_ECS_SCRUB_ATTR_RW(log_entry_type); +CXL_ECS_SCRUB_ATTR_RO(log_entry_type_per_dram); +CXL_ECS_SCRUB_ATTR_RO(log_entry_type_per_memory_media); +CXL_ECS_SCRUB_ATTR_RW(mode); +CXL_ECS_SCRUB_ATTR_RO(mode_counts_codewords); +CXL_ECS_SCRUB_ATTR_RO(mode_counts_rows); +CXL_ECS_SCRUB_ATTR_WO(reset); +CXL_ECS_SCRUB_ATTR_RW(threshold); +CXL_ECS_SCRUB_ATTR_RO(threshold_available); + +static struct attribute 
*cxl_mem_ecs_scrub_attrs[] = { + &dev_attr_log_entry_type.attr, + &dev_attr_log_entry_type_per_dram.attr, + &dev_attr_log_entry_type_per_memory_media.attr, + &dev_attr_mode.attr, + &dev_attr_mode_counts_codewords.attr, + &dev_attr_mode_counts_rows.attr, + &dev_attr_reset.attr, + &dev_attr_threshold.attr, + &dev_attr_threshold_available.attr, + NULL, +}; + +static struct attribute_group cxl_mem_ecs_attr_group = { + .attrs = cxl_mem_ecs_scrub_attrs, +}; + int cxl_mem_ecs_init(struct cxl_memdev *cxlmd, int region_id) { + char scrub_name[CXL_MEMDEV_MAX_NAME_LENGTH]; struct cxl_mbox_supp_feat_entry feat_entry; struct cxl_ecs_context *cxl_ecs_ctx; + struct device *cxl_scrub_dev; int nmedia_frus; int ret; @@ -755,6 +993,15 @@ int cxl_mem_ecs_init(struct cxl_memdev *cxlmd, int region_id) cxl_ecs_ctx->get_feat_size = feat_entry.get_feat_size; cxl_ecs_ctx->set_feat_size = feat_entry.set_feat_size; cxl_ecs_ctx->region_id = region_id; + + snprintf(scrub_name, sizeof(scrub_name), "%s_%s_region%d", + CXL_DDR5_ECS, dev_name(&cxlmd->dev), cxl_ecs_ctx->region_id); + cxl_scrub_dev = devm_scrub_device_register(&cxlmd->dev, scrub_name, + cxl_ecs_ctx, NULL, + cxl_ecs_ctx->region_id, + &cxl_mem_ecs_attr_group); + if (IS_ERR(cxl_scrub_dev)) + return PTR_ERR(cxl_scrub_dev); } return 0; From patchwork Thu Jan 11 13:17:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 761872 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9D1883E49B; Thu, 11 Jan 2024 13:18:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.216]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4T9lXP0cv3z6K8yK; Thu, 11 Jan 2024 21:15:33 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) by mail.maildlp.com (Postfix) with ESMTPS id 56A98140CB9; Thu, 11 Jan 2024 21:18:10 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Thu, 11 Jan 2024 13:18:09 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v5 09/12] ACPI:RASF: Add common library for RASF and RAS2 PCC interfaces Date: Thu, 11 Jan 2024 21:17:38 +0800 Message-ID: <20240111131741.1356-10-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com> References: <20240111131741.1356-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500004.china.huawei.com (7.191.163.9) To lhrpeml500006.china.huawei.com (7.191.161.198) From: A Somasundaram The code contains PCC interfaces for RASF and RAS2 table, functions to send RASF commands as per ACPI 5.1 and RAS2 commands as per ACPI 6.5 & upwards revision. 
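For orientation (this is commentary, not part of the patch): a client of this library is expected to embed a struct rasf_context, register the PCC channel once, and then issue commands against the shared memory region. A hedged sketch, assuming a zero-initialized context and inventing the my_rasf_setup() wrapper:

#include <acpi/rasf_acpi.h>

static int my_rasf_setup(struct rasf_context *ctx, int subspace_id)
{
	int ret;

	ctx->pcc_subspace_idx = subspace_id;

	/* Map the PCC shared memory and acquire the mailbox channel. */
	ret = rasf_register_pcc_channel(ctx);
	if (ret)
		return ret;

	/*
	 * The caller writes the command-specific payload to
	 * ctx->pcc_comm_addr before ringing the doorbell.
	 */
	return rasf_send_pcc_cmd(ctx, RASF_PCC_CMD_EXEC);
}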
References for this implementation: ACPI specification 6.5, section 5.2.20 for the RASF table, section 5.2.21 for the RAS2 table and chapter 14 for PCC (Platform Communication Channel). The driver uses PCC interfaces to communicate with the ACPI HW. This code implements the PCC interfaces and the functions to send the RASF/RAS2 commands to be used by OSPM. Signed-off-by: A Somasundaram Co-developed-by: Shiju Jose Signed-off-by: Shiju Jose --- drivers/acpi/Kconfig | 15 ++ drivers/acpi/Makefile | 1 + drivers/acpi/rasf_acpi_common.c | 272 ++++++++++++++++++++++++++++++++ include/acpi/rasf_acpi.h | 58 +++++++ 4 files changed, 346 insertions(+) create mode 100755 drivers/acpi/rasf_acpi_common.c create mode 100644 include/acpi/rasf_acpi.h diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig index f819e760ff19..c5a2a094c2af 100644 --- a/drivers/acpi/Kconfig +++ b/drivers/acpi/Kconfig @@ -280,6 +280,21 @@ config ACPI_CPPC_LIB If your platform does not support CPPC in firmware, leave this option disabled. +config ACPI_RASF + bool "ACPI RASF driver" + depends on ACPI_PROCESSOR + select MAILBOX + select PCC + help + The driver adds support for PCC (platform communication + channel) interfaces to communicate with an ACPI-compliant + hardware platform that supports RASF (RAS Feature table) + and/or RAS2 (RAS2 Feature table). + The driver adds support for RASF/RAS2 (extraction of the + RASF/RAS2 tables from the OS system tables), PCC interfaces + and OSPM interfaces to send RASF & RAS2 commands. The driver + adds a platform device which binds to the RASF/RAS2 memory driver. + config ACPI_PROCESSOR tristate "Processor" depends on X86 || ARM64 || LOONGARCH diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile index eaa09bf52f17..f0788390516b 100644 --- a/drivers/acpi/Makefile +++ b/drivers/acpi/Makefile @@ -104,6 +104,7 @@ obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o obj-$(CONFIG_ACPI_BGRT) += bgrt.o obj-$(CONFIG_ACPI_CPPC_LIB) += cppc_acpi.o obj-$(CONFIG_ACPI_SPCR_TABLE) += spcr.o +obj-$(CONFIG_ACPI_RASF) += rasf_acpi_common.o obj-$(CONFIG_ACPI_DEBUGGER_USER) += acpi_dbg.o obj-$(CONFIG_ACPI_PPTT) += pptt.o obj-$(CONFIG_ACPI_PFRUT) += pfr_update.o pfr_telemetry.o diff --git a/drivers/acpi/rasf_acpi_common.c b/drivers/acpi/rasf_acpi_common.c new file mode 100755 index 000000000000..3ee34f5d12d3 --- /dev/null +++ b/drivers/acpi/rasf_acpi_common.c @@ -0,0 +1,272 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * rasf_acpi_common.c - ACPI RASF table processing common functions + * + * (C) Copyright 2014, 2015 Hewlett-Packard Enterprises. + * + * Copyright (c) 2023 HiSilicon Limited. + * + * Support for + * RASF - ACPI 6.5 Specification, section 5.2.20 + * RAS2 - ACPI 6.5 Specification, section 5.2.21 + * PCC (Platform Communications Channel) - ACPI 6.5 Specification, + * chapter 14. + * + * Code contains common functions for RASF, + * PCC (Platform communication channel) interfaces for the RASF & RAS2 + * and the functions for sending RASF & RAS2 commands to the ACPI HW. 
+ */ + +#define pr_fmt(fmt) "ACPI RASF COMMON: " fmt + +#include <linux/acpi.h> +#include <linux/delay.h> +#include <linux/export.h> +#include <linux/io.h> +#include <linux/ktime.h> +#include <acpi/rasf_acpi.h> + +static int rasf_check_pcc_chan(struct rasf_context *rasf_ctx) +{ + int ret = -EIO; + struct acpi_rasf_shared_memory __iomem *generic_comm_base = rasf_ctx->pcc_comm_addr; + ktime_t next_deadline = ktime_add(ktime_get(), rasf_ctx->deadline); + + while (!ktime_after(ktime_get(), next_deadline)) { + /* + * As per the ACPI spec, the PCC space will be initialized by the + * platform and should have set the command completion bit when + * PCC can be used by OSPM + */ + if (readw_relaxed(&generic_comm_base->status) & RASF_PCC_CMD_COMPLETE) { + ret = 0; + break; + } + /* + * Reducing the bus traffic in case this loop takes longer than + * a few retries. + */ + udelay(10); + } + + return ret; +} + +/** + * rasf_send_pcc_cmd() - Send RASF command via PCC channel + * @rasf_ctx: pointer to the rasf context structure + * @cmd: command to send + * + * Returns: 0 on success, an error otherwise + */ +int rasf_send_pcc_cmd(struct rasf_context *rasf_ctx, u16 cmd) +{ + int ret = -EIO; + struct acpi_rasf_shared_memory *generic_comm_base = + (struct acpi_rasf_shared_memory *)rasf_ctx->pcc_comm_addr; + static ktime_t last_cmd_cmpl_time, last_mpar_reset; + static int mpar_count; + unsigned int time_delta; + + if (cmd == RASF_PCC_CMD_EXEC) { + ret = rasf_check_pcc_chan(rasf_ctx); + if (ret) + return ret; + } + + /* + * Handle the Minimum Request Turnaround Time (MRTT) + * "The minimum amount of time that OSPM must wait after the completion + * of a command before issuing the next command, in microseconds" + */ + if (rasf_ctx->pcc_mrtt) { + time_delta = ktime_us_delta(ktime_get(), last_cmd_cmpl_time); + if (rasf_ctx->pcc_mrtt > time_delta) + udelay(rasf_ctx->pcc_mrtt - time_delta); + } + + /* + * Handle the non-zero Maximum Periodic Access Rate (MPAR) + * "The maximum number of periodic requests that the subspace channel can + * support, reported in commands per minute. 0 indicates no limitation." + * + * This parameter should be ideally zero or large enough so that it can + * handle maximum number of requests that all the cores in the system can + * collectively generate. If it is not, we will follow the spec and just + * not send the request to the platform after hitting the MPAR limit in + * any 60s window + */ + if (rasf_ctx->pcc_mpar) { + if (mpar_count == 0) { + time_delta = ktime_ms_delta(ktime_get(), last_mpar_reset); + if (time_delta < 60 * MSEC_PER_SEC) { + pr_debug("PCC cmd not sent due to MPAR limit\n"); + return -EIO; + } + last_mpar_reset = ktime_get(); + mpar_count = rasf_ctx->pcc_mpar; + } + mpar_count--; + } + + /* Write to the shared comm region. */ + writew_relaxed(cmd, &generic_comm_base->command); + + /* Flip CMD COMPLETE bit */ + writew_relaxed(0, &generic_comm_base->status); + + /* Ring doorbell */ + ret = mbox_send_message(rasf_ctx->pcc_channel, &cmd); + if (ret < 0) { + pr_err("Err sending PCC mbox message. cmd:%d, ret:%d\n", + cmd, ret); + return ret; + } + + /* + * For READs we need to ensure the cmd has completed so that + * the ensuing read()s can proceed. For WRITEs we don't care + * because the actual write()s are done before coming here + * and the next READ or WRITE will check if the channel + * is busy/free at the entry of this call. 
+ * + * If the Minimum Request Turnaround Time is non-zero, we need + * to record the completion time of both READ and WRITE + * commands for proper handling of MRTT, so we need to check + * for pcc_mrtt in addition to the command check + */ + if (cmd == RASF_PCC_CMD_EXEC || rasf_ctx->pcc_mrtt) { + ret = rasf_check_pcc_chan(rasf_ctx); + if (rasf_ctx->pcc_mrtt) + last_cmd_cmpl_time = ktime_get(); + } + + if (rasf_ctx->pcc_channel->mbox->txdone_irq) + mbox_chan_txdone(rasf_ctx->pcc_channel, ret); + else + mbox_client_txdone(rasf_ctx->pcc_channel, ret); + + return ret; +} +EXPORT_SYMBOL_GPL(rasf_send_pcc_cmd); + +/** + * rasf_register_pcc_channel() - Register PCC channel + * @rasf_ctx: pointer to the rasf context structure + * + * Returns: 0 on success, an error otherwise + */ +int rasf_register_pcc_channel(struct rasf_context *rasf_ctx) +{ + u64 usecs_lat; + unsigned int len; + struct pcc_mbox_chan *pcc_chan; + struct mbox_client *rasf_mbox_cl; + struct acpi_pcct_hw_reduced *rasf_ss; + + rasf_mbox_cl = &rasf_ctx->mbox_client; + if (!rasf_mbox_cl || rasf_ctx->pcc_subspace_idx < 0) + return -EINVAL; + + pcc_chan = pcc_mbox_request_channel(rasf_mbox_cl, + rasf_ctx->pcc_subspace_idx); + + if (IS_ERR(pcc_chan)) { + pr_err("Failed to find PCC channel for subspace %d\n", + rasf_ctx->pcc_subspace_idx); + return -ENODEV; + } + rasf_ctx->pcc_chan = pcc_chan; + rasf_ctx->pcc_channel = pcc_chan->mchan; + /* + * The PCC mailbox controller driver should + * have parsed the PCCT (global table of all + * PCC channels) and stored pointers to the + * subspace communication region in con_priv. + */ + rasf_ss = rasf_ctx->pcc_channel->con_priv; + + if (!rasf_ss) { + pr_err("No PCC subspace found for RASF\n"); + pcc_mbox_free_channel(rasf_ctx->pcc_chan); + return -ENODEV; + } + + /* + * This is the shared communication region + * for the OS and Platform to communicate over. + */ + rasf_ctx->comm_base_addr = rasf_ss->base_address; + len = rasf_ss->length; + pr_debug("PCC subspace for RASF=0x%llx len=%d\n", + rasf_ctx->comm_base_addr, len); + + /* + * rasf_ss->latency is just a Nominal value. In reality + * the remote processor could be much slower to reply. + * So add an arbitrary amount of wait on top of Nominal. + */ + usecs_lat = RASF_NUM_RETRIES * rasf_ss->latency; + rasf_ctx->deadline = ns_to_ktime(usecs_lat * NSEC_PER_USEC); + rasf_ctx->pcc_mrtt = rasf_ss->min_turnaround_time; + rasf_ctx->pcc_mpar = rasf_ss->max_access_rate; + rasf_ctx->pcc_comm_addr = acpi_os_ioremap(rasf_ctx->comm_base_addr, len); + pr_debug("pcc_comm_addr=%p\n", rasf_ctx->pcc_comm_addr); + + /* Set flag so that we don't come here for each CPU. 
*/ + rasf_ctx->pcc_channel_acquired = true; + + return 0; +} +EXPORT_SYMBOL_GPL(rasf_register_pcc_channel); + +/** + * rasf_unregister_pcc_channel() - Unregister PCC channel + * @rasf_ctx: pointer to the rasf context structure + * + * Returns: 0 on success, an error otherwise + */ +int rasf_unregister_pcc_channel(struct rasf_context *rasf_ctx) +{ + if (!rasf_ctx->pcc_chan) + return -EINVAL; + + pcc_mbox_free_channel(rasf_ctx->pcc_chan); + + return 0; +} +EXPORT_SYMBOL_GPL(rasf_unregister_pcc_channel); + +/** + * rasf_add_platform_device() - Add a platform device for RASF + * @name: name of the device we're adding + * @data: platform specific data for this platform device + * @size: size of platform specific data + * + * Returns: pointer to platform device on success, NULL otherwise + */ +struct platform_device *rasf_add_platform_device(char *name, const void *data, + size_t size) +{ + int ret; + struct platform_device *pdev; + + pdev = platform_device_alloc(name, PLATFORM_DEVID_AUTO); + if (!pdev) + return NULL; + + ret = platform_device_add_data(pdev, data, size); + if (ret) + goto dev_put; + + ret = platform_device_add(pdev); + if (ret) + goto dev_put; + + return pdev; + +dev_put: + platform_device_put(pdev); + + return NULL; +} diff --git a/include/acpi/rasf_acpi.h b/include/acpi/rasf_acpi.h new file mode 100644 index 000000000000..aa4f935b28cf --- /dev/null +++ b/include/acpi/rasf_acpi.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * RASF driver header file + * + * (C) Copyright 2014, 2015 Hewlett-Packard Enterprises + * + * Copyright (c) 2023 HiSilicon Limited + */ + +#ifndef _RASF_ACPI_H +#define _RASF_ACPI_H + +#include <linux/ktime.h> +#include <linux/mailbox_client.h> +#include <linux/platform_device.h> +#include <linux/spinlock.h> +#include <acpi/pcc.h> + +#define RASF_PCC_CMD_COMPLETE 1 + +/* RASF specific PCC commands */ +#define RASF_PCC_CMD_EXEC 0x01 + +#define RASF_FAILURE 0 +#define RASF_SUCCESS 1 + +/* + * Arbitrary Retries for PCC commands. 
+ */ +#define RASF_NUM_RETRIES 600 + +/* + * Data structures for PCC communication and RASF table + */ +struct rasf_context { + struct device *dev; + int id; + struct mbox_client mbox_client; + struct mbox_chan *pcc_channel; + struct pcc_mbox_chan *pcc_chan; + void __iomem *pcc_comm_addr; + u64 comm_base_addr; + int pcc_subspace_idx; + bool pcc_channel_acquired; + ktime_t deadline; + unsigned int pcc_mpar; + unsigned int pcc_mrtt; + spinlock_t spinlock; /* Lock to provide mutually exclusive access to PCC channel */ + struct device *scrub_dev; + const struct rasf_hw_scrub_ops *ops; +}; + +struct platform_device *rasf_add_platform_device(char *name, const void *data, + size_t size); +int rasf_send_pcc_cmd(struct rasf_context *rasf_ctx, u16 cmd); +int rasf_register_pcc_channel(struct rasf_context *rasf_ctx); +int rasf_unregister_pcc_channel(struct rasf_context *rasf_ctx); +#endif /* _RASF_ACPI_H */ From patchwork Thu Jan 11 13:17:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 762726 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5C2A23F8CC; Thu, 11 Jan 2024 13:18:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.31]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4T9lYG5HHcz6J6cn; Thu, 11 Jan 2024 21:16:18 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) by mail.maildlp.com (Postfix) with ESMTPS id 5D33F1400CD; Thu, 11 Jan 2024 21:18:11 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Thu, 11 Jan 2024 13:18:10 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v5 10/12] ACPICA: ACPI 6.5: Add support for RAS2 table Date: Thu, 11 Jan 2024 21:17:39 +0800 Message-ID: <20240111131741.1356-11-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com> References: <20240111131741.1356-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500004.china.huawei.com (7.191.163.9) To lhrpeml500006.china.huawei.com (7.191.161.198) From: Shiju Jose Add support for ACPI RAS2 feature table(RAS2) defined in the ACPI 6.5 Specification & upwards revision, section 5.2.21. The RAS2 table provides interfaces for platform RAS features. RAS2 offers the same services as RASF, but is more scalable than the latter. RAS2 supports independent RAS controls and capabilities for a given RAS feature for multiple instances of the same component in a given system. The platform can support either RAS2 or RASF but not both. 
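To illustrate how the new structures are meant to be traversed (a sketch against the definitions added below, not code from this patch; the function name is invented and error handling is trimmed):

#include <linux/acpi.h>

static void my_ras2_list_channels(struct acpi_table_ras2 *ras2)
{
	/* The PCC descriptor array immediately follows the table header. */
	struct acpi_ras2_pcc_desc *desc =
		(struct acpi_ras2_pcc_desc *)(ras2 + 1);
	u16 i;

	for (i = 0; i < ras2->num_pcc_descs; i++, desc++)
		pr_info("RAS2 feature type %#x on PCC channel %u\n",
			desc->feature_type, desc->channel_id);
}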
Link: https://github.com/acpica/acpica/pull/899 Signed-off-by: Shiju Jose --- include/acpi/actbl2.h | 137 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 137 insertions(+) diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h index 3751ae69432f..0e23d4685511 100644 --- a/include/acpi/actbl2.h +++ b/include/acpi/actbl2.h @@ -47,6 +47,7 @@ #define ACPI_SIG_PPTT "PPTT" /* Processor Properties Topology Table */ #define ACPI_SIG_PRMT "PRMT" /* Platform Runtime Mechanism Table */ #define ACPI_SIG_RASF "RASF" /* RAS Feature table */ +#define ACPI_SIG_RAS2 "RAS2" /* RAS2 Feature table */ #define ACPI_SIG_RGRT "RGRT" /* Regulatory Graphics Resource Table */ #define ACPI_SIG_RHCT "RHCT" /* RISC-V Hart Capabilities Table */ #define ACPI_SIG_SBST "SBST" /* Smart Battery Specification Table */ @@ -2743,6 +2744,142 @@ enum acpi_rasf_status { #define ACPI_RASF_ERROR (1<<2) #define ACPI_RASF_STATUS (0x1F<<3) +/******************************************************************************* + * + * RAS2 - RAS2 Feature Table (ACPI 6.5) + * Version 2 + * + * + ******************************************************************************/ + +struct acpi_table_ras2 { + struct acpi_table_header header; /* Common ACPI table header */ + u16 reserved; + u16 num_pcc_descs; +}; + +/* + * RAS2 Platform Communication Channel Descriptor + */ + +struct acpi_ras2_pcc_desc { + u8 channel_id; + u16 reserved; + u8 feature_type; + u32 instance; +}; + +/* + * RAS2 Platform Communication Channel Shared Memory Region + */ + +struct acpi_ras2_shared_memory { + u32 signature; + u16 command; + u16 status; + u16 version; + u8 features[16]; + u8 set_capabilities[16]; + u16 num_parameter_blocks; + u32 set_capabilities_status; +}; + +/* RAS2 Parameter Block Structure Header */ + +struct acpi_ras2_parameter_block { + u16 type; + u16 version; + u16 length; +}; + +/* + * RAS2 Parameter Block Structure for PATROL_SCRUB + */ + +struct acpi_ras2_patrol_scrub_parameter { + struct acpi_ras2_parameter_block header; + u16 patrol_scrub_command; + u64 requested_address_range[2]; + u64 actual_address_range[2]; + u32 flags; + u32 scrub_params_out; + u32 scrub_params_in; +}; + +/* Masks for Flags field above */ + +#define ACPI_RAS2_SCRUBBER_RUNNING 1 + +/* + * RAS2 Parameter Block Structure for LA2PA_TRANSLATION + */ + +struct acpi_ras2_la2pa_translation_parameter { + struct acpi_ras2_parameter_block header; + u16 addr_translation_command; + u64 sub_instance_id; + u64 logical_address; + u64 physical_address; + u32 status; +}; + +/* Channel Commands */ + +enum acpi_ras2_commands { + ACPI_RAS2_EXECUTE_RAS2_COMMAND = 1 +}; + +/* Platform RAS2 Features */ + +enum acpi_ras2_features { + ACPI_RAS2_PATROL_SCRUB_SUPPORTED = 0, + ACPI_RAS2_LA2PA_TRANSLATION = 1 +}; + +/* RAS2 Patrol Scrub Commands */ + +enum acpi_ras2_patrol_scrub_commands { + ACPI_RAS2_GET_PATROL_PARAMETERS = 1, + ACPI_RAS2_START_PATROL_SCRUBBER = 2, + ACPI_RAS2_STOP_PATROL_SCRUBBER = 3 +}; + +/* RAS2 LA2PA Translation Commands */ + +enum acpi_ras2_la2pa_translation_commands { + ACPI_RAS2_GET_LA2PA_TRANSLATION = 1 +}; + +/* RAS2 LA2PA Translation Status values */ + +enum acpi_ras2_la2pa_translation_status { + ACPI_RAS2_LA2PA_TRANSLATION_SUCCESS = 0, + ACPI_RAS2_LA2PA_TRANSLATION_FAIL = 1 +}; + +/* Channel Command flags */ + +#define ACPI_RAS2_GENERATE_SCI (1<<15) + +/* Status values */ + +enum acpi_ras2_status { + ACPI_RAS2_SUCCESS = 0, + ACPI_RAS2_NOT_VALID = 1, + ACPI_RAS2_NOT_SUPPORTED = 2, + ACPI_RAS2_BUSY = 3, + ACPI_RAS2_FAILED = 4, + ACPI_RAS2_ABORTED = 5, + 
ACPI_RAS2_INVALID_DATA = 6 +}; + +/* Status flags */ + +#define ACPI_RAS2_COMMAND_COMPLETE (1) +#define ACPI_RAS2_SCI_DOORBELL (1<<1) +#define ACPI_RAS2_ERROR (1<<2) +#define ACPI_RAS2_STATUS (0x1F<<3) + /******************************************************************************* * * RGRT - Regulatory Graphics Resource Table From patchwork Thu Jan 11 13:17:40 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 761871 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 64F893FE2E; Thu, 11 Jan 2024 13:18:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.31]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4T9lXR0c4jz6K8yJ; Thu, 11 Jan 2024 21:15:35 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) by mail.maildlp.com (Postfix) with ESMTPS id 58B24140D27; Thu, 11 Jan 2024 21:18:12 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Thu, 11 Jan 2024 13:18:11 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v5 11/12] ACPI:RAS2: Add driver for ACPI RAS2 feature table (RAS2) Date: Thu, 11 Jan 2024 21:17:40 +0800 Message-ID: <20240111131741.1356-12-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com> References: <20240111131741.1356-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500004.china.huawei.com (7.191.163.9) To lhrpeml500006.china.huawei.com (7.191.161.198) From: Shiju Jose Add support for ACPI RAS2 feature table (RAS2) defined in the ACPI 6.5 Specification, section 5.2.21. This driver contains RAS2 Init, which extracts the RAS2 table. Driver adds platform device, for each memory feature, which binds to the RAS2 memory driver. 
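For completeness, the consumer side (patch 12) is a platform driver matching the "ras2" device name; the PCC subspace index travels as platform data. A hypothetical minimal consumer, with only the device name and the platform-data layout taken from this series:

#include <linux/module.h>
#include <linux/platform_device.h>

static int my_ras2_probe(struct platform_device *pdev)
{
	/* rasf_add_platform_device() copied the subspace index here. */
	int *pcc_subspace_idx = dev_get_platdata(&pdev->dev);

	if (!pcc_subspace_idx)
		return -ENODEV;

	dev_info(&pdev->dev, "RAS2 PCC subspace %d\n", *pcc_subspace_idx);
	return 0;
}

static struct platform_driver my_ras2_driver = {
	.probe = my_ras2_probe,
	.driver = {
		.name = "ras2",
	},
};
module_platform_driver(my_ras2_driver);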
Signed-off-by: Shiju Jose --- drivers/acpi/Makefile | 2 +- drivers/acpi/ras2_acpi.c | 97 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 98 insertions(+), 1 deletion(-) create mode 100755 drivers/acpi/ras2_acpi.c diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile index f0788390516b..189ad0057a5c 100644 --- a/drivers/acpi/Makefile +++ b/drivers/acpi/Makefile @@ -104,7 +104,7 @@ obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o obj-$(CONFIG_ACPI_BGRT) += bgrt.o obj-$(CONFIG_ACPI_CPPC_LIB) += cppc_acpi.o obj-$(CONFIG_ACPI_SPCR_TABLE) += spcr.o -obj-$(CONFIG_ACPI_RASF) += rasf_acpi_common.o +obj-$(CONFIG_ACPI_RASF) += rasf_acpi_common.o ras2_acpi.o obj-$(CONFIG_ACPI_DEBUGGER_USER) += acpi_dbg.o obj-$(CONFIG_ACPI_PPTT) += pptt.o obj-$(CONFIG_ACPI_PFRUT) += pfr_update.o pfr_telemetry.o diff --git a/drivers/acpi/ras2_acpi.c b/drivers/acpi/ras2_acpi.c new file mode 100755 index 000000000000..b8a7740355a8 --- /dev/null +++ b/drivers/acpi/ras2_acpi.c @@ -0,0 +1,97 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * ras2_acpi.c - Implementation of ACPI RAS2 feature table processing + * functions. + * + * Copyright (c) 2023 HiSilicon Limited. + * + * Support for + * RAS2 - ACPI 6.5 Specification, section 5.2.21 + * + * Driver contains RAS2 init, which extracts the RAS2 table and + * registers the PCC channel for communicating with the ACPI compliant + * platform that contains RAS2 command support in hardware. Driver adds + * a platform device which binds to the RAS2 memory driver. + */ + +#define pr_fmt(fmt) "ACPI RAS2: " fmt + +#include <linux/acpi.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/slab.h> +#include <acpi/rasf_acpi.h> + +#define RAS2_FEATURE_TYPE_MEMORY 0x00 + +int __init ras2_acpi_init(void) +{ + u16 count; + acpi_status status; + acpi_size ras2_size; + int pcc_subspace_idx; + struct platform_device *pdev; + struct acpi_table_ras2 *pRas2Table; + struct acpi_ras2_pcc_desc *pcc_desc_list; + struct platform_device **pdev_list = NULL; + struct acpi_table_header *pAcpiTable = NULL; + + status = acpi_get_table("RAS2", 0, &pAcpiTable); + if (ACPI_FAILURE(status) || !pAcpiTable) { + pr_err("ACPI RAS2 driver failed to initialize, get table failed\n"); + return RASF_FAILURE; + } + + ras2_size = pAcpiTable->length; + if (ras2_size < sizeof(struct acpi_table_ras2)) { + pr_err("ACPI RAS2 table present but broken (too short #1)\n"); + goto free_ras2_table; + } + + pRas2Table = (struct acpi_table_ras2 *)pAcpiTable; + + if (!pRas2Table->num_pcc_descs) { + pr_err("ACPI RAS2 table does not contain PCC descriptors\n"); + goto free_ras2_table; + } + + pdev_list = kzalloc((pRas2Table->num_pcc_descs * sizeof(struct platform_device *)), + GFP_KERNEL); + if (!pdev_list) + goto free_ras2_table; + + pcc_desc_list = (struct acpi_ras2_pcc_desc *) + ((void *)pRas2Table + sizeof(struct acpi_table_ras2)); + count = 0; + while (count < pRas2Table->num_pcc_descs) { + if (pcc_desc_list->feature_type == RAS2_FEATURE_TYPE_MEMORY) { + pcc_subspace_idx = pcc_desc_list->channel_id; + /* Add the platform device and bind the ras2 memory driver */ + pdev = rasf_add_platform_device("ras2", &pcc_subspace_idx, + sizeof(pcc_subspace_idx)); + if (!pdev) + goto free_ras2_pdev; + pdev_list[count] = pdev; + } + count++; + pcc_desc_list++; + } + + acpi_put_table(pAcpiTable); + return RASF_SUCCESS; + +free_ras2_pdev: + /* + * Unwind the platform devices registered so far; entries for + * non-memory descriptors are NULL and platform_device_put() + * is NULL-safe. + */ + while (count--) + platform_device_put(pdev_list[count]); + kfree(pdev_list); + 
+free_ras2_table: + acpi_put_table(pAcpiTable); + return RASF_FAILURE; +} +late_initcall(ras2_acpi_init); From patchwork Thu Jan 11 13:17:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 762725 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C210640BE2; Thu, 11 Jan 2024 13:18:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.31]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4T9lXk5tlZz6D8hf; Thu, 11 Jan 2024 21:15:50 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) by mail.maildlp.com (Postfix) with ESMTPS id 6EA4D140D5A; Thu, 11 Jan 2024 21:18:13 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Thu, 11 Jan 2024 13:18:12 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v5 12/12] memory: RAS2: Add memory RAS2 driver Date: Thu, 11 Jan 2024 21:17:41 +0800 Message-ID: <20240111131741.1356-13-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240111131741.1356-1-shiju.jose@huawei.com> References: <20240111131741.1356-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500004.china.huawei.com (7.191.163.9) To lhrpeml500006.china.huawei.com (7.191.161.198) From: Shiju Jose Memory RAS2 driver binds to the platform device added by the ACPI RAS2 driver. The driver registers the PCC channel for communicating with the ACPI compliant platform that contains RAS2 command support in the hardware. Add interface functions to support configuring the parameters of HW patrol scrubs in the system, which are exposed to the kernel via the RAS2 table and PCC, using the RAS2 commands. Add support for RAS2 platform devices to register with the scrub subsystem driver. This enables the user to configure the parameters of HW patrol scrubs, which are exposed to the kernel via the RAS2 table, through the scrub sysfs attributes. Open Question: Sysfs scrub control attribute "enable_background_scrub" is added for RAS2, based on the feedback from Bill Schwartz --- drivers/memory/Kconfig | 14 ++ drivers/memory/Makefile | 2 + drivers/memory/ras2.c | 354 +++++++++++++++++++++++++++++++++++ drivers/memory/rasf_common.c | 269 ++++++++++++++++++++++++++ include/memory/rasf.h | 88 +++++++++ 5 files changed, 727 insertions(+) create mode 100644 drivers/memory/ras2.c create mode 100644 drivers/memory/rasf_common.c create mode 100755 include/memory/rasf.h diff --git a/drivers/memory/Kconfig b/drivers/memory/Kconfig index d2e015c09d83..5fff18fcd3e2 100644 --- a/drivers/memory/Kconfig +++ b/drivers/memory/Kconfig @@ -225,6 +225,20 @@ config STM32_FMC2_EBI devices (like SRAM, ethernet adapters, FPGAs, LCD displays, ...) on SOCs containing the FMC2 External Bus Interface. 
+config MEM_RASF + bool "Memory RASF driver" + depends on ACPI_RASF + depends on SCRUB + help + The driver binds to the platform device added by the ACPI RAS2 + driver. The driver registers the PCC channel for communicating with + the ACPI compliant platform that contains RAS2 command support + in the hardware. + Registers with the scrub configure driver to provide sysfs interfaces + for configuring the hw patrol scrubber in the system, which is exposed + via the ACPI RAS2 table and PCC. Provides the interface functions to + support configuring the HW patrol scrubbers in the system. + source "drivers/memory/samsung/Kconfig" source "drivers/memory/tegra/Kconfig" source "drivers/memory/scrub/Kconfig" diff --git a/drivers/memory/Makefile b/drivers/memory/Makefile index 4b37312cb342..4bd7653e1dce 100644 --- a/drivers/memory/Makefile +++ b/drivers/memory/Makefile @@ -7,6 +7,8 @@ obj-$(CONFIG_DDR) += jedec_ddr_data.o ifeq ($(CONFIG_DDR),y) obj-$(CONFIG_OF) += of_memory.o endif +obj-$(CONFIG_MEM_RASF) += rasf_common.o ras2.o + obj-$(CONFIG_ARM_PL172_MPMC) += pl172.o obj-$(CONFIG_ATMEL_EBI) += atmel-ebi.o obj-$(CONFIG_BRCMSTB_DPFE) += brcmstb_dpfe.o diff --git a/drivers/memory/ras2.c b/drivers/memory/ras2.c new file mode 100644 index 000000000000..046e4ae7eaf0 --- /dev/null +++ b/drivers/memory/ras2.c @@ -0,0 +1,354 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * ras2.c - ACPI RAS2 memory driver + * + * Copyright (c) 2023 HiSilicon Limited. + * + * - Registers the PCC channel for communicating with the + * ACPI compliant platform that contains RAS2 command + * support in the hardware. + * - Provides functions to configure HW patrol scrubs + * in the system. + * - Registers with the scrub configure driver for the + * hw patrol scrub in the system, which is exposed via + * the ACPI RAS2 table and PCC. + */ + +#define pr_fmt(fmt) "MEMORY RAS2: " fmt + +#include <linux/acpi.h> +#include <linux/bitfield.h> +#include <linux/module.h> +#include <linux/platform_device.h> + +#include <acpi/rasf_acpi.h> +#include <memory/rasf.h> + +/* RAS2 specific definitions. */ +#define RAS2_SCRUB "ras2_scrub" +#define RAS2_ID_FORMAT RAS2_SCRUB "%d" +#define RAS2_SUPPORT_HW_PATROL_SCRUB BIT(0) +#define RAS2_TYPE_PATROL_SCRUB 0x0000 + +#define RAS2_GET_PATROL_PARAMETERS 0x01 +#define RAS2_START_PATROL_SCRUBBER 0x02 +#define RAS2_STOP_PATROL_SCRUBBER 0x03 + +#define RAS2_PATROL_SCRUB_RATE_VALID BIT(0) +#define RAS2_PATROL_SCRUB_RATE_IN_MASK GENMASK(15, 8) +#define RAS2_PATROL_SCRUB_EN_BACKGROUND BIT(0) +#define RAS2_PATROL_SCRUB_RATE_OUT_MASK GENMASK(7, 0) +#define RAS2_PATROL_SCRUB_MIN_RATE_OUT_MASK GENMASK(15, 8) +#define RAS2_PATROL_SCRUB_MAX_RATE_OUT_MASK GENMASK(23, 16) + +static void ras2_tx_done(struct mbox_client *cl, void *msg, int ret) +{ + if (ret) { + dev_dbg(cl->dev, "TX did not complete: CMD sent:%x, ret:%d\n", + *(u16 *)msg, ret); + } else { + dev_dbg(cl->dev, "TX completed. CMD sent:%x, ret:%d\n", + *(u16 *)msg, ret); + } +} + +/* + * The functions below are exposed to OSPM, to query, configure and + * initiate memory patrol scrub. 
+ */
+static int ras2_is_patrol_scrub_support(struct rasf_context *ras2_ctx)
+{
+	int ret;
+	struct acpi_ras2_shared_memory __iomem *generic_comm_base;
+
+	if (!ras2_ctx || !ras2_ctx->pcc_comm_addr)
+		return -EFAULT;
+
+	generic_comm_base = ras2_ctx->pcc_comm_addr;
+	guard(spinlock_irqsave)(&ras2_ctx->spinlock);
+	generic_comm_base->set_capabilities[0] = 0;
+
+	/* send command for reading RAS2 capabilities */
+	ret = rasf_send_pcc_cmd(ras2_ctx, RASF_PCC_CMD_EXEC);
+	if (ret) {
+		pr_err("%s: rasf_send_pcc_cmd failed\n", __func__);
+		return ret;
+	}
+
+	return generic_comm_base->features[0] & RAS2_SUPPORT_HW_PATROL_SCRUB;
+}
+
+static int ras2_get_patrol_scrub_params(struct rasf_context *ras2_ctx,
+					struct rasf_scrub_params *params)
+{
+	int ret = 0;
+	u8 min_supp_scrub_rate, max_supp_scrub_rate;
+	struct acpi_ras2_shared_memory __iomem *generic_comm_base;
+	struct acpi_ras2_patrol_scrub_parameter __iomem *patrol_scrub_params;
+
+	if (!ras2_ctx || !ras2_ctx->pcc_comm_addr)
+		return -EFAULT;
+
+	generic_comm_base = ras2_ctx->pcc_comm_addr;
+	patrol_scrub_params = ras2_ctx->pcc_comm_addr + sizeof(*generic_comm_base);
+
+	guard(spinlock_irqsave)(&ras2_ctx->spinlock);
+	generic_comm_base->set_capabilities[0] = RAS2_SUPPORT_HW_PATROL_SCRUB;
+	/* send command for reading RAS2 capabilities */
+	ret = rasf_send_pcc_cmd(ras2_ctx, RASF_PCC_CMD_EXEC);
+	if (ret) {
+		pr_err("%s: rasf_send_pcc_cmd failed\n", __func__);
+		return ret;
+	}
+
+	if (!(generic_comm_base->features[0] & RAS2_SUPPORT_HW_PATROL_SCRUB) ||
+	    !(generic_comm_base->num_parameter_blocks)) {
+		pr_err("%s: Platform does not support HW Patrol Scrubber\n", __func__);
+		return -EOPNOTSUPP;
+	}
+
+	if (!patrol_scrub_params->requested_address_range[1]) {
+		pr_err("%s: Invalid requested address range, requested_address_range[0]=0x%llx requested_address_range[1]=0x%llx\n",
+		       __func__,
+		       patrol_scrub_params->requested_address_range[0],
+		       patrol_scrub_params->requested_address_range[1]);
+		return -EOPNOTSUPP;
+	}
+
+	generic_comm_base->set_capabilities[0] = RAS2_SUPPORT_HW_PATROL_SCRUB;
+	patrol_scrub_params->header.type = RAS2_TYPE_PATROL_SCRUB;
+	patrol_scrub_params->patrol_scrub_command = RAS2_GET_PATROL_PARAMETERS;
+
+	/* send command for reading the HW patrol scrub parameters */
+	ret = rasf_send_pcc_cmd(ras2_ctx, RASF_PCC_CMD_EXEC);
+	if (ret) {
+		pr_err("%s: failed to read HW patrol scrub parameters\n", __func__);
+		return ret;
+	}
+
+	/* copy output scrub parameters */
+	params->addr_base = patrol_scrub_params->actual_address_range[0];
+	params->addr_size = patrol_scrub_params->actual_address_range[1];
+	params->flags = patrol_scrub_params->flags;
+	params->rate = FIELD_GET(RAS2_PATROL_SCRUB_RATE_OUT_MASK,
+				 patrol_scrub_params->scrub_params_out);
+	min_supp_scrub_rate = FIELD_GET(RAS2_PATROL_SCRUB_MIN_RATE_OUT_MASK,
+					patrol_scrub_params->scrub_params_out);
+	max_supp_scrub_rate = FIELD_GET(RAS2_PATROL_SCRUB_MAX_RATE_OUT_MASK,
+					patrol_scrub_params->scrub_params_out);
+	snprintf(params->rate_avail, RASF_MAX_RATE_RANGE_LENGTH,
+		 "%d-%d", min_supp_scrub_rate, max_supp_scrub_rate);
+
+	return 0;
+}
+
+static int ras2_enable_patrol_scrub(struct rasf_context *ras2_ctx, bool enable)
+{
+	int ret = 0;
+	struct rasf_scrub_params params;
+	struct acpi_ras2_shared_memory __iomem *generic_comm_base;
+	u8 scrub_rate_to_set, min_supp_scrub_rate, max_supp_scrub_rate;
+	struct acpi_ras2_patrol_scrub_parameter __iomem *patrol_scrub_params;
+
+	if (!ras2_ctx || !ras2_ctx->pcc_comm_addr)
+		return -EFAULT;
+
+	generic_comm_base = ras2_ctx->pcc_comm_addr;
+	patrol_scrub_params = ras2_ctx->pcc_comm_addr + sizeof(*generic_comm_base);
+
+	if (enable) {
+		ret = ras2_get_patrol_scrub_params(ras2_ctx, &params);
+		if (ret)
+			return ret;
+	}
+
+	guard(spinlock_irqsave)(&ras2_ctx->spinlock);
+	generic_comm_base->set_capabilities[0] = RAS2_SUPPORT_HW_PATROL_SCRUB;
+	patrol_scrub_params->header.type = RAS2_TYPE_PATROL_SCRUB;
+
+	if (enable) {
+		patrol_scrub_params->patrol_scrub_command = RAS2_START_PATROL_SCRUBBER;
+		patrol_scrub_params->requested_address_range[0] = params.addr_base;
+		patrol_scrub_params->requested_address_range[1] = params.addr_size;
+
+		scrub_rate_to_set = FIELD_GET(RAS2_PATROL_SCRUB_RATE_IN_MASK,
+					      patrol_scrub_params->scrub_params_in);
+		min_supp_scrub_rate = FIELD_GET(RAS2_PATROL_SCRUB_MIN_RATE_OUT_MASK,
+						patrol_scrub_params->scrub_params_out);
+		max_supp_scrub_rate = FIELD_GET(RAS2_PATROL_SCRUB_MAX_RATE_OUT_MASK,
+						patrol_scrub_params->scrub_params_out);
+		if (scrub_rate_to_set < min_supp_scrub_rate ||
+		    scrub_rate_to_set > max_supp_scrub_rate) {
+			pr_warn("patrol scrub rate to set is out of the supported range\n");
+			pr_warn("min_supp_scrub_rate=%d max_supp_scrub_rate=%d\n",
+				min_supp_scrub_rate, max_supp_scrub_rate);
+			return -EINVAL;
+		}
+	} else {
+		patrol_scrub_params->patrol_scrub_command = RAS2_STOP_PATROL_SCRUBBER;
+	}
+
+	/* send command to enable/disable HW patrol scrub */
+	ret = rasf_send_pcc_cmd(ras2_ctx, RASF_PCC_CMD_EXEC);
+	if (ret) {
+		pr_err("%s: failed to enable/disable the HW patrol scrub\n", __func__);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int ras2_enable_background_scrub(struct rasf_context *ras2_ctx, bool enable)
+{
+	int ret;
+	struct acpi_ras2_shared_memory __iomem *generic_comm_base;
+	struct acpi_ras2_patrol_scrub_parameter __iomem *patrol_scrub_params;
+
+	if (!ras2_ctx || !ras2_ctx->pcc_comm_addr)
+		return -EFAULT;
+
+	generic_comm_base = ras2_ctx->pcc_comm_addr;
+	patrol_scrub_params = ras2_ctx->pcc_comm_addr + sizeof(*generic_comm_base);
+
+	guard(spinlock_irqsave)(&ras2_ctx->spinlock);
+	generic_comm_base->set_capabilities[0] = RAS2_SUPPORT_HW_PATROL_SCRUB;
+	patrol_scrub_params->header.type = RAS2_TYPE_PATROL_SCRUB;
+	patrol_scrub_params->patrol_scrub_command = RAS2_START_PATROL_SCRUBBER;
+
+	patrol_scrub_params->scrub_params_in &= ~RAS2_PATROL_SCRUB_EN_BACKGROUND;
+	patrol_scrub_params->scrub_params_in |= FIELD_PREP(RAS2_PATROL_SCRUB_EN_BACKGROUND,
+							   enable);
+
+	/* send command to enable/disable background patrol scrub */
+	ret = rasf_send_pcc_cmd(ras2_ctx, RASF_PCC_CMD_EXEC);
+	if (ret) {
+		pr_err("%s: failed to enable/disable background patrol scrubbing\n", __func__);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int ras2_set_patrol_scrub_params(struct rasf_context *ras2_ctx,
+					struct rasf_scrub_params *params, u8 param_type)
+{
+	struct acpi_ras2_shared_memory __iomem *generic_comm_base;
+	struct acpi_ras2_patrol_scrub_parameter __iomem *patrol_scrub_params;
+
+	if (!ras2_ctx || !ras2_ctx->pcc_comm_addr)
+		return -EFAULT;
+
+	generic_comm_base = ras2_ctx->pcc_comm_addr;
+	patrol_scrub_params = ras2_ctx->pcc_comm_addr + sizeof(*generic_comm_base);
+
+	guard(spinlock_irqsave)(&ras2_ctx->spinlock);
+	patrol_scrub_params->header.type = RAS2_TYPE_PATROL_SCRUB;
+	if (param_type == RASF_MEM_SCRUB_PARAM_ADDR_BASE && params->addr_base) {
+		patrol_scrub_params->requested_address_range[0] = params->addr_base;
+	} else if (param_type == RASF_MEM_SCRUB_PARAM_ADDR_SIZE && params->addr_size) {
+		patrol_scrub_params->requested_address_range[1] = params->addr_size;
+	} else if (param_type == RASF_MEM_SCRUB_PARAM_RATE) {
+		patrol_scrub_params->scrub_params_in &= ~RAS2_PATROL_SCRUB_RATE_IN_MASK;
+		patrol_scrub_params->scrub_params_in |= FIELD_PREP(RAS2_PATROL_SCRUB_RATE_IN_MASK,
+								   params->rate);
+	} else {
+		pr_err("Invalid patrol scrub parameter to set\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static const struct rasf_hw_scrub_ops ras2_hw_ops = {
+	.enable_scrub = ras2_enable_patrol_scrub,
+	.enable_background_scrub = ras2_enable_background_scrub,
+	.get_scrub_params = ras2_get_patrol_scrub_params,
+	.set_scrub_params = ras2_set_patrol_scrub_params,
+};
+
+static const struct scrub_ops ras2_scrub_ops = {
+	.is_visible = rasf_hw_scrub_is_visible,
+	.read = rasf_hw_scrub_read,
+	.write = rasf_hw_scrub_write,
+	.read_string = rasf_hw_scrub_read_strings,
+};
+
+static DEFINE_IDA(ras2_ida);
+
+static void devm_ras2_release(void *ctx)
+{
+	struct rasf_context *ras2_ctx = ctx;
+
+	ida_free(&ras2_ida, ras2_ctx->id);
+	rasf_unregister_pcc_channel(ras2_ctx);
+}
+
+static int ras2_probe(struct platform_device *pdev)
+{
+	int ret, id;
+	struct mbox_client *cl;
+	struct device *hw_scrub_dev;
+	struct rasf_context *ras2_ctx;
+	char scrub_name[RASF_MAX_NAME_LENGTH];
+
+	ras2_ctx = devm_kzalloc(&pdev->dev, sizeof(*ras2_ctx), GFP_KERNEL);
+	if (!ras2_ctx)
+		return -ENOMEM;
+
+	ras2_ctx->dev = &pdev->dev;
+	ras2_ctx->ops = &ras2_hw_ops;
+	spin_lock_init(&ras2_ctx->spinlock);
+	platform_set_drvdata(pdev, ras2_ctx);
+
+	cl = &ras2_ctx->mbox_client;
+	/* Request mailbox channel */
+	cl->dev = &pdev->dev;
+	cl->tx_done = ras2_tx_done;
+	cl->knows_txdone = true;
+	ras2_ctx->pcc_subspace_idx = *((int *)pdev->dev.platform_data);
+	dev_dbg(&pdev->dev, "pcc-subspace-id=%d\n", ras2_ctx->pcc_subspace_idx);
+	ret = rasf_register_pcc_channel(ras2_ctx);
+	if (ret < 0)
+		return ret;
+
+	ret = devm_add_action_or_reset(&pdev->dev, devm_ras2_release, ras2_ctx);
+	if (ret < 0)
+		return ret;
+
+	if (ras2_is_patrol_scrub_support(ras2_ctx) > 0) {
+		id = ida_alloc(&ras2_ida, GFP_KERNEL);
+		if (id < 0)
+			return id;
+		ras2_ctx->id = id;
+		snprintf(scrub_name, sizeof(scrub_name), "%s%d", RAS2_SCRUB, id);
+		dev_set_name(&pdev->dev, RAS2_ID_FORMAT, id);
+		hw_scrub_dev = devm_scrub_device_register(&pdev->dev, scrub_name,
+							  ras2_ctx, &ras2_scrub_ops,
+							  0, NULL);
+		if (IS_ERR(hw_scrub_dev))
+			return PTR_ERR(hw_scrub_dev);
+		ras2_ctx->scrub_dev = hw_scrub_dev;
+	}
+
+	return 0;
+}
+
+static const struct platform_device_id ras2_id_table[] = {
+	{ .name = "ras2", },
+	{ }
+};
+MODULE_DEVICE_TABLE(platform, ras2_id_table);
+
+static struct platform_driver ras2_driver = {
+	.probe = ras2_probe,
+	.driver = {
+		.name = "ras2",
+		.suppress_bind_attrs = true,
+	},
+	.id_table = ras2_id_table,
+};
+module_platform_driver(ras2_driver);
+
+MODULE_DESCRIPTION("ras2 memory driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/memory/rasf_common.c b/drivers/memory/rasf_common.c
new file mode 100644
index 000000000000..85f67308698d
--- /dev/null
+++ b/drivers/memory/rasf_common.c
@@ -0,0 +1,269 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * rasf_common.c - Common functions for memory RASF driver
+ *
+ * Copyright (c) 2023 HiSilicon Limited.
+ *
+ * This driver implements callback functions for the scrub
+ * configure driver to configure the parameters of the HW patrol
+ * scrubbers in the system, which are exposed via the ACPI
+ * RASF/RAS2 tables and PCC.
+ */
+
+#define pr_fmt(fmt)	"MEMORY RASF COMMON: " fmt
+
+#include <linux/acpi.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/printk.h>
+#include <linux/types.h>
+
+#include <acpi/rasf.h>	/* rasf_context/PCC helpers; header name assumed */
+#include <memory/rasf.h>
+
+static int enable_write(struct rasf_context *rasf_ctx, long val)
+{
+	int ret;
+	bool enable = val;
+
+	ret = rasf_ctx->ops->enable_scrub(rasf_ctx, enable);
+	if (ret) {
+		pr_err("failed to enable patrol scrub, enable=%d ret=%d\n",
+		       enable, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int enable_background_scrub_write(struct rasf_context *rasf_ctx, long val)
+{
+	int ret;
+	bool enable = val;
+
+	ret = rasf_ctx->ops->enable_background_scrub(rasf_ctx, enable);
+	if (ret) {
+		pr_err("failed to enable background patrol scrub, enable=%d ret=%d\n",
+		       enable, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int addr_base_read(struct rasf_context *rasf_ctx, u64 *val)
+{
+	int ret;
+	struct rasf_scrub_params params;
+
+	ret = rasf_ctx->ops->get_scrub_params(rasf_ctx, &params);
+	if (ret) {
+		pr_err("failed to get patrol scrub params, ret=%d\n", ret);
+		return ret;
+	}
+	*val = params.addr_base;
+
+	return 0;
+}
+
+static int addr_base_write(struct rasf_context *rasf_ctx, u64 val)
+{
+	int ret;
+	struct rasf_scrub_params params;
+
+	params.addr_base = val;
+	ret = rasf_ctx->ops->set_scrub_params(rasf_ctx, &params, RASF_MEM_SCRUB_PARAM_ADDR_BASE);
+	if (ret) {
+		pr_err("failed to set patrol scrub addr_base, ret=%d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int addr_size_read(struct rasf_context *rasf_ctx, u64 *val)
+{
+	int ret;
+	struct rasf_scrub_params params;
+
+	ret = rasf_ctx->ops->get_scrub_params(rasf_ctx, &params);
+	if (ret) {
+		pr_err("failed to get patrol scrub params, ret=%d\n", ret);
+		return ret;
+	}
+	*val = params.addr_size;
+
+	return 0;
+}
+
+static int addr_size_write(struct rasf_context *rasf_ctx, u64 val)
+{
+	int ret;
+	struct rasf_scrub_params params;
+
+	params.addr_size = val;
+	ret = rasf_ctx->ops->set_scrub_params(rasf_ctx, &params, RASF_MEM_SCRUB_PARAM_ADDR_SIZE);
+	if (ret) {
+		pr_err("failed to set patrol scrub addr_size, ret=%d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int rate_read(struct rasf_context *rasf_ctx, u64 *val)
+{
+	int ret;
+	struct rasf_scrub_params params;
+
+	ret = rasf_ctx->ops->get_scrub_params(rasf_ctx, &params);
+	if (ret) {
+		pr_err("failed to get patrol scrub params, ret=%d\n", ret);
+		return ret;
+	}
+	*val = params.rate;
+
+	return 0;
+}
+
+static int rate_write(struct rasf_context *rasf_ctx, long val)
+{
+	int ret;
+	struct rasf_scrub_params params;
+
+	params.rate = val;
+	ret = rasf_ctx->ops->set_scrub_params(rasf_ctx, &params, RASF_MEM_SCRUB_PARAM_RATE);
+	if (ret) {
+		pr_err("failed to set patrol scrub rate, ret=%d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int rate_available_read(struct rasf_context *rasf_ctx, char *buf)
+{
+	int ret;
+	struct rasf_scrub_params params;
+
+	ret = rasf_ctx->ops->get_scrub_params(rasf_ctx, &params);
+	if (ret) {
+		pr_err("failed to get patrol scrub params, ret=%d\n", ret);
+		return ret;
+	}
+
+	sprintf(buf, "%s\n", params.rate_avail);
+
+	return 0;
+}
+
+/**
+ * rasf_hw_scrub_is_visible() - Callback to return attribute visibility
+ * @dev: Pointer to scrub device
+ * @attr_id: Scrub attribute
+ * @region_id: ID of the memory region
+ *
+ * Returns: attribute mode if the attribute is visible, 0 otherwise
+ */
+umode_t rasf_hw_scrub_is_visible(struct device *dev, u32 attr_id, int region_id)
+{
+	switch (attr_id) {
+	case scrub_rate_available:
+		return 0444;
+	case scrub_enable:
+	case scrub_enable_background_scrub:
+		return 0200;
+	case scrub_addr_base:
+	case scrub_addr_size:
+	case scrub_rate:
+		return 0644;
+	default:
+		return 0;
+	}
+}
+
+/**
+ * rasf_hw_scrub_read() - Read callback for data attributes
+ * @device: Pointer to scrub device
+ * @attr_id: Scrub attribute
+ * @region_id: ID of the memory region
+ * @val: Pointer to the returned data
+ *
+ * Returns: 0 on success, an error otherwise
+ */
+int rasf_hw_scrub_read(struct device *device, u32 attr_id, int region_id, u64 *val)
+{
+	struct rasf_context *rasf_ctx;
+
+	rasf_ctx = dev_get_drvdata(device);
+
+	switch (attr_id) {
+	case scrub_addr_base:
+		return addr_base_read(rasf_ctx, val);
+	case scrub_addr_size:
+		return addr_size_read(rasf_ctx, val);
+	case scrub_rate:
+		return rate_read(rasf_ctx, val);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+/**
+ * rasf_hw_scrub_write() - Write callback for data attributes
+ * @device: Pointer to scrub device
+ * @attr_id: Scrub attribute
+ * @region_id: ID of the memory region
+ * @val: Value to write
+ *
+ * Returns: 0 on success, an error otherwise
+ */
+int rasf_hw_scrub_write(struct device *device, u32 attr_id, int region_id, u64 val)
+{
+	struct rasf_context *rasf_ctx;
+
+	rasf_ctx = dev_get_drvdata(device);
+
+	switch (attr_id) {
+	case scrub_addr_base:
+		return addr_base_write(rasf_ctx, val);
+	case scrub_addr_size:
+		return addr_size_write(rasf_ctx, val);
+	case scrub_enable:
+		return enable_write(rasf_ctx, val);
+	case scrub_enable_background_scrub:
+		return enable_background_scrub_write(rasf_ctx, val);
+	case scrub_rate:
+		return rate_write(rasf_ctx, val);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+/**
+ * rasf_hw_scrub_read_strings() - Read callback for string attributes
+ * @device: Pointer to scrub device
+ * @attr_id: Scrub attribute
+ * @region_id: ID of the memory region
+ * @buf: Pointer to the buffer for copying returned string
+ *
+ * Returns: 0 on success, an error otherwise
+ */
+int rasf_hw_scrub_read_strings(struct device *device, u32 attr_id, int region_id,
+			       char *buf)
+{
+	struct rasf_context *rasf_ctx;
+
+	rasf_ctx = dev_get_drvdata(device);
+
+	switch (attr_id) {
+	case scrub_rate_available:
+		return rate_available_read(rasf_ctx, buf);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
diff --git a/include/memory/rasf.h b/include/memory/rasf.h
new file mode 100755
index 000000000000..aacfa84dcc6f
--- /dev/null
+++ b/include/memory/rasf.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */
+/*
+ * Memory RASF driver header file
+ *
+ * Copyright (c) 2023 HiSilicon Limited
+ */
+
+#ifndef _RASF_H
+#define _RASF_H
+
+#include <linux/types.h>
+
+struct device;
+struct rasf_context;
+
+#define RASF_MAX_NAME_LENGTH		64
+#define RASF_MAX_RATE_RANGE_LENGTH	64
+
+/*
+ * Data structures for RASF
+ */
+
+/**
+ * struct rasf_scrub_params - RASF scrub parameter data structure.
+ * @addr_base:  [IN] Base address of the address range to be patrol scrubbed.
+ *              [OUT] Base address of the actual address range.
+ * @addr_size:  [IN] Size of the address range to be patrol scrubbed.
+ *              [OUT] Size of the actual address range.
+ * @flags:      [OUT] The platform returns this value in response to
+ *              GET_PATROL_PARAMETERS.
+ *              For RASF and RAS2:
+ *              Bit [0]: set if the memory scrubber is already running
+ *              for the address range specified in "Actual Address Range".
+ *              For RASF:
+ *              Bits [3:1]: Current patrol scrub rate, if Bit [0] is set.
+ * @rate:       [IN] Requested patrol scrub rate.
+ *              [OUT] Current patrol scrub rate.
+ * @rate_avail: [OUT] Supported patrol scrub rates.
+ */
+struct rasf_scrub_params {
+	u64 addr_base;
+	u64 addr_size;
+	u16 flags;
+	u32 rate;
+	char rate_avail[RASF_MAX_RATE_RANGE_LENGTH];
+};
+
+enum {
+	RASF_MEM_SCRUB_PARAM_ADDR_BASE = 0,
+	RASF_MEM_SCRUB_PARAM_ADDR_SIZE,
+	RASF_MEM_SCRUB_PARAM_RATE,
+};
+
+/**
+ * struct rasf_hw_scrub_ops - RASF HW scrub device operations
+ * @enable_scrub: Function to enable/disable the RASF/RAS2 patrol scrubber.
+ *	Parameters are:
+ *	@rasf_ctx: Pointer to RASF/RAS2 context structure.
+ *	@enable: enable/disable the RASF/RAS2 patrol scrubber.
+ *	The function returns 0 on success or a negative error number.
+ * @enable_background_scrub: Function to enable/disable background scrubbing.
+ *	Parameters are:
+ *	@rasf_ctx: Pointer to RASF/RAS2 context structure.
+ *	@enable: enable/disable background patrol scrubbing.
+ *	The function returns 0 on success or a negative error number.
+ * @get_scrub_params: Read scrubber parameters. Mandatory.
+ *	Parameters are:
+ *	@rasf_ctx: Pointer to RASF/RAS2 context structure.
+ *	@params: Pointer to scrub params data structure.
+ *	The function returns 0 on success or a negative error number.
+ * @set_scrub_params: Set scrubber parameters. Mandatory.
+ *	Parameters are:
+ *	@rasf_ctx: Pointer to RASF/RAS2 context structure.
+ *	@params: Pointer to scrub params data structure.
+ *	@param_type: Scrub parameter type to set.
+ *	The function returns 0 on success or a negative error number.
+ */
+struct rasf_hw_scrub_ops {
+	int (*enable_scrub)(struct rasf_context *rasf_ctx, bool enable);
+	int (*enable_background_scrub)(struct rasf_context *rasf_ctx, bool enable);
+	int (*get_scrub_params)(struct rasf_context *rasf_ctx,
+				struct rasf_scrub_params *params);
+	int (*set_scrub_params)(struct rasf_context *rasf_ctx,
+				struct rasf_scrub_params *params, u8 param_type);
+};
+
+umode_t rasf_hw_scrub_is_visible(struct device *dev, u32 attr_id, int region_id);
+int rasf_hw_scrub_read(struct device *dev, u32 attr_id, int region_id, u64 *val);
+int rasf_hw_scrub_write(struct device *dev, u32 attr_id, int region_id, u64 val);
+int rasf_hw_scrub_read_strings(struct device *dev, u32 attr_id, int region_id, char *buf);
+#endif /* _RASF_H */
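
[Editor's illustration, not part of the patch] For reviewers who want to see
how the pieces of rasf.h fit together: a minimal sketch of how a second
backend could hook its scrubber into the same ops table. Everything prefixed
my_ is invented for illustration; only struct rasf_hw_scrub_ops,
struct rasf_scrub_params and the RASF_MEM_SCRUB_PARAM_* selectors come from
the header above.

#include <linux/errno.h>
#include <linux/kernel.h>
#include <memory/rasf.h>

/* Hypothetical backend state; a real driver would talk to hardware. */
static u64 my_base;
static u64 my_size = 0x100000000ULL;
static u32 my_rate = 12;

static int my_enable(struct rasf_context *ctx, bool enable)
{
	pr_info("example scrubber %sabled\n", enable ? "en" : "dis");
	return 0;
}

static int my_get_params(struct rasf_context *ctx,
			 struct rasf_scrub_params *params)
{
	params->addr_base = my_base;
	params->addr_size = my_size;
	params->rate = my_rate;
	/* Advertise a fixed rate range for the example. */
	snprintf(params->rate_avail, RASF_MAX_RATE_RANGE_LENGTH, "6-24");
	return 0;
}

static int my_set_params(struct rasf_context *ctx,
			 struct rasf_scrub_params *params, u8 param_type)
{
	switch (param_type) {
	case RASF_MEM_SCRUB_PARAM_ADDR_BASE:
		my_base = params->addr_base;
		return 0;
	case RASF_MEM_SCRUB_PARAM_ADDR_SIZE:
		my_size = params->addr_size;
		return 0;
	case RASF_MEM_SCRUB_PARAM_RATE:
		my_rate = params->rate;
		return 0;
	default:
		return -EINVAL;
	}
}

static const struct rasf_hw_scrub_ops my_scrub_ops = {
	.enable_scrub = my_enable,
	.enable_background_scrub = my_enable,	/* same stub for brevity */
	.get_scrub_params = my_get_params,
	.set_scrub_params = my_set_params,
};

A provider would then pass an ops-backed context to
devm_scrub_device_register(), exactly as ras2_probe() does above.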
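[Editor's illustration, not part of the patch] The commit message says the
HW patrol scrubber is configured through scrub sysfs attributes. Below is a
hypothetical userspace walk-through. The directory layout belongs to the
scrub subsystem patches earlier in this series, so the path is an assumption;
only the attribute names (rate_available, rate, addr_base, addr_size, enable,
enable_background_scrub) appear in this patch.

#include <stdio.h>

#define SCRUB_SYSFS "/sys/class/ras/ras2_scrub0"	/* assumed path */

static int attr_write(const char *attr, const char *val)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", SCRUB_SYSFS, attr);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	char range[64] = "";
	char path[256];
	FILE *f;

	/* Read the supported rate range the driver builds in
	 * ras2_get_patrol_scrub_params(), e.g. "6-24". */
	snprintf(path, sizeof(path), "%s/rate_available", SCRUB_SYSFS);
	f = fopen(path, "r");
	if (f) {
		if (fgets(range, sizeof(range), f))
			printf("supported rates: %s", range);
		fclose(f);
	}

	/* Scrub 1 GiB starting at 4 GiB, at a rate within the range. */
	attr_write("addr_base", "0x100000000");
	attr_write("addr_size", "0x40000000");
	attr_write("rate", "12");
	attr_write("enable", "1");
	return 0;
}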