From patchwork Wed Oct 9 12:41:02 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 834199
Subject: [PATCH v13 01/18] EDAC: Add support for EDAC device features control
Date: Wed, 9 Oct 2024 13:41:02 +0100
Message-ID: <20241009124120.1124-2-shiju.jose@huawei.com>
In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com>
References: <20241009124120.1124-1-shiju.jose@huawei.com>
From: Shiju Jose

Add generic EDAC device feature control support for registering the RAS
features available in the system.
The driver exposes the feature control attributes to userspace in
/sys/bus/edac/devices/<dev-name>/<ras-feature>/.

Co-developed-by: Jonathan Cameron
Signed-off-by: Jonathan Cameron
Signed-off-by: Shiju Jose
---
 drivers/edac/edac_device.c | 105 +++++++++++++++++++++++++++++++++++++
 include/linux/edac.h       |  32 +++++++++++
 2 files changed, 137 insertions(+)

diff --git a/drivers/edac/edac_device.c b/drivers/edac/edac_device.c
index 621dc2a5d034..0b8aa8150239 100644
--- a/drivers/edac/edac_device.c
+++ b/drivers/edac/edac_device.c
@@ -570,3 +570,108 @@ void edac_device_handle_ue_count(struct edac_device_ctl_info *edac_dev,
 		       block ? block->name : "N/A", count, msg);
 }
 EXPORT_SYMBOL_GPL(edac_device_handle_ue_count);
+
+/* EDAC device feature */
+static void edac_dev_release(struct device *dev)
+{
+	struct edac_dev_feat_ctx *ctx = container_of(dev, struct edac_dev_feat_ctx, dev);
+
+	kfree(ctx->dev.groups);
+	kfree(ctx);
+}
+
+const struct device_type edac_dev_type = {
+	.name = "edac_dev",
+	.release = edac_dev_release,
+};
+
+static void edac_dev_unreg(void *data)
+{
+	device_unregister(data);
+}
+
+/**
+ * edac_dev_register - register device for RAS features with EDAC
+ * @parent: client device.
+ * @name: client device's name.
+ * @private: parent driver's data to store in the context if any.
+ * @num_features: number of RAS features to register.
+ * @ras_features: list of RAS features to register.
+ *
+ * Return:
+ *  * %0	- Success.
+ *  * %-EINVAL	- Invalid parameters passed.
+ *  * %-ENOMEM	- Dynamic memory allocation failed.
+ *
+ * The new edac_dev_feat_ctx would be freed automatically.
+ */
+int edac_dev_register(struct device *parent, char *name,
+		      void *private, int num_features,
+		      const struct edac_dev_feature *ras_features)
+{
+	const struct attribute_group **ras_attr_groups;
+	struct edac_dev_feat_ctx *ctx;
+	int attr_gcnt = 0;
+	int ret, feat;
+
+	if (!parent || !name || !num_features || !ras_features)
+		return -EINVAL;
+
+	/* Double parse to make space for attributes */
+	for (feat = 0; feat < num_features; feat++) {
+		switch (ras_features[feat].ft_type) {
+		/* Add feature specific code */
+		default:
+			return -EINVAL;
+		}
+	}
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+
+	ctx->dev.parent = parent;
+	ctx->private = private;
+
+	ras_attr_groups = kcalloc(attr_gcnt + 1, sizeof(*ras_attr_groups), GFP_KERNEL);
+	if (!ras_attr_groups) {
+		ret = -ENOMEM;
+		goto ctx_free;
+	}
+
+	attr_gcnt = 0;
+	for (feat = 0; feat < num_features; feat++, ras_features++) {
+		switch (ras_features->ft_type) {
+		/* Add feature specific code */
+		default:
+			ret = -EINVAL;
+			goto groups_free;
+		}
+	}
+
+	ras_attr_groups[attr_gcnt] = NULL;
+	ctx->dev.bus = edac_get_sysfs_subsys();
+	ctx->dev.type = &edac_dev_type;
+	ctx->dev.groups = ras_attr_groups;
+	dev_set_drvdata(&ctx->dev, ctx);
+
+	ret = dev_set_name(&ctx->dev, name);
+	if (ret)
+		goto groups_free;
+
+	ret = device_register(&ctx->dev);
+	if (ret) {
+		put_device(&ctx->dev);
+		goto groups_free;
+		return ret;
+	}
+
+	return devm_add_action_or_reset(parent, edac_dev_unreg, &ctx->dev);
+
+groups_free:
+	kfree(ras_attr_groups);
+ctx_free:
+	kfree(ctx);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(edac_dev_register);
diff --git a/include/linux/edac.h b/include/linux/edac.h
index b4ee8961e623..1db008a82690 100644
--- a/include/linux/edac.h
+++ b/include/linux/edac.h
@@ -661,4 +661,36 @@ static inline struct dimm_info *edac_get_dimm(struct mem_ctl_info *mci,
 	return mci->dimms[index];
 }
+
+/* EDAC device features */
+
+#define EDAC_FEAT_NAME_LEN	128
+
+/* RAS feature type */
+enum edac_dev_feat {
+	RAS_FEAT_MAX
+};
+
+/* EDAC device feature information structure */
+struct edac_dev_data {
+	u8 instance;
+	void *private;
+};
+
+struct device;
+
+struct edac_dev_feat_ctx {
+	struct device dev;
+	void *private;
+};
+
+struct edac_dev_feature {
+	enum edac_dev_feat ft_type;
+	u8 instance;
+	void *ctx;
+};
+
+int edac_dev_register(struct device *parent, char *dev_name,
+		      void *parent_pvt_data, int num_features,
+		      const struct edac_dev_feature *ras_features);
 #endif /* _LINUX_EDAC_H_ */
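For illustration only — a minimal sketch of how a client driver might call
this API once a concrete feature type exists (RAS_FEAT_SCRUB and struct
edac_scrub_ops are introduced by the next patches in this series; everything
named my_* below is hypothetical, not part of the patch):

#include <linux/edac.h>

/* Hypothetical client driver; the scrub callbacks would be filled in
 * by the driver elsewhere. */
static const struct edac_scrub_ops my_scrub_ops;

static int my_ras_features_init(struct device *parent, void *drv_data)
{
	struct edac_dev_feature feat = {
		.ft_type = RAS_FEAT_SCRUB,	/* added later in this series */
		.instance = 0,
		.scrub_ops = &my_scrub_ops,
		.ctx = drv_data,		/* handed back to the callbacks */
	};

	/* Creates /sys/bus/edac/devices/my_ras_dev/ with one attribute
	 * group per registered feature; teardown is devm-managed. */
	return edac_dev_register(parent, "my_ras_dev", drv_data, 1, &feat);
}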
From patchwork Wed Oct 9 12:41:03 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 834020
Subject: [PATCH v13 02/18] EDAC: Add scrub control feature
Date: Wed, 9 Oct 2024 13:41:03 +0100
Message-ID: <20241009124120.1124-3-shiju.jose@huawei.com>
In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com>
References: <20241009124120.1124-1-shiju.jose@huawei.com>
From: Shiju Jose

Add a generic EDAC scrub control to manage the memory scrubbers in the
system.
A device with a scrub feature registers with the EDAC device driver,
which retrieves the scrub descriptor from the EDAC scrub driver and
exposes the sysfs scrub control attributes for a scrub instance to
userspace in /sys/bus/edac/devices/<dev-name>/scrubX/.

The common sysfs scrub control interface abstracts the control of an
arbitrary scrubbing functionality to a common set of functions. The
sysfs scrub attribute nodes are present only if the client driver has
implemented the corresponding attribute callback function and passed it
in the ops to the EDAC device driver during registration.

Co-developed-by: Jonathan Cameron
Signed-off-by: Jonathan Cameron
Signed-off-by: Shiju Jose
---
 Documentation/ABI/testing/sysfs-edac-scrub |  69 ++++++
 drivers/edac/Makefile                      |   1 +
 drivers/edac/edac_device.c                 |  41 +++-
 drivers/edac/scrub.c                       | 223 +++++++++++++++++++++
 include/linux/edac.h                       |  38 ++++
 5 files changed, 367 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-edac-scrub
 create mode 100755 drivers/edac/scrub.c

diff --git a/Documentation/ABI/testing/sysfs-edac-scrub b/Documentation/ABI/testing/sysfs-edac-scrub
new file mode 100644
index 000000000000..b4f8f0bba17b
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-edac-scrub
@@ -0,0 +1,69 @@
+What:		/sys/bus/edac/devices/<dev-name>/scrubX
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		The sysfs EDAC bus devices /<dev-name>/scrubX subdirectory
+		belongs to an instance of the memory scrub control feature,
+		where the <dev-name> directory corresponds to a device/memory
+		region registered with the EDAC device driver for the
+		scrub control feature.
+		The sysfs scrub attr nodes are present only if the
+		client driver has implemented the corresponding attr
+		callback function and passed it in the ops to the EDAC RAS
+		feature driver during registration.
+
+What:		/sys/bus/edac/devices/<dev-name>/scrubX/addr_range_base
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RW) The base of the address range of the memory region
+		to be scrubbed (on-demand scrubbing).
+
+What:		/sys/bus/edac/devices/<dev-name>/scrubX/addr_range_size
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RW) The size of the address range of the memory region
+		to be scrubbed (on-demand scrubbing).
+
+What:		/sys/bus/edac/devices/<dev-name>/scrubX/enable_background
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RW) Start/Stop background (patrol) scrubbing if supported.
+
+What:		/sys/bus/edac/devices/<dev-name>/scrubX/enable_on_demand
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RW) Start/Stop on-demand scrubbing of the memory region
+		if supported.
+
+What:		/sys/bus/edac/devices/<dev-name>/scrubX/min_cycle_duration
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RO) Minimum scrub cycle duration in seconds supported
+		by the memory scrubber.
+
+What:		/sys/bus/edac/devices/<dev-name>/scrubX/max_cycle_duration
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RO) Maximum scrub cycle duration in seconds supported
+		by the memory scrubber.
+
+What:		/sys/bus/edac/devices/<dev-name>/scrubX/current_cycle_duration
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RW) The current scrub cycle duration in seconds, which must be
+		within the range supported by the memory scrubber.
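For illustration only — a minimal userspace sketch of using this ABI (not
from the patch set; the device name cxl_mem0 is hypothetical):

#include <stdio.h>

int main(void)
{
	FILE *f;

	f = fopen("/sys/bus/edac/devices/cxl_mem0/scrub0/current_cycle_duration", "w");
	if (!f)
		return 1;
	fprintf(f, "3600\n");	/* seconds, within min/max_cycle_duration */
	fclose(f);

	f = fopen("/sys/bus/edac/devices/cxl_mem0/scrub0/enable_background", "w");
	if (!f)
		return 1;
	fprintf(f, "1\n");	/* start patrol scrubbing */
	fclose(f);
	return 0;
}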
diff --git a/drivers/edac/Makefile b/drivers/edac/Makefile
index 4edfb83ffbee..a96a74de8b9e 100644
--- a/drivers/edac/Makefile
+++ b/drivers/edac/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_EDAC)	:= edac_core.o
 
 edac_core-y	:= edac_mc.o edac_device.o edac_mc_sysfs.o
 edac_core-y	+= edac_module.o edac_device_sysfs.o wq.o
+edac_core-y	+= scrub.o
 
 edac_core-$(CONFIG_EDAC_DEBUG)	+= debugfs.o
diff --git a/drivers/edac/edac_device.c b/drivers/edac/edac_device.c
index 0b8aa8150239..0c9da55d18bc 100644
--- a/drivers/edac/edac_device.c
+++ b/drivers/edac/edac_device.c
@@ -576,6 +576,7 @@ static void edac_dev_release(struct device *dev)
 {
 	struct edac_dev_feat_ctx *ctx = container_of(dev, struct edac_dev_feat_ctx, dev);
 
+	kfree(ctx->scrub);
 	kfree(ctx->dev.groups);
 	kfree(ctx);
 }
@@ -610,7 +611,9 @@ int edac_dev_register(struct device *parent, char *name,
 		      const struct edac_dev_feature *ras_features)
 {
 	const struct attribute_group **ras_attr_groups;
+	struct edac_dev_data *dev_data;
 	struct edac_dev_feat_ctx *ctx;
+	int scrub_cnt = 0, scrub_inst = 0;
 	int attr_gcnt = 0;
 	int ret, feat;
 
@@ -620,7 +623,10 @@ int edac_dev_register(struct device *parent, char *name,
 	/* Double parse to make space for attributes */
 	for (feat = 0; feat < num_features; feat++) {
 		switch (ras_features[feat].ft_type) {
-		/* Add feature specific code */
+		case RAS_FEAT_SCRUB:
+			attr_gcnt++;
+			scrub_cnt++;
+			break;
 		default:
 			return -EINVAL;
 		}
 	}
@@ -639,13 +645,36 @@ int edac_dev_register(struct device *parent, char *name,
 		goto ctx_free;
 	}
 
+	if (scrub_cnt) {
+		ctx->scrub = kcalloc(scrub_cnt, sizeof(*ctx->scrub), GFP_KERNEL);
+		if (!ctx->scrub) {
+			ret = -ENOMEM;
+			goto groups_free;
+		}
+	}
+
 	attr_gcnt = 0;
 	for (feat = 0; feat < num_features; feat++, ras_features++) {
 		switch (ras_features->ft_type) {
-		/* Add feature specific code */
+		case RAS_FEAT_SCRUB:
+			if (!ras_features->scrub_ops)
+				continue;
+			if (scrub_inst != ras_features->instance)
+				goto data_mem_free;
+			dev_data = &ctx->scrub[scrub_inst];
+			dev_data->instance = scrub_inst;
+			dev_data->scrub_ops = ras_features->scrub_ops;
+			dev_data->private = ras_features->ctx;
+			ret = edac_scrub_get_desc(parent, &ras_attr_groups[attr_gcnt],
+						  ras_features->instance);
+			if (ret)
+				goto data_mem_free;
+			scrub_inst++;
+			attr_gcnt++;
+			break;
 		default:
 			ret = -EINVAL;
-			goto groups_free;
+			goto data_mem_free;
 		}
 	}
 
@@ -657,17 +686,19 @@ int edac_dev_register(struct device *parent, char *name,
 
 	ret = dev_set_name(&ctx->dev, name);
 	if (ret)
-		goto groups_free;
+		goto data_mem_free;
 
 	ret = device_register(&ctx->dev);
 	if (ret) {
 		put_device(&ctx->dev);
-		goto groups_free;
+		goto data_mem_free;
 		return ret;
 	}
 
 	return devm_add_action_or_reset(parent, edac_dev_unreg, &ctx->dev);
 
+data_mem_free:
+	kfree(ctx->scrub);
 groups_free:
 	kfree(ras_attr_groups);
 ctx_free:
diff --git a/drivers/edac/scrub.c b/drivers/edac/scrub.c
new file mode 100755
index 000000000000..acc36ec7c926
--- /dev/null
+++ b/drivers/edac/scrub.c
@@ -0,0 +1,223 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generic EDAC scrub driver to control the memory scrubbers in the
+ * system. The common sysfs scrub interface abstracts the control of
+ * an arbitrary scrubbing functionality to a common set of functions.
+ *
+ * Copyright (c) 2024 HiSilicon Limited.
+ */
+
+#define pr_fmt(fmt) "EDAC SCRUB: " fmt
+
+#include <linux/edac.h>
+
+enum edac_scrub_attributes {
+	SCRUB_ADDR_RANGE_BASE,
+	SCRUB_ADDR_RANGE_SIZE,
+	SCRUB_ENABLE_BACKGROUND,
+	SCRUB_ENABLE_ON_DEMAND,
+	SCRUB_MIN_CYCLE_DURATION,
+	SCRUB_MAX_CYCLE_DURATION,
+	SCRUB_CUR_CYCLE_DURATION,
+	SCRUB_MAX_ATTRS
+};
+
+struct edac_scrub_dev_attr {
+	struct device_attribute dev_attr;
+	u8 instance;
+};
+
+struct edac_scrub_context {
+	char name[EDAC_FEAT_NAME_LEN];
+	struct edac_scrub_dev_attr scrub_dev_attr[SCRUB_MAX_ATTRS];
+	struct attribute *scrub_attrs[SCRUB_MAX_ATTRS + 1];
+	struct attribute_group group;
+};
+
+#define TO_SCRUB_DEV_ATTR(_dev_attr)	\
+	container_of(_dev_attr, struct edac_scrub_dev_attr, dev_attr)
+
+#define EDAC_SCRUB_ATTR_SHOW(attrib, cb, type, format)				\
+static ssize_t attrib##_show(struct device *ras_feat_dev,			\
+			     struct device_attribute *attr, char *buf)		\
+{										\
+	u8 inst = TO_SCRUB_DEV_ATTR(attr)->instance;				\
+	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);		\
+	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;		\
+	type data;								\
+	int ret;								\
+										\
+	ret = ops->cb(ras_feat_dev->parent, ctx->scrub[inst].private, &data);	\
+	if (ret)								\
+		return ret;							\
+										\
+	return sysfs_emit(buf, format, data);					\
+}

+EDAC_SCRUB_ATTR_SHOW(addr_range_base, read_range_base, u64, "0x%llx\n")
+EDAC_SCRUB_ATTR_SHOW(addr_range_size, read_range_size, u64, "0x%llx\n")
+EDAC_SCRUB_ATTR_SHOW(enable_background, get_enabled_bg, bool, "%u\n")
+EDAC_SCRUB_ATTR_SHOW(enable_on_demand, get_enabled_od, bool, "%u\n")
+EDAC_SCRUB_ATTR_SHOW(min_cycle_duration, get_min_cycle, u32, "%u\n")
+EDAC_SCRUB_ATTR_SHOW(max_cycle_duration, get_max_cycle, u32, "%u\n")
+EDAC_SCRUB_ATTR_SHOW(current_cycle_duration, get_cycle_duration, u32, "%u\n")
+
+#define EDAC_SCRUB_ATTR_STORE(attrib, cb, type, conv_func)			\
+static ssize_t attrib##_store(struct device *ras_feat_dev,			\
+			      struct device_attribute *attr,			\
+			      const char *buf, size_t len)			\
+{										\
+	u8 inst = TO_SCRUB_DEV_ATTR(attr)->instance;				\
+	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);		\
+	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;		\
+	type data;								\
+	int ret;								\
+										\
+	ret = conv_func(buf, 0, &data);						\
+	if (ret < 0)								\
+		return ret;							\
+										\
+	ret = ops->cb(ras_feat_dev->parent, ctx->scrub[inst].private, data);	\
+	if (ret)								\
+		return ret;							\
+										\
+	return len;								\
+}

+EDAC_SCRUB_ATTR_STORE(addr_range_base, write_range_base, u64, kstrtou64)
+EDAC_SCRUB_ATTR_STORE(addr_range_size, write_range_size, u64, kstrtou64)
+EDAC_SCRUB_ATTR_STORE(enable_background, set_enabled_bg, unsigned long, kstrtoul)
+EDAC_SCRUB_ATTR_STORE(enable_on_demand, set_enabled_od, unsigned long, kstrtoul)
+EDAC_SCRUB_ATTR_STORE(current_cycle_duration, set_cycle_duration, unsigned long, kstrtoul)
+
+static umode_t scrub_attr_visible(struct kobject *kobj, struct attribute *a, int attr_id)
+{
+	struct device *ras_feat_dev = kobj_to_dev(kobj);
+	struct device_attribute *dev_attr = container_of(a, struct device_attribute, attr);
+	u8 inst = TO_SCRUB_DEV_ATTR(dev_attr)->instance;
+	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
+	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
+
+	switch (attr_id) {
+	case SCRUB_ADDR_RANGE_BASE:
+		if (ops->read_range_base) {
+			if (ops->write_range_base)
+				return a->mode;
+			else
+				return 0444;
+		}
+		break;
+	case SCRUB_ADDR_RANGE_SIZE:
+		if (ops->read_range_size) {
+			if (ops->write_range_size)
+				return a->mode;
+			else
+				return 0444;
+		}
+		break;
+	case SCRUB_ENABLE_BACKGROUND:
+		if (ops->get_enabled_bg) {
+			if (ops->set_enabled_bg)
+				return a->mode;
+			else
+				return 0444;
+		}
+		break;
+	case SCRUB_ENABLE_ON_DEMAND:
+		if (ops->get_enabled_od) {
+			if (ops->set_enabled_od)
+				return a->mode;
+			else
+				return 0444;
+		}
+		break;
+	case SCRUB_MIN_CYCLE_DURATION:
+		if (ops->get_min_cycle)
+			return a->mode;
+		break;
+	case SCRUB_MAX_CYCLE_DURATION:
+		if (ops->get_max_cycle)
+			return a->mode;
+		break;
+	case SCRUB_CUR_CYCLE_DURATION:
+		if (ops->get_cycle_duration) {
+			if (ops->set_cycle_duration)
+				return a->mode;
+			else
+				return 0444;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+#define EDAC_SCRUB_ATTR_RO(_name, _instance)			\
+	((struct edac_scrub_dev_attr) { .dev_attr = __ATTR_RO(_name),	\
+					.instance = _instance })
+
+#define EDAC_SCRUB_ATTR_WO(_name, _instance)			\
+	((struct edac_scrub_dev_attr) { .dev_attr = __ATTR_WO(_name),	\
+					.instance = _instance })
+
+#define EDAC_SCRUB_ATTR_RW(_name, _instance)			\
+	((struct edac_scrub_dev_attr) { .dev_attr = __ATTR_RW(_name),	\
+					.instance = _instance })
+
+static int scrub_create_desc(struct device *scrub_dev,
+			     const struct attribute_group **attr_groups, u8 instance)
+{
+	struct edac_scrub_context *scrub_ctx;
+	struct attribute_group *group;
+	int i;
+	struct edac_scrub_dev_attr dev_attr[] = {
+		[SCRUB_ADDR_RANGE_BASE] = EDAC_SCRUB_ATTR_RW(addr_range_base, instance),
+		[SCRUB_ADDR_RANGE_SIZE] = EDAC_SCRUB_ATTR_RW(addr_range_size, instance),
+		[SCRUB_ENABLE_BACKGROUND] = EDAC_SCRUB_ATTR_RW(enable_background, instance),
+		[SCRUB_ENABLE_ON_DEMAND] = EDAC_SCRUB_ATTR_RW(enable_on_demand, instance),
+		[SCRUB_MIN_CYCLE_DURATION] = EDAC_SCRUB_ATTR_RO(min_cycle_duration, instance),
+		[SCRUB_MAX_CYCLE_DURATION] = EDAC_SCRUB_ATTR_RO(max_cycle_duration, instance),
+		[SCRUB_CUR_CYCLE_DURATION] = EDAC_SCRUB_ATTR_RW(current_cycle_duration, instance)
+	};
+
+	scrub_ctx = devm_kzalloc(scrub_dev, sizeof(*scrub_ctx), GFP_KERNEL);
+	if (!scrub_ctx)
+		return -ENOMEM;
+
+	group = &scrub_ctx->group;
+	for (i = 0; i < SCRUB_MAX_ATTRS; i++) {
+		memcpy(&scrub_ctx->scrub_dev_attr[i], &dev_attr[i], sizeof(dev_attr[i]));
+		scrub_ctx->scrub_attrs[i] = &scrub_ctx->scrub_dev_attr[i].dev_attr.attr;
+	}
+	sprintf(scrub_ctx->name, "%s%d", "scrub", instance);
+	group->name = scrub_ctx->name;
+	group->attrs = scrub_ctx->scrub_attrs;
+	group->is_visible = scrub_attr_visible;
+
+	attr_groups[0] = group;
+
+	return 0;
+}
+
+/**
+ * edac_scrub_get_desc - get EDAC scrub descriptors
+ * @scrub_dev: client device, with scrub support
+ * @attr_groups: pointer to attribute group container
+ * @instance: device's scrub instance number.
+ *
+ * Return:
+ *  * %0	- Success.
+ *  * %-EINVAL	- Invalid parameters passed.
+ *  * %-ENOMEM	- Dynamic memory allocation failed.
+ */
+int edac_scrub_get_desc(struct device *scrub_dev,
+			const struct attribute_group **attr_groups, u8 instance)
+{
+	if (!scrub_dev || !attr_groups)
+		return -EINVAL;
+
+	return scrub_create_desc(scrub_dev, attr_groups, instance);
+}
diff --git a/include/linux/edac.h b/include/linux/edac.h
index 1db008a82690..5344e2cf6808 100644
--- a/include/linux/edac.h
+++ b/include/linux/edac.h
@@ -668,11 +668,47 @@ static inline struct dimm_info *edac_get_dimm(struct mem_ctl_info *mci,
 
 /* RAS feature type */
 enum edac_dev_feat {
+	RAS_FEAT_SCRUB,
 	RAS_FEAT_MAX
 };
 
+/**
+ * struct edac_scrub_ops - scrub device operations (all elements optional)
+ * @read_range_base: read base of scrubbing range.
+ * @read_range_size: read size of the scrubbing range.
+ * @write_range_base: set base of the scrubbing range.
+ * @write_range_size: set size of the scrubbing range.
+ * @get_enabled_bg: check if currently performing background scrub.
+ * @set_enabled_bg: start or stop a bg-scrub.
+ * @get_enabled_od: check if currently performing on-demand scrub.
+ * @set_enabled_od: start or stop an on-demand scrub.
+ * @get_min_cycle: get minimum supported scrub cycle duration in seconds.
+ * @get_max_cycle: get maximum supported scrub cycle duration in seconds.
+ * @get_cycle_duration: get current scrub cycle duration in seconds.
+ * @set_cycle_duration: set current scrub cycle duration in seconds.
+ */
+struct edac_scrub_ops {
+	int (*read_range_base)(struct device *dev, void *drv_data, u64 *base);
+	int (*read_range_size)(struct device *dev, void *drv_data, u64 *size);
+	int (*write_range_base)(struct device *dev, void *drv_data, u64 base);
+	int (*write_range_size)(struct device *dev, void *drv_data, u64 size);
+	int (*get_enabled_bg)(struct device *dev, void *drv_data, bool *enable);
+	int (*set_enabled_bg)(struct device *dev, void *drv_data, bool enable);
+	int (*get_enabled_od)(struct device *dev, void *drv_data, bool *enable);
+	int (*set_enabled_od)(struct device *dev, void *drv_data, bool enable);
+	int (*get_min_cycle)(struct device *dev, void *drv_data, u32 *min);
+	int (*get_max_cycle)(struct device *dev, void *drv_data, u32 *max);
+	int (*get_cycle_duration)(struct device *dev, void *drv_data, u32 *cycle);
+	int (*set_cycle_duration)(struct device *dev, void *drv_data, u32 cycle);
+};
+
+int edac_scrub_get_desc(struct device *scrub_dev,
+			const struct attribute_group **attr_groups,
+			u8 instance);
+
 /* EDAC device feature information structure */
 struct edac_dev_data {
+	const struct edac_scrub_ops *scrub_ops;
 	u8 instance;
 	void *private;
 };
@@ -682,11 +718,13 @@ struct device;
 
 struct edac_dev_feat_ctx {
 	struct device dev;
 	void *private;
+	struct edac_dev_data *scrub;
 };
 
 struct edac_dev_feature {
 	enum edac_dev_feat ft_type;
 	u8 instance;
+	const struct edac_scrub_ops *scrub_ops;
 	void *ctx;
 };
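For context, a sketch (not part of the patch) of how a client driver might
back two of these callbacks; the my_* names and driver state are
hypothetical, and real hardware access is elided:

#include <linux/edac.h>

struct my_scrub {
	u32 cycle_secs;		/* hypothetical driver state */
};

static int my_get_cycle(struct device *dev, void *drv_data, u32 *cycle)
{
	struct my_scrub *ms = drv_data;

	*cycle = ms->cycle_secs;
	return 0;
}

static int my_set_cycle(struct device *dev, void *drv_data, u32 cycle)
{
	struct my_scrub *ms = drv_data;

	ms->cycle_secs = cycle;	/* a real driver would program hardware here */
	return 0;
}

/* Only current_cycle_duration becomes visible (RW); the other scrub
 * attributes stay hidden because their callbacks are NULL, matching
 * the scrub_attr_visible() logic above. */
static const struct edac_scrub_ops my_scrub_ops = {
	.get_cycle_duration = my_get_cycle,
	.set_cycle_duration = my_set_cycle,
};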
From patchwork Wed Oct 9 12:41:04 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 834198
Subject: [PATCH v13 03/18] EDAC: Add ECS control feature
Date: Wed, 9 Oct 2024 13:41:04 +0100
Message-ID: <20241009124120.1124-4-shiju.jose@huawei.com>
In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com>
References: <20241009124120.1124-1-shiju.jose@huawei.com>
From: Shiju Jose

Add EDAC ECS (Error Check Scrub) control to manage a memory device's ECS
feature.

Error Check Scrub (ECS) is a feature defined in the JEDEC DDR5 SDRAM
specification (JESD79-5). It allows the DRAM to internally read data,
correct single-bit errors and write the corrected data bits back to the
DRAM array, while providing transparency into the error counts.

A DDR5 device contains a number of memory media FRUs (Field Replaceable
Units). The DDR5 ECS feature, and thus the ECS control driver, supports
configuring the ECS parameters per FRU.

Memory devices that support the ECS feature register with the EDAC device
driver, which retrieves the ECS descriptor from the EDAC ECS driver and
exposes the sysfs ECS control attributes to userspace in
/sys/bus/edac/devices/<dev-name>/ecs_fruX/.

The common sysfs ECS control interface abstracts the control of an
arbitrary ECS functionality to a common set of functions. Support for the
ECS feature is added separately because the DDR5 ECS control attributes
are dissimilar to those of the scrub feature.

The sysfs ECS attr nodes are present only if the client driver has
implemented the corresponding attr callback function and passed it in the
ops to the EDAC RAS feature driver during registration.
Co-developed-by: Jonathan Cameron
Signed-off-by: Jonathan Cameron
Signed-off-by: Shiju Jose
---
 Documentation/ABI/testing/sysfs-edac-ecs |  77 ++++++++
 drivers/edac/Makefile                    |   2 +-
 drivers/edac/ecs.c                       | 240 +++++++++++++++++++++++
 drivers/edac/edac_device.c               |  15 ++
 include/linux/edac.h                     |  51 ++++-
 5 files changed, 382 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-edac-ecs
 create mode 100755 drivers/edac/ecs.c

diff --git a/Documentation/ABI/testing/sysfs-edac-ecs b/Documentation/ABI/testing/sysfs-edac-ecs
new file mode 100644
index 000000000000..abd440846ef2
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-edac-ecs
@@ -0,0 +1,77 @@
+What:		/sys/bus/edac/devices/<dev-name>/ecs_fruX
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		The sysfs EDAC bus devices /<dev-name>/ecs_fruX subdirectory
+		belongs to the memory media ECS (Error Check Scrub) control
+		feature, where the <dev-name> directory corresponds to a device
+		registered with the EDAC device driver for the ECS feature.
+		/ecs_fruX belongs to a media FRU (Field Replaceable Unit)
+		under the memory device.
+		The sysfs ECS attr nodes are present only if the client
+		driver has implemented the corresponding attr callback
+		function and passed it in the ops to the EDAC RAS feature
+		driver during registration.
+
+What:		/sys/bus/edac/devices/<dev-name>/ecs_fruX/log_entry_type
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RW) The log entry type of how the DDR5 ECS log is reported.
+		00b - per DRAM.
+		01b - per memory media FRU.
+
+What:		/sys/bus/edac/devices/<dev-name>/ecs_fruX/log_entry_type_per_dram
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RO) True if the current log entry type is per DRAM.
+
+What:		/sys/bus/edac/devices/<dev-name>/ecs_fruX/log_entry_type_per_memory_media
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RO) True if the current log entry type is per memory media FRU.
+
+What:		/sys/bus/edac/devices/<dev-name>/ecs_fruX/mode
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RW) The mode of how the DDR5 ECS counts the errors.
+		0 - ECS counts rows with errors.
+		1 - ECS counts codewords with errors.
+
+What:		/sys/bus/edac/devices/<dev-name>/ecs_fruX/mode_counts_rows
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RO) True if the current mode is ECS counting rows with errors.
+
+What:		/sys/bus/edac/devices/<dev-name>/ecs_fruX/mode_counts_codewords
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RO) True if the current mode is ECS counting codewords with errors.
+
+What:		/sys/bus/edac/devices/<dev-name>/ecs_fruX/reset
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(WO) ECS reset ECC counter.
+		0 - normal, ECC counter running actively.
+		1 - reset ECC counter to the default value.
+
+What:		/sys/bus/edac/devices/<dev-name>/ecs_fruX/threshold
+Date:		Oct 2024
+KernelVersion:	6.12
+Contact:	linux-edac@vger.kernel.org
+Description:
+		(RW) ECS threshold count per GB of memory cells.
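For illustration only — a minimal userspace sketch of this ABI (not from
the patch set; the device name cxl_mem0 is hypothetical, and the ecs_fruX
directories correspond to num_media_frus at registration time):

#include <stdio.h>

int main(void)
{
	unsigned int threshold;
	FILE *f;

	f = fopen("/sys/bus/edac/devices/cxl_mem0/ecs_fru0/threshold", "r");
	if (!f)
		return 1;
	if (fscanf(f, "%u", &threshold) == 1)
		printf("ECS threshold: %u errors per GB\n", threshold);
	fclose(f);

	f = fopen("/sys/bus/edac/devices/cxl_mem0/ecs_fru0/reset", "w");
	if (!f)
		return 1;
	fprintf(f, "1\n");	/* reset the ECC counter to its default */
	fclose(f);
	return 0;
}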
diff --git a/drivers/edac/Makefile b/drivers/edac/Makefile
index a96a74de8b9e..c2135b94de34 100644
--- a/drivers/edac/Makefile
+++ b/drivers/edac/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_EDAC)	:= edac_core.o
 
 edac_core-y	:= edac_mc.o edac_device.o edac_mc_sysfs.o
 edac_core-y	+= edac_module.o edac_device_sysfs.o wq.o
-edac_core-y	+= scrub.o
+edac_core-y	+= scrub.o ecs.o
 
 edac_core-$(CONFIG_EDAC_DEBUG)	+= debugfs.o
diff --git a/drivers/edac/ecs.c b/drivers/edac/ecs.c
new file mode 100755
index 000000000000..a2b64d7bf6b6
--- /dev/null
+++ b/drivers/edac/ecs.c
@@ -0,0 +1,240 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generic ECS driver to support control of the on-die error check
+ * scrub (e.g. DDR5 ECS). The common sysfs ECS interface abstracts
+ * the control of an arbitrary ECS functionality to a common set
+ * of functions.
+ *
+ * Copyright (c) 2024 HiSilicon Limited.
+ */
+
+#define pr_fmt(fmt) "EDAC ECS: " fmt
+
+#include <linux/edac.h>
+
+#define EDAC_ECS_FRU_NAME "ecs_fru"
+
+enum edac_ecs_attributes {
+	ECS_LOG_ENTRY_TYPE,
+	ECS_LOG_ENTRY_TYPE_PER_DRAM,
+	ECS_LOG_ENTRY_TYPE_PER_MEMORY_MEDIA,
+	ECS_MODE,
+	ECS_MODE_COUNTS_ROWS,
+	ECS_MODE_COUNTS_CODEWORDS,
+	ECS_RESET,
+	ECS_THRESHOLD,
+	ECS_MAX_ATTRS
+};
+
+struct edac_ecs_dev_attr {
+	struct device_attribute dev_attr;
+	int fru_id;
+};
+
+struct edac_ecs_fru_context {
+	char name[EDAC_FEAT_NAME_LEN];
+	struct edac_ecs_dev_attr ecs_dev_attr[ECS_MAX_ATTRS];
+	struct attribute *ecs_attrs[ECS_MAX_ATTRS + 1];
+	struct attribute_group group;
+};
+
+struct edac_ecs_context {
+	u16 num_media_frus;
+	struct edac_ecs_fru_context *fru_ctxs;
+};
+
+#define TO_ECS_DEV_ATTR(_dev_attr)	\
+	container_of(_dev_attr, struct edac_ecs_dev_attr, dev_attr)
+
+#define EDAC_ECS_ATTR_SHOW(attrib, cb, type, format)			\
+static ssize_t attrib##_show(struct device *ras_feat_dev,		\
+			     struct device_attribute *attr, char *buf)	\
+{									\
+	struct edac_ecs_dev_attr *ecs_dev_attr = TO_ECS_DEV_ATTR(attr);	\
+	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);	\
+	const struct edac_ecs_ops *ops = ctx->ecs.ecs_ops;		\
+	type data;							\
+	int ret;							\
+									\
+	ret = ops->cb(ras_feat_dev->parent, ctx->ecs.private,		\
+		      ecs_dev_attr->fru_id, &data);			\
+	if (ret)							\
+		return ret;						\
+									\
+	return sysfs_emit(buf, format, data);				\
+}

+EDAC_ECS_ATTR_SHOW(log_entry_type, get_log_entry_type, u32, "%u\n")
+EDAC_ECS_ATTR_SHOW(log_entry_type_per_dram, get_log_entry_type_per_dram, u32, "%u\n")
+EDAC_ECS_ATTR_SHOW(log_entry_type_per_memory_media, get_log_entry_type_per_memory_media,
+		   u32, "%u\n")
+EDAC_ECS_ATTR_SHOW(mode, get_mode, u32, "%u\n")
+EDAC_ECS_ATTR_SHOW(mode_counts_rows, get_mode_counts_rows, u32, "%u\n")
+EDAC_ECS_ATTR_SHOW(mode_counts_codewords, get_mode_counts_codewords, u32, "%u\n")
+EDAC_ECS_ATTR_SHOW(threshold, get_threshold, u32, "%u\n")
+
+#define EDAC_ECS_ATTR_STORE(attrib, cb, type, conv_func)		\
+static ssize_t attrib##_store(struct device *ras_feat_dev,		\
+			      struct device_attribute *attr,		\
+			      const char *buf, size_t len)		\
+{									\
+	struct edac_ecs_dev_attr *ecs_dev_attr = TO_ECS_DEV_ATTR(attr);	\
+	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);	\
+	const struct edac_ecs_ops *ops = ctx->ecs.ecs_ops;		\
+	type data;							\
+	int ret;							\
+									\
+	ret = conv_func(buf, 0, &data);					\
+	if (ret < 0)							\
+		return ret;						\
+									\
+	ret = ops->cb(ras_feat_dev->parent, ctx->ecs.private,		\
+		      ecs_dev_attr->fru_id, data);			\
+	if (ret)							\
+		return ret;						\
+									\
+	return len;							\
+}

+EDAC_ECS_ATTR_STORE(log_entry_type, set_log_entry_type, unsigned long, kstrtoul)
+EDAC_ECS_ATTR_STORE(mode,
		    set_mode, unsigned long, kstrtoul)
+EDAC_ECS_ATTR_STORE(reset, reset, unsigned long, kstrtoul)
+EDAC_ECS_ATTR_STORE(threshold, set_threshold, unsigned long, kstrtoul)
+
+static umode_t ecs_attr_visible(struct kobject *kobj, struct attribute *a, int attr_id)
+{
+	struct device *ras_feat_dev = kobj_to_dev(kobj);
+	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
+	const struct edac_ecs_ops *ops = ctx->ecs.ecs_ops;
+
+	switch (attr_id) {
+	case ECS_LOG_ENTRY_TYPE:
+		if (ops->get_log_entry_type) {
+			if (ops->set_log_entry_type)
+				return a->mode;
+			else
+				return 0444;
+		}
+		break;
+	case ECS_LOG_ENTRY_TYPE_PER_DRAM:
+		if (ops->get_log_entry_type_per_dram)
+			return a->mode;
+		break;
+	case ECS_LOG_ENTRY_TYPE_PER_MEMORY_MEDIA:
+		if (ops->get_log_entry_type_per_memory_media)
+			return a->mode;
+		break;
+	case ECS_MODE:
+		if (ops->get_mode) {
+			if (ops->set_mode)
+				return a->mode;
+			else
+				return 0444;
+		}
+		break;
+	case ECS_MODE_COUNTS_ROWS:
+		if (ops->get_mode_counts_rows)
+			return a->mode;
+		break;
+	case ECS_MODE_COUNTS_CODEWORDS:
+		if (ops->get_mode_counts_codewords)
+			return a->mode;
+		break;
+	case ECS_RESET:
+		if (ops->reset)
+			return a->mode;
+		break;
+	case ECS_THRESHOLD:
+		if (ops->get_threshold) {
+			if (ops->set_threshold)
+				return a->mode;
+			else
+				return 0444;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+#define EDAC_ECS_ATTR_RO(_name, _fru_id)			\
+	((struct edac_ecs_dev_attr) { .dev_attr = __ATTR_RO(_name),	\
+				      .fru_id = _fru_id })
+
+#define EDAC_ECS_ATTR_WO(_name, _fru_id)			\
+	((struct edac_ecs_dev_attr) { .dev_attr = __ATTR_WO(_name),	\
+				      .fru_id = _fru_id })
+
+#define EDAC_ECS_ATTR_RW(_name, _fru_id)			\
+	((struct edac_ecs_dev_attr) { .dev_attr = __ATTR_RW(_name),	\
+				      .fru_id = _fru_id })
+
+static int ecs_create_desc(struct device *ecs_dev,
+			   const struct attribute_group **attr_groups, u16 num_media_frus)
+{
+	struct edac_ecs_context *ecs_ctx;
+	u32 fru;
+
+	ecs_ctx = devm_kzalloc(ecs_dev, sizeof(*ecs_ctx), GFP_KERNEL);
+	if (!ecs_ctx)
+		return -ENOMEM;
+
+	ecs_ctx->num_media_frus = num_media_frus;
+	ecs_ctx->fru_ctxs = devm_kcalloc(ecs_dev, num_media_frus,
+					 sizeof(*ecs_ctx->fru_ctxs),
+					 GFP_KERNEL);
+	if (!ecs_ctx->fru_ctxs)
+		return -ENOMEM;
+
+	for (fru = 0; fru < num_media_frus; fru++) {
+		struct edac_ecs_fru_context *fru_ctx = &ecs_ctx->fru_ctxs[fru];
+		struct attribute_group *group = &fru_ctx->group;
+		int i;
+
+		fru_ctx->ecs_dev_attr[ECS_LOG_ENTRY_TYPE] = EDAC_ECS_ATTR_RW(log_entry_type, fru);
+		fru_ctx->ecs_dev_attr[ECS_LOG_ENTRY_TYPE_PER_DRAM] =
+				EDAC_ECS_ATTR_RO(log_entry_type_per_dram, fru);
+		fru_ctx->ecs_dev_attr[ECS_LOG_ENTRY_TYPE_PER_MEMORY_MEDIA] =
+				EDAC_ECS_ATTR_RO(log_entry_type_per_memory_media, fru);
+		fru_ctx->ecs_dev_attr[ECS_MODE] = EDAC_ECS_ATTR_RW(mode, fru);
+		fru_ctx->ecs_dev_attr[ECS_MODE_COUNTS_ROWS] =
+				EDAC_ECS_ATTR_RO(mode_counts_rows, fru);
+		fru_ctx->ecs_dev_attr[ECS_MODE_COUNTS_CODEWORDS] =
+				EDAC_ECS_ATTR_RO(mode_counts_codewords, fru);
+		fru_ctx->ecs_dev_attr[ECS_RESET] = EDAC_ECS_ATTR_WO(reset, fru);
+		fru_ctx->ecs_dev_attr[ECS_THRESHOLD] = EDAC_ECS_ATTR_RW(threshold, fru);
+		for (i = 0; i < ECS_MAX_ATTRS; i++)
+			fru_ctx->ecs_attrs[i] = &fru_ctx->ecs_dev_attr[i].dev_attr.attr;
+
+		sprintf(fru_ctx->name, "%s%d", EDAC_ECS_FRU_NAME, fru);
+		group->name = fru_ctx->name;
+		group->attrs = fru_ctx->ecs_attrs;
+		group->is_visible = ecs_attr_visible;
+
+		attr_groups[fru] = group;
+	}
+
+	return 0;
+}
+
+/**
+ * edac_ecs_get_desc - get EDAC ECS descriptors
+ * @ecs_dev: client device, supports ECS feature
+ * @attr_groups:
pointer to attribute group container
+ * @num_media_frus: number of media FRUs in the device
+ *
+ * Return:
+ *  * %0	- Success.
+ *  * %-EINVAL	- Invalid parameters passed.
+ *  * %-ENOMEM	- Dynamic memory allocation failed.
+ */
+int edac_ecs_get_desc(struct device *ecs_dev,
+		      const struct attribute_group **attr_groups, u16 num_media_frus)
+{
+	if (!ecs_dev || !attr_groups || !num_media_frus)
+		return -EINVAL;
+
+	return ecs_create_desc(ecs_dev, attr_groups, num_media_frus);
+}
diff --git a/drivers/edac/edac_device.c b/drivers/edac/edac_device.c
index 0c9da55d18bc..1743ce8e57a7 100644
--- a/drivers/edac/edac_device.c
+++ b/drivers/edac/edac_device.c
@@ -627,6 +627,9 @@ int edac_dev_register(struct device *parent, char *name,
 			attr_gcnt++;
 			scrub_cnt++;
 			break;
+		case RAS_FEAT_ECS:
+			attr_gcnt += ras_features[feat].ecs_info.num_media_frus;
+			break;
 		default:
 			return -EINVAL;
 		}
@@ -672,6 +675,18 @@ int edac_dev_register(struct device *parent, char *name,
 			scrub_inst++;
 			attr_gcnt++;
 			break;
+		case RAS_FEAT_ECS:
+			if (!ras_features->ecs_ops)
+				continue;
+			dev_data = &ctx->ecs;
+			dev_data->ecs_ops = ras_features->ecs_ops;
+			dev_data->private = ras_features->ctx;
+			ret = edac_ecs_get_desc(parent, &ras_attr_groups[attr_gcnt],
+						ras_features->ecs_info.num_media_frus);
+			if (ret)
+				goto data_mem_free;
+			attr_gcnt += ras_features->ecs_info.num_media_frus;
+			break;
 		default:
 			ret = -EINVAL;
 			goto data_mem_free;
diff --git a/include/linux/edac.h b/include/linux/edac.h
index 5344e2cf6808..20bdb08c7626 100644
--- a/include/linux/edac.h
+++ b/include/linux/edac.h
@@ -669,6 +669,7 @@ static inline struct dimm_info *edac_get_dimm(struct mem_ctl_info *mci,
 /* RAS feature type */
 enum edac_dev_feat {
 	RAS_FEAT_SCRUB,
+	RAS_FEAT_ECS,
 	RAS_FEAT_MAX
 };
 
@@ -706,9 +707,50 @@ int edac_scrub_get_desc(struct device *scrub_dev,
 			const struct attribute_group **attr_groups,
 			u8 instance);
 
+/**
+ * struct edac_ecs_ops - ECS device operations (all elements optional)
+ * @get_log_entry_type: read the log entry type value.
+ * @set_log_entry_type: set the log entry type value.
+ * @get_log_entry_type_per_dram: read the log entry type per dram value.
+ * @get_log_entry_type_per_memory_media: read the log entry type per memory media value.
+ * @get_mode: read the mode value.
+ * @set_mode: set the mode value.
+ * @get_mode_counts_rows: read the mode counts rows value.
+ * @get_mode_counts_codewords: read the mode counts codewords value.
+ * @reset: reset the ECS counter.
+ * @get_threshold: read the threshold value.
+ * @set_threshold: set the threshold value.
+ */
+struct edac_ecs_ops {
+	int (*get_log_entry_type)(struct device *dev, void *drv_data, int fru_id, u32 *val);
+	int (*set_log_entry_type)(struct device *dev, void *drv_data, int fru_id, u32 val);
+	int (*get_log_entry_type_per_dram)(struct device *dev, void *drv_data,
+					   int fru_id, u32 *val);
+	int (*get_log_entry_type_per_memory_media)(struct device *dev, void *drv_data,
+						   int fru_id, u32 *val);
+	int (*get_mode)(struct device *dev, void *drv_data, int fru_id, u32 *val);
+	int (*set_mode)(struct device *dev, void *drv_data, int fru_id, u32 val);
+	int (*get_mode_counts_rows)(struct device *dev, void *drv_data, int fru_id, u32 *val);
+	int (*get_mode_counts_codewords)(struct device *dev, void *drv_data, int fru_id, u32 *val);
+	int (*reset)(struct device *dev, void *drv_data, int fru_id, u32 val);
+	int (*get_threshold)(struct device *dev, void *drv_data, int fru_id, u32 *threshold);
+	int (*set_threshold)(struct device *dev, void *drv_data, int fru_id, u32 threshold);
+};
+
+struct edac_ecs_ex_info {
+	u16 num_media_frus;
+};
+
+int edac_ecs_get_desc(struct device *ecs_dev,
+		      const struct attribute_group **attr_groups,
+		      u16 num_media_frus);
+
 /* EDAC device feature information structure */
 struct edac_dev_data {
-	const struct edac_scrub_ops *scrub_ops;
+	union {
+		const struct edac_scrub_ops *scrub_ops;
+		const struct edac_ecs_ops *ecs_ops;
+	};
 	u8 instance;
 	void *private;
 };
@@ -719,13 +761,18 @@ struct edac_dev_feat_ctx {
 	struct device dev;
 	void *private;
 	struct edac_dev_data *scrub;
+	struct edac_dev_data ecs;
 };
 
 struct edac_dev_feature {
 	enum edac_dev_feat ft_type;
 	u8 instance;
-	const struct edac_scrub_ops *scrub_ops;
+	union {
+		const struct edac_scrub_ops *scrub_ops;
+		const struct edac_ecs_ops *ecs_ops;
+	};
 	void *ctx;
+	struct edac_ecs_ex_info ecs_info;
 };
 
 int edac_dev_register(struct device *parent, char *dev_name,
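A sketch (not part of the patch; my_* names hypothetical) of how a driver
might register an ECS feature, illustrating the new ops union and the
ecs_info field that sizes the per-FRU attribute groups:

#include <linux/edac.h>

/* ECS callbacks would be filled in by the client driver. */
static const struct edac_ecs_ops my_ecs_ops;

static int my_register_ecs(struct device *parent, u16 num_frus)
{
	struct edac_dev_feature feat = {
		.ft_type = RAS_FEAT_ECS,
		.ecs_ops = &my_ecs_ops,	/* union member for ECS */
		.ctx = NULL,
		.ecs_info = {
			.num_media_frus = num_frus,	/* one ecs_fruX dir each */
		},
	};

	return edac_dev_register(parent, "my_mem_dev", NULL, 1, &feat);
}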
From patchwork Wed Oct 9 12:41:05 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 834019
Subject: [PATCH v13 04/18] cxl: move cxl headers to new include/cxl/ directory
Date: Wed, 9 Oct 2024 13:41:05 +0100
Message-ID: <20241009124120.1124-5-shiju.jose@huawei.com>
In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com>
References: <20241009124120.1124-1-shiju.jose@huawei.com>
From: Dave Jiang

Group all cxl related kernel headers into the include/cxl/ directory.

Signed-off-by: Dave Jiang
Reviewed-by: Alison Schofield
Reviewed-by: Ira Weiny
Reviewed-by: Jonathan Cameron
---
 MAINTAINERS                                | 3 +--
 drivers/acpi/apei/einj-cxl.c               | 2 +-
 drivers/acpi/apei/ghes.c                   | 2 +-
 drivers/cxl/core/port.c                    | 2 +-
 drivers/cxl/cxlmem.h                       | 2 +-
 include/{linux/einj-cxl.h => cxl/einj.h}   | 0
 include/{linux/cxl-event.h => cxl/event.h} | 0
 7 files changed, 5 insertions(+), 6 deletions(-)
 rename include/{linux/einj-cxl.h => cxl/einj.h} (100%)
 rename include/{linux/cxl-event.h => cxl/event.h} (100%)

diff --git a/MAINTAINERS b/MAINTAINERS
index cc40a9d9b8cd..ae17d28c5f73 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5620,8 +5620,7 @@ L:	linux-cxl@vger.kernel.org
 S:	Maintained
 F:	Documentation/driver-api/cxl
 F:	drivers/cxl/
-F:	include/linux/einj-cxl.h
-F:	include/linux/cxl-event.h
+F:	include/cxl/
 F:	include/uapi/linux/cxl_mem.h
 F:	tools/testing/cxl/
diff --git a/drivers/acpi/apei/einj-cxl.c b/drivers/acpi/apei/einj-cxl.c
index 8b8be0c90709..4f81a119ec08 100644
--- a/drivers/acpi/apei/einj-cxl.c
+++ b/drivers/acpi/apei/einj-cxl.c
@@ -7,9 +7,9 @@
  *
  * Author: Ben Cheatham
  */
-#include <linux/einj-cxl.h>
 #include
 #include
+#include <cxl/einj.h>
 
 #include "apei-internal.h"
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 623cc0cb4a65..ada93cfde9ba 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -27,7 +27,6 @@
 #include
 #include
 #include
-#include <linux/cxl-event.h>
 #include
 #include
 #include
@@ -50,6 +49,7 @@
 #include
 #include
 #include
+#include <cxl/event.h>
 #include
 #include "apei-internal.h"
diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
index 1d5007e3795a..e0b28a6730c1 100644
--- a/drivers/cxl/core/port.c
+++ b/drivers/cxl/core/port.c
@@ -3,7 +3,6 @@
 #include
 #include
 #include
-#include <linux/einj-cxl.h>
 #include
 #include
 #include
@@ -11,6 +10,7 @@
 #include
 #include
 #include
+#include <cxl/einj.h>
 #include
 #include
 #include
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index afb53d058d62..a81a8982bf93 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -6,8 +6,8 @@
 #include
 #include
 #include
-#include <linux/cxl-event.h>
 #include
+#include <cxl/event.h>
 #include "cxl.h"
 
 /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
diff --git a/include/linux/einj-cxl.h b/include/cxl/einj.h
similarity index 100%
rename from include/linux/einj-cxl.h
rename to include/cxl/einj.h
diff --git a/include/linux/cxl-event.h b/include/cxl/event.h
similarity index 100%
rename from include/linux/cxl-event.h
rename to include/cxl/event.h

From patchwork Wed Oct 9 12:41:06 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 834197
Subject: [PATCH v13 05/18] cxl: Move mailbox related bits to the same context
Date: Wed, 9 Oct 2024 13:41:06 +0100
Message-ID: <20241009124120.1124-6-shiju.jose@huawei.com>
In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com>
References: <20241009124120.1124-1-shiju.jose@huawei.com>
From: Dave Jiang

Create a new 'struct cxl_mailbox' and move all mailbox related bits to
it. This allows isolation of all CXL mailbox data in order to export
some of the calls to external kernel callers and avoid exporting CXL
driver specific bits such as device states. The setup of 'struct
cxl_mailbox' is also split out with cxl_mailbox_init() so the mailbox
can be initialized independently.
Reviewed-by: Fan Ni
Reviewed-by: Alejandro Lucero
Reviewed-by: Alison Schofield
Link: https://patch.msgid.link/20240724185649.2574627-2-dave.jiang@intel.com
Signed-off-by: Dave Jiang
Reviewed-by: Ira Weiny
---
 drivers/cxl/core/mbox.c      | 57 ++++++++++++++++++++---------
 drivers/cxl/core/memdev.c    | 18 ++++++----
 drivers/cxl/cxlmem.h         | 21 +++++------
 drivers/cxl/pci.c            | 70 +++++++++++++++++++++++-------------
 drivers/cxl/pmem.c           |  4 ++-
 include/cxl/mailbox.h        | 28 +++++++++++++++
 tools/testing/cxl/test/mem.c | 38 ++++++++++++++------
 7 files changed, 164 insertions(+), 72 deletions(-)
 create mode 100644 include/cxl/mailbox.h

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index e5cdeafdf76e..4d37462125b9 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -244,16 +244,17 @@ static const char *cxl_mem_opcode_to_name(u16 opcode)
 int cxl_internal_send_cmd(struct cxl_memdev_state *mds,
 			  struct cxl_mbox_cmd *mbox_cmd)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	size_t out_size, min_out;
 	int rc;
 
-	if (mbox_cmd->size_in > mds->payload_size ||
-	    mbox_cmd->size_out > mds->payload_size)
+	if (mbox_cmd->size_in > cxl_mbox->payload_size ||
+	    mbox_cmd->size_out > cxl_mbox->payload_size)
 		return -E2BIG;
 
 	out_size = mbox_cmd->size_out;
 	min_out = mbox_cmd->min_out;
-	rc = mds->mbox_send(mds, mbox_cmd);
+	rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd);
 	/*
 	 * EIO is reserved for a payload size mismatch and mbox_send()
 	 * may not return this error.
@@ -353,6 +354,7 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox,
 			     struct cxl_memdev_state *mds, u16 opcode,
 			     size_t in_size, size_t out_size, u64 in_payload)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	*mbox = (struct cxl_mbox_cmd) {
 		.opcode = opcode,
 		.size_in = in_size,
@@ -374,7 +376,7 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox,
 
 	/* Prepare to handle a full payload for variable sized output */
 	if (out_size == CXL_VARIABLE_PAYLOAD)
-		mbox->size_out = mds->payload_size;
+		mbox->size_out = cxl_mbox->payload_size;
 	else
 		mbox->size_out = out_size;
 
@@ -398,6 +400,8 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd,
 			      const struct cxl_send_command *send_cmd,
 			      struct cxl_memdev_state *mds)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+
 	if (send_cmd->raw.rsvd)
 		return -EINVAL;
 
@@ -406,7 +410,7 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd,
 	 * gets passed along without further checking, so it must be
 	 * validated here.
 	 */
-	if (send_cmd->out.size > mds->payload_size)
+	if (send_cmd->out.size > cxl_mbox->payload_size)
 		return -EINVAL;
 
 	if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
@@ -494,6 +498,7 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd,
 				      struct cxl_memdev_state *mds,
 				      const struct cxl_send_command *send_cmd)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct cxl_mem_command mem_cmd;
 	int rc;
 
@@ -505,7 +510,7 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd,
 	 * supports, but output can be arbitrarily large (simply write out as
 	 * much data as the hardware provides).
 	 */
-	if (send_cmd->in.size > mds->payload_size)
+	if (send_cmd->in.size > cxl_mbox->payload_size)
 		return -EINVAL;
 
 	/* Sanitize and construct a cxl_mem_command */
@@ -591,6 +596,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds,
 					u64 out_payload, s32 *size_out,
 					u32 *retval)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct device *dev = mds->cxlds.dev;
 	int rc;
 
@@ -601,7 +607,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds,
 		cxl_mem_opcode_to_name(mbox_cmd->opcode),
 		mbox_cmd->opcode, mbox_cmd->size_in);
 
-	rc = mds->mbox_send(mds, mbox_cmd);
+	rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd);
 	if (rc)
 		goto out;
 
@@ -659,11 +665,12 @@ int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s)
 static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid,
 			u32 *size, u8 *out)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	u32 remaining = *size;
 	u32 offset = 0;
 
 	while (remaining) {
-		u32 xfer_size = min_t(u32, remaining, mds->payload_size);
+		u32 xfer_size = min_t(u32, remaining, cxl_mbox->payload_size);
 		struct cxl_mbox_cmd mbox_cmd;
 		struct cxl_mbox_get_log log;
 		int rc;
@@ -752,17 +759,18 @@ static void cxl_walk_cel(struct cxl_memdev_state *mds, size_t size, u8 *cel)
 static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state *mds)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct cxl_mbox_get_supported_logs *ret;
 	struct cxl_mbox_cmd mbox_cmd;
 	int rc;
 
-	ret = kvmalloc(mds->payload_size, GFP_KERNEL);
+	ret = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
 	if (!ret)
 		return ERR_PTR(-ENOMEM);
 
 	mbox_cmd = (struct cxl_mbox_cmd) {
 		.opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS,
-		.size_out = mds->payload_size,
+		.size_out = cxl_mbox->payload_size,
 		.payload_out = ret,
 		/* At least the record number field must be valid */
 		.min_out = 2,
@@ -910,6 +918,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
 				  enum cxl_event_log_type log,
 				  struct cxl_get_event_payload *get_pl)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct cxl_mbox_clear_event_payload *payload;
 	u16 total = le16_to_cpu(get_pl->record_count);
 	u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES;
@@ -920,8 +929,8 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
 	int i;
 
 	/* Payload size may limit the max handles */
-	if (pl_size > mds->payload_size) {
-		max_handles = (mds->payload_size - sizeof(*payload)) /
+	if (pl_size > cxl_mbox->payload_size) {
+		max_handles = (cxl_mbox->payload_size - sizeof(*payload)) /
 			      sizeof(__le16);
 		pl_size = struct_size(payload, handles, max_handles);
 	}
@@ -979,6 +988,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
 static void cxl_mem_get_records_log(struct cxl_memdev_state *mds,
 				    enum cxl_event_log_type type)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct cxl_memdev *cxlmd = mds->cxlds.cxlmd;
 	struct device *dev = mds->cxlds.dev;
 	struct cxl_get_event_payload *payload;
@@ -995,7 +1005,7 @@ static void cxl_mem_get_records_log(struct cxl_memdev_state *mds,
 		.payload_in = &log_type,
 		.size_in = sizeof(log_type),
 		.payload_out = payload,
-		.size_out = mds->payload_size,
+		.size_out = cxl_mbox->payload_size,
 		.min_out = struct_size(payload, records, 0),
 	};
 
@@ -1328,6 +1338,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
 		       struct cxl_region *cxlr)
 {
 	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct cxl_mbox_poison_out *po;
pi; int nr_records = 0; @@ -1346,7 +1357,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, .opcode = CXL_MBOX_OP_GET_POISON, .size_in = sizeof(pi), .payload_in = &pi, - .size_out = mds->payload_size, + .size_out = cxl_mbox->payload_size, .payload_out = po, .min_out = struct_size(po, record, 0), }; @@ -1382,7 +1393,9 @@ static void free_poison_buf(void *buf) /* Get Poison List output buffer is protected by mds->poison.lock */ static int cxl_poison_alloc_buf(struct cxl_memdev_state *mds) { - mds->poison.list_out = kvmalloc(mds->payload_size, GFP_KERNEL); + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; + + mds->poison.list_out = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); if (!mds->poison.list_out) return -ENOMEM; @@ -1408,6 +1421,19 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds) } EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL); +int cxl_mailbox_init(struct cxl_mailbox *cxl_mbox, struct device *host) +{ + if (!cxl_mbox || !host) + return -EINVAL; + + cxl_mbox->host = host; + mutex_init(&cxl_mbox->mbox_mutex); + rcuwait_init(&cxl_mbox->mbox_wait); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(cxl_mailbox_init, CXL); + struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) { struct cxl_memdev_state *mds; @@ -1418,7 +1444,6 @@ struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) return ERR_PTR(-ENOMEM); } - mutex_init(&mds->mbox_mutex); mutex_init(&mds->event.log_lock); mds->cxlds.dev = dev; mds->cxlds.reg_map.host = dev; diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index 0277726afd04..05bb84cb1274 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -58,7 +58,7 @@ static ssize_t payload_max_show(struct device *dev, if (!mds) return sysfs_emit(buf, "\n"); - return sysfs_emit(buf, "%zu\n", mds->payload_size); + return sysfs_emit(buf, "%zu\n", cxlds->cxl_mbox.payload_size); } static DEVICE_ATTR_RO(payload_max); @@ -124,15 +124,16 @@ static ssize_t security_state_show(struct device *dev, { struct cxl_memdev *cxlmd = to_cxl_memdev(dev); struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox; struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); unsigned long state = mds->security.state; int rc = 0; /* sync with latest submission state */ - mutex_lock(&mds->mbox_mutex); + mutex_lock(&cxl_mbox->mbox_mutex); if (mds->security.sanitize_active) rc = sysfs_emit(buf, "sanitize\n"); - mutex_unlock(&mds->mbox_mutex); + mutex_unlock(&cxl_mbox->mbox_mutex); if (rc) return rc; @@ -829,12 +830,13 @@ static enum fw_upload_err cxl_fw_prepare(struct fw_upload *fwl, const u8 *data, { struct cxl_memdev_state *mds = fwl->dd_handle; struct cxl_mbox_transfer_fw *transfer; + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; if (!size) return FW_UPLOAD_ERR_INVALID_SIZE; mds->fw.oneshot = struct_size(transfer, data, size) < - mds->payload_size; + cxl_mbox->payload_size; if (cxl_mem_get_fw_info(mds)) return FW_UPLOAD_ERR_HW_ERROR; @@ -854,6 +856,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, { struct cxl_memdev_state *mds = fwl->dd_handle; struct cxl_dev_state *cxlds = &mds->cxlds; + struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox; struct cxl_memdev *cxlmd = cxlds->cxlmd; struct cxl_mbox_transfer_fw *transfer; struct cxl_mbox_cmd mbox_cmd; @@ -877,7 +880,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, * sizeof(*transfer) is 128. 
These constraints imply that @cur_size * will always be 128b aligned. */ - cur_size = min_t(size_t, size, mds->payload_size - sizeof(*transfer)); + cur_size = min_t(size_t, size, cxl_mbox->payload_size - sizeof(*transfer)); remaining = size - cur_size; size_in = struct_size(transfer, data, cur_size); @@ -1059,16 +1062,17 @@ EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, CXL); static void sanitize_teardown_notifier(void *data) { struct cxl_memdev_state *mds = data; + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct kernfs_node *state; /* * Prevent new irq triggered invocations of the workqueue and * flush inflight invocations. */ - mutex_lock(&mds->mbox_mutex); + mutex_lock(&cxl_mbox->mbox_mutex); state = mds->security.sanitize_node; mds->security.sanitize_node = NULL; - mutex_unlock(&mds->mbox_mutex); + mutex_unlock(&cxl_mbox->mbox_mutex); cancel_delayed_work_sync(&mds->security.poll_dwork); sysfs_put(state); diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index a81a8982bf93..4ead415f8971 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -3,11 +3,12 @@ #ifndef __CXL_MEM_H__ #define __CXL_MEM_H__ #include +#include #include #include -#include #include #include +#include #include "cxl.h" /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */ @@ -424,6 +425,7 @@ struct cxl_dpa_perf { * @ram_res: Active Volatile memory capacity configuration * @serial: PCIe Device Serial Number * @type: Generic Memory Class device or Vendor Specific Memory device + * @cxl_mbox: CXL mailbox context */ struct cxl_dev_state { struct device *dev; @@ -438,8 +440,14 @@ struct cxl_dev_state { struct resource ram_res; u64 serial; enum cxl_devtype type; + struct cxl_mailbox cxl_mbox; }; +static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox) +{ + return dev_get_drvdata(cxl_mbox->host); +} + /** * struct cxl_memdev_state - Generic Type-3 Memory Device Class driver data * @@ -448,11 +456,8 @@ struct cxl_dev_state { * the functionality related to that like Identify Memory Device and Get * Partition Info * @cxlds: Core driver state common across Type-2 and Type-3 devices - * @payload_size: Size of space for payload - * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) * @lsa_size: Size of Label Storage Area * (CXL 2.0 8.2.9.5.1.1 Identify Memory Device) - * @mbox_mutex: Mutex to synchronize mailbox access. * @firmware_version: Firmware version for the memory device. * @enabled_cmds: Hardware commands found enabled in CEL. * @exclusive_cmds: Commands that are kernel-internal only @@ -470,17 +475,13 @@ struct cxl_dev_state { * @poison: poison driver state info * @security: security driver state info * @fw: firmware upload / activation state - * @mbox_wait: RCU wait for mbox send completely - * @mbox_send: @dev specific transport for transmitting mailbox commands * * See CXL 3.0 8.2.9.8.2 Capacity Configuration and Label Storage for * details on capacity parameters. 
*/ struct cxl_memdev_state { struct cxl_dev_state cxlds; - size_t payload_size; size_t lsa_size; - struct mutex mbox_mutex; /* Protects device mailbox and firmware */ char firmware_version[0x10]; DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX); DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX); @@ -500,10 +501,6 @@ struct cxl_memdev_state { struct cxl_poison_state poison; struct cxl_security_state security; struct cxl_fw_state fw; - - struct rcuwait mbox_wait; - int (*mbox_send)(struct cxl_memdev_state *mds, - struct cxl_mbox_cmd *cmd); }; static inline struct cxl_memdev_state * diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 4be35dc22202..eca6f533386b 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -11,6 +11,7 @@ #include #include #include +#include #include "cxlmem.h" #include "cxlpci.h" #include "cxl.h" @@ -124,6 +125,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id) u16 opcode; struct cxl_dev_id *dev_id = id; struct cxl_dev_state *cxlds = dev_id->cxlds; + struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox; struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); if (!cxl_mbox_background_complete(cxlds)) @@ -132,13 +134,13 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id) reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET); opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg); if (opcode == CXL_MBOX_OP_SANITIZE) { - mutex_lock(&mds->mbox_mutex); + mutex_lock(&cxl_mbox->mbox_mutex); if (mds->security.sanitize_node) mod_delayed_work(system_wq, &mds->security.poll_dwork, 0); - mutex_unlock(&mds->mbox_mutex); + mutex_unlock(&cxl_mbox->mbox_mutex); } else { /* short-circuit the wait in __cxl_pci_mbox_send_cmd() */ - rcuwait_wake_up(&mds->mbox_wait); + rcuwait_wake_up(&cxl_mbox->mbox_wait); } return IRQ_HANDLED; @@ -152,8 +154,9 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) struct cxl_memdev_state *mds = container_of(work, typeof(*mds), security.poll_dwork.work); struct cxl_dev_state *cxlds = &mds->cxlds; + struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox; - mutex_lock(&mds->mbox_mutex); + mutex_lock(&cxl_mbox->mbox_mutex); if (cxl_mbox_background_complete(cxlds)) { mds->security.poll_tmo_secs = 0; if (mds->security.sanitize_node) @@ -167,7 +170,7 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) mds->security.poll_tmo_secs = min(15 * 60, timeout); schedule_delayed_work(&mds->security.poll_dwork, timeout * HZ); } - mutex_unlock(&mds->mbox_mutex); + mutex_unlock(&cxl_mbox->mbox_mutex); } /** @@ -192,17 +195,18 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) * not need to coordinate with each other. The driver only uses the primary * mailbox. */ -static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, +static int __cxl_pci_mbox_send_cmd(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *mbox_cmd) { - struct cxl_dev_state *cxlds = &mds->cxlds; + struct cxl_dev_state *cxlds = mbox_to_cxlds(cxl_mbox); + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); void __iomem *payload = cxlds->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET; struct device *dev = cxlds->dev; u64 cmd_reg, status_reg; size_t out_len; int rc; - lockdep_assert_held(&mds->mbox_mutex); + lockdep_assert_held(&cxl_mbox->mbox_mutex); /* * Here are the steps from 8.2.8.4 of the CXL 2.0 spec. 
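For illustration, a minimal sketch of how a transport plugs into the generalized mailbox context after this refactor (the my_* names are invented for the example; the real flows are cxl_pci_setup_mailbox() below and the mock driver in tools/testing/cxl). The driver initializes the embedded context with cxl_mailbox_init(), publishes its payload size and send hook, and the hook recovers driver state from the host's drvdata via mbox_to_cxlds():

static int my_mbox_send(struct cxl_mailbox *cxl_mbox,
                        struct cxl_mbox_cmd *cmd)
{
        /* valid only because the host driver stored cxlds as drvdata */
        struct cxl_dev_state *cxlds = mbox_to_cxlds(cxl_mbox);

        dev_dbg(cxlds->dev, "sending opcode %#x\n", cmd->opcode);
        /* transport-specific doorbell ring / payload copy goes here */
        return 0;
}

static int my_setup_mailbox(struct cxl_dev_state *cxlds)
{
        struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
        int rc;

        rc = cxl_mailbox_init(cxl_mbox, cxlds->dev);
        if (rc)
                return rc;

        cxl_mbox->mbox_send = my_mbox_send;
        cxl_mbox->payload_size = SZ_4K; /* transport-specific, 256B..1MB */
        return 0;
}

Callers of the hook serialize on cxl_mbox->mbox_mutex, as cxl_pci_mbox_send() does below.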
@@ -315,10 +319,10 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, timeout = mbox_cmd->poll_interval_ms; for (i = 0; i < mbox_cmd->poll_count; i++) { - if (rcuwait_wait_event_timeout(&mds->mbox_wait, - cxl_mbox_background_complete(cxlds), - TASK_UNINTERRUPTIBLE, - msecs_to_jiffies(timeout)) > 0) + if (rcuwait_wait_event_timeout(&cxl_mbox->mbox_wait, + cxl_mbox_background_complete(cxlds), + TASK_UNINTERRUPTIBLE, + msecs_to_jiffies(timeout)) > 0) break; } @@ -360,7 +364,7 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, */ size_t n; - n = min3(mbox_cmd->size_out, mds->payload_size, out_len); + n = min3(mbox_cmd->size_out, cxl_mbox->payload_size, out_len); memcpy_fromio(mbox_cmd->payload_out, payload, n); mbox_cmd->size_out = n; } else { @@ -370,14 +374,14 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, return 0; } -static int cxl_pci_mbox_send(struct cxl_memdev_state *mds, +static int cxl_pci_mbox_send(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd) { int rc; - mutex_lock_io(&mds->mbox_mutex); - rc = __cxl_pci_mbox_send_cmd(mds, cmd); - mutex_unlock(&mds->mbox_mutex); + mutex_lock_io(&cxl_mbox->mbox_mutex); + rc = __cxl_pci_mbox_send_cmd(cxl_mbox, cmd); + mutex_unlock(&cxl_mbox->mbox_mutex); return rc; } @@ -385,6 +389,7 @@ static int cxl_pci_mbox_send(struct cxl_memdev_state *mds, static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) { struct cxl_dev_state *cxlds = &mds->cxlds; + struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox; const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET); struct device *dev = cxlds->dev; unsigned long timeout; @@ -417,8 +422,8 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) return -ETIMEDOUT; } - mds->mbox_send = cxl_pci_mbox_send; - mds->payload_size = + cxl_mbox->mbox_send = cxl_pci_mbox_send; + cxl_mbox->payload_size = 1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap); /* @@ -428,16 +433,15 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) * there's no point in going forward. If the size is too large, there's * no harm is soft limiting it. */ - mds->payload_size = min_t(size_t, mds->payload_size, SZ_1M); - if (mds->payload_size < 256) { + cxl_mbox->payload_size = min_t(size_t, cxl_mbox->payload_size, SZ_1M); + if (cxl_mbox->payload_size < 256) { dev_err(dev, "Mailbox is too small (%zub)", - mds->payload_size); + cxl_mbox->payload_size); return -ENXIO; } - dev_dbg(dev, "Mailbox payload sized %zu", mds->payload_size); + dev_dbg(dev, "Mailbox payload sized %zu", cxl_mbox->payload_size); - rcuwait_init(&mds->mbox_wait); INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work); /* background command interrupts are optional */ @@ -578,9 +582,10 @@ static void free_event_buf(void *buf) */ static int cxl_mem_alloc_event_buf(struct cxl_memdev_state *mds) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_get_event_payload *buf; - buf = kvmalloc(mds->payload_size, GFP_KERNEL); + buf = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); if (!buf) return -ENOMEM; mds->event.buf = buf; @@ -786,6 +791,17 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge, return 0; } +static int cxl_pci_type3_init_mailbox(struct cxl_dev_state *cxlds) +{ + /* + * Fail the init if there's no mailbox. For a type3 this is out of spec. 
+ */ + if (!cxlds->reg_map.device_map.mbox.valid) + return -ENODEV; + + return cxl_mailbox_init(&cxlds->cxl_mbox, cxlds->dev); +} + static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus); @@ -846,6 +862,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) dev_dbg(&pdev->dev, "Failed to map RAS capability.\n"); + rc = cxl_pci_type3_init_mailbox(cxlds); + if (rc) + return rc; + rc = cxl_await_media_ready(cxlds); if (rc == 0) cxlds->media_ready = true; diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c index 4ef93da22335..3985ff9ce70e 100644 --- a/drivers/cxl/pmem.c +++ b/drivers/cxl/pmem.c @@ -102,13 +102,15 @@ static int cxl_pmem_get_config_size(struct cxl_memdev_state *mds, struct nd_cmd_get_config_size *cmd, unsigned int buf_len) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; + if (sizeof(*cmd) > buf_len) return -EINVAL; *cmd = (struct nd_cmd_get_config_size){ .config_size = mds->lsa_size, .max_xfer = - mds->payload_size - sizeof(struct cxl_mbox_set_lsa), + cxl_mbox->payload_size - sizeof(struct cxl_mbox_set_lsa), }; return 0; diff --git a/include/cxl/mailbox.h b/include/cxl/mailbox.h new file mode 100644 index 000000000000..bacd111e75f1 --- /dev/null +++ b/include/cxl/mailbox.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2024 Intel Corporation. */ +#ifndef __CXL_MBOX_H__ +#define __CXL_MBOX_H__ +#include + +struct cxl_mbox_cmd; + +/** + * struct cxl_mailbox - context for CXL mailbox operations + * @host: device that hosts the mailbox + * @payload_size: Size of space for payload + * (CXL 3.1 8.2.8.4.3 Mailbox Capabilities Register) + * @mbox_mutex: mutex protects device mailbox and firmware + * @mbox_wait: rcuwait for mailbox + * @mbox_send: @dev specific transport for transmitting mailbox commands + */ +struct cxl_mailbox { + struct device *host; + size_t payload_size; + struct mutex mbox_mutex; /* lock to protect mailbox context */ + struct rcuwait mbox_wait; + int (*mbox_send)(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd); +}; + +int cxl_mailbox_init(struct cxl_mailbox *cxl_mbox, struct device *host); + +#endif diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c index 129f179b0ac5..7b4b8a2aaf61 100644 --- a/tools/testing/cxl/test/mem.c +++ b/tools/testing/cxl/test/mem.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -534,6 +535,7 @@ static int mock_gsl(struct cxl_mbox_cmd *cmd) static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_mbox_get_log *gl = cmd->payload_in; u32 offset = le32_to_cpu(gl->offset); u32 length = le32_to_cpu(gl->length); @@ -542,7 +544,7 @@ static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd) if (cmd->size_in < sizeof(*gl)) return -EINVAL; - if (length > mds->payload_size) + if (length > cxl_mbox->payload_size) return -EINVAL; if (offset + length > sizeof(mock_cel)) return -EINVAL; @@ -617,12 +619,13 @@ void cxl_mockmem_sanitize_work(struct work_struct *work) { struct cxl_memdev_state *mds = container_of(work, typeof(*mds), security.poll_dwork.work); + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; - mutex_lock(&mds->mbox_mutex); + mutex_lock(&cxl_mbox->mbox_mutex); if (mds->security.sanitize_node) sysfs_notify_dirent(mds->security.sanitize_node); mds->security.sanitize_active = false; - 
mutex_unlock(&mds->mbox_mutex); + mutex_unlock(&cxl_mbox->mbox_mutex); dev_dbg(mds->cxlds.dev, "sanitize complete\n"); } @@ -631,6 +634,7 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata, struct cxl_mbox_cmd *cmd) { struct cxl_memdev_state *mds = mdata->mds; + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; int rc = 0; if (cmd->size_in != 0) @@ -648,14 +652,14 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata, return -ENXIO; } - mutex_lock(&mds->mbox_mutex); + mutex_lock(&cxl_mbox->mbox_mutex); if (schedule_delayed_work(&mds->security.poll_dwork, msecs_to_jiffies(mdata->sanitize_timeout))) { mds->security.sanitize_active = true; dev_dbg(mds->cxlds.dev, "sanitize issued\n"); } else rc = -EBUSY; - mutex_unlock(&mds->mbox_mutex); + mutex_unlock(&cxl_mbox->mbox_mutex); return rc; } @@ -1333,12 +1337,13 @@ static int mock_activate_fw(struct cxl_mockmem_data *mdata, return -EINVAL; } -static int cxl_mock_mbox_send(struct cxl_memdev_state *mds, +static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd) { - struct cxl_dev_state *cxlds = &mds->cxlds; - struct device *dev = cxlds->dev; + struct device *dev = cxl_mbox->host; struct cxl_mockmem_data *mdata = dev_get_drvdata(dev); + struct cxl_memdev_state *mds = mdata->mds; + struct cxl_dev_state *cxlds = &mds->cxlds; int rc = -EIO; switch (cmd->opcode) { @@ -1453,6 +1458,11 @@ static ssize_t event_trigger_store(struct device *dev, } static DEVICE_ATTR_WO(event_trigger); +static int cxl_mock_mailbox_create(struct cxl_dev_state *cxlds) +{ + return cxl_mailbox_init(&cxlds->cxl_mbox, cxlds->dev); +} + static int cxl_mock_mem_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; @@ -1460,6 +1470,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev) struct cxl_memdev_state *mds; struct cxl_dev_state *cxlds; struct cxl_mockmem_data *mdata; + struct cxl_mailbox *cxl_mbox; int rc; mdata = devm_kzalloc(dev, sizeof(*mdata), GFP_KERNEL); @@ -1487,13 +1498,18 @@ static int cxl_mock_mem_probe(struct platform_device *pdev) if (IS_ERR(mds)) return PTR_ERR(mds); + cxlds = &mds->cxlds; + rc = cxl_mock_mailbox_create(cxlds); + if (rc) + return rc; + + cxl_mbox = &mds->cxlds.cxl_mbox; mdata->mds = mds; - mds->mbox_send = cxl_mock_mbox_send; - mds->payload_size = SZ_4K; + cxl_mbox->mbox_send = cxl_mock_mbox_send; + cxl_mbox->payload_size = SZ_4K; mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf; INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mockmem_sanitize_work); - cxlds = &mds->cxlds; cxlds->serial = pdev->id; if (is_rcd(pdev)) cxlds->rcd = true; From patchwork Wed Oct 9 12:41:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 834018 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1AE32199FDA; Wed, 9 Oct 2024 12:43:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=185.176.79.56 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477801; cv=none; b=rW7M71f9SC6mXx1r3xq8WXNW8jJmzMNw72fvGvRhdOiaGv6qLPlIBVWBn1GMiOxRYv+4b9mgo3EWZrIUEhX2bsGm6GN3PC3moLlqp5u+4NJ8oimrUk8xrx67VdO06EtIlQ+MasmzHHKb9AMmVBgwVbt97gHHEfRD6KSZOz7Ypqg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477801; 
c=relaxed/simple; bh=xI1/mDEUVcgehJeYztwKZRX3KZjtzcwIo/1ptpPrxpk=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=t2f9Mmo4Bpx1m6CRJACt63K1ZR3vPa9s1HU4w5j50ydNCcs5Wsrs2bh5JnAVO00jQVYzI+0eQGLMBRQyeEhmDgqemzMYs1FP+0V6DoABvv8HU1WewABP5dsnNGkGO9wWRsuh+M583jdlIoM7zpOKd0KONWPEoHD2hK8VcURr4Pg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=185.176.79.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.216]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4XNsxH4KKQz6GD4k; Wed, 9 Oct 2024 20:42:59 +0800 (CST) Received: from frapeml500007.china.huawei.com (unknown [7.182.85.172]) by mail.maildlp.com (Postfix) with ESMTPS id 47E19140C98; Wed, 9 Oct 2024 20:43:17 +0800 (CST) Received: from P_UKIT01-A7bmah.china.huawei.com (10.48.152.209) by frapeml500007.china.huawei.com (7.182.85.172) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.39; Wed, 9 Oct 2024 14:43:15 +0200 From: To: , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH v13 06/18] cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input Date: Wed, 9 Oct 2024 13:41:07 +0100 Message-ID: <20241009124120.1124-7-shiju.jose@huawei.com> X-Mailer: git-send-email 2.43.0.windows.1 In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com> References: <20241009124120.1124-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To frapeml500007.china.huawei.com (7.182.85.172) From: Dave Jiang With the CXL mailbox context split out, cxl_internal_send_cmd() can take 'struct cxl_mailbox' as an input parameter rather than 'struct cxl_memdev_state'. Change input parameter for cxl_internal_send_cmd() and fixup all impacted call sites. Reviewed-by: Fan Ni Reviewed-by: Jonathan Cameron Reviewed-by: Alison Schofield Link: https://patch.msgid.link/20240724185649.2574627-3-dave.jiang@intel.com Signed-off-by: Dave Jiang Reviewed-by: Ira Weiny --- drivers/cxl/core/mbox.c | 38 ++++++++++++++++++++------------------ drivers/cxl/core/memdev.c | 23 +++++++++++++---------- drivers/cxl/cxlmem.h | 2 +- drivers/cxl/pci.c | 6 ++++-- drivers/cxl/pmem.c | 6 ++++-- drivers/cxl/security.c | 23 ++++++++++++----------- 6 files changed, 54 insertions(+), 44 deletions(-) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 4d37462125b9..e77fb28cb6b2 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -225,7 +225,7 @@ static const char *cxl_mem_opcode_to_name(u16 opcode) /** * cxl_internal_send_cmd() - Kernel internal interface to send a mailbox command - * @mds: The driver data for the operation + * @cxl_mbox: CXL mailbox context * @mbox_cmd: initialized command to execute * * Context: Any context. @@ -241,10 +241,9 @@ static const char *cxl_mem_opcode_to_name(u16 opcode) * error. While this distinction can be useful for commands from userspace, the * kernel will only be able to use results when both are successful.
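As a concrete illustration of the converted calling convention (a hedged sketch modeled on cxl_set_timestamp() and the other call sites in this patch; my_get_timestamp is an invented wrapper), callers now derive the mailbox context from the device state and pass that instead of the memdev state:

static int my_get_timestamp(struct cxl_memdev_state *mds, __le64 *ts)
{
        struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
        struct cxl_mbox_cmd mbox_cmd = {
                .opcode = CXL_MBOX_OP_GET_TIMESTAMP,
                .size_out = sizeof(*ts),
                .payload_out = ts,
        };

        /* the mailbox context, not the memdev state, is passed in */
        return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}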
*/ -int cxl_internal_send_cmd(struct cxl_memdev_state *mds, +int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *mbox_cmd) { - struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; size_t out_size, min_out; int rc; @@ -689,7 +688,7 @@ static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid, .payload_out = out, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); /* * The output payload length that indicates the number @@ -775,7 +774,7 @@ static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state * /* At least the record number field must be valid */ .min_out = 2, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) { kvfree(ret); return ERR_PTR(rc); @@ -964,7 +963,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, if (i == max_handles) { payload->nr_recs = i; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) goto free_pl; i = 0; @@ -975,7 +974,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, if (i) { payload->nr_recs = i; mbox_cmd.size_in = struct_size(payload, handles, i); - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) goto free_pl; } @@ -1009,7 +1008,7 @@ static void cxl_mem_get_records_log(struct cxl_memdev_state *mds, .min_out = struct_size(payload, records, 0), }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) { dev_err_ratelimited(dev, "Event log '%d': Failed to query event records : %d", @@ -1080,6 +1079,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, CXL); */ static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_mbox_get_partition_info pi; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -1089,7 +1089,7 @@ static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds) .size_out = sizeof(pi), .payload_out = &pi, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) return rc; @@ -1116,6 +1116,7 @@ static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds) */ int cxl_dev_state_identify(struct cxl_memdev_state *mds) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; /* See CXL 2.0 Table 175 Identify Memory Device Output Payload */ struct cxl_mbox_identify id; struct cxl_mbox_cmd mbox_cmd; @@ -1130,7 +1131,7 @@ int cxl_dev_state_identify(struct cxl_memdev_state *mds) .size_out = sizeof(id), .payload_out = &id, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) return rc; @@ -1158,6 +1159,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_dev_state_identify, CXL); static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; int rc; u32 sec_out = 0; struct cxl_get_security_output { @@ -1169,14 +1171,13 @@ static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd) .size_out = sizeof(out), }; struct cxl_mbox_cmd mbox_cmd = { .opcode = cmd }; - struct cxl_dev_state *cxlds = &mds->cxlds; if (cmd != CXL_MBOX_OP_SANITIZE && cmd != CXL_MBOX_OP_SECURE_ERASE) return -EINVAL; - rc = cxl_internal_send_cmd(mds, &sec_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &sec_cmd); if (rc < 0) { - dev_err(cxlds->dev, "Failed to get security state : %d", rc); + dev_err(cxl_mbox->host, 
"Failed to get security state : %d", rc); return rc; } @@ -1193,9 +1194,9 @@ static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd) sec_out & CXL_PMEM_SEC_STATE_LOCKED) return -EINVAL; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) { - dev_err(cxlds->dev, "Failed to sanitize device : %d", rc); + dev_err(cxl_mbox->host, "Failed to sanitize device : %d", rc); return rc; } @@ -1310,6 +1311,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_create_range_info, CXL); int cxl_set_timestamp(struct cxl_memdev_state *mds) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_mbox_cmd mbox_cmd; struct cxl_mbox_set_timestamp_in pi; int rc; @@ -1321,7 +1323,7 @@ int cxl_set_timestamp(struct cxl_memdev_state *mds) .payload_in = &pi, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); /* * Command is optional. Devices may have another way of providing * a timestamp, or may return all 0s in timestamp fields. @@ -1338,7 +1340,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr) { struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); - struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; struct cxl_mbox_poison_out *po; struct cxl_mbox_poison_in pi; int nr_records = 0; @@ -1362,7 +1364,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, .min_out = struct_size(po, record, 0), }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) break; diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index 05bb84cb1274..84fefb76dafa 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -278,7 +278,7 @@ static int cxl_validate_poison_dpa(struct cxl_memdev *cxlmd, u64 dpa) int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa) { - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; struct cxl_mbox_inject_poison inject; struct cxl_poison_record record; struct cxl_mbox_cmd mbox_cmd; @@ -308,13 +308,13 @@ int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa) .size_in = sizeof(inject), .payload_in = &inject, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) goto out; cxlr = cxl_dpa_to_region(cxlmd, dpa); if (cxlr) - dev_warn_once(mds->cxlds.dev, + dev_warn_once(cxl_mbox->host, "poison inject dpa:%#llx region: %s\n", dpa, dev_name(&cxlr->dev)); @@ -333,7 +333,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_inject_poison, CXL); int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa) { - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; struct cxl_mbox_clear_poison clear; struct cxl_poison_record record; struct cxl_mbox_cmd mbox_cmd; @@ -372,13 +372,13 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa) .payload_in = &clear, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) goto out; cxlr = cxl_dpa_to_region(cxlmd, dpa); if (cxlr) - dev_warn_once(mds->cxlds.dev, + dev_warn_once(cxl_mbox->host, "poison clear dpa:%#llx region: %s\n", dpa, dev_name(&cxlr->dev)); @@ -715,6 +715,7 @@ static int cxl_memdev_release_file(struct inode *inode, struct file *file) */ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) { + struct cxl_mailbox 
*cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_mbox_get_fw_info info; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -725,7 +726,7 @@ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) .payload_out = &info, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) return rc; @@ -749,6 +750,7 @@ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) */ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_mbox_activate_fw activate; struct cxl_mbox_cmd mbox_cmd; @@ -765,7 +767,7 @@ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) activate.action = CXL_FW_ACTIVATE_OFFLINE; activate.slot = slot; - return cxl_internal_send_cmd(mds, &mbox_cmd); + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); } /** @@ -780,6 +782,7 @@ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) */ static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_mbox_transfer_fw *transfer; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -799,7 +802,7 @@ static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds) transfer->action = CXL_FW_TRANSFER_ACTION_ABORT; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); kfree(transfer); return rc; } @@ -924,7 +927,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, .poll_count = 30, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) { rc = FW_UPLOAD_ERR_RW_ERROR; goto out_free; diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 4ead415f8971..c7c423ae16ab 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -811,7 +811,7 @@ enum { CXL_PMEM_SEC_PASS_USER, }; -int cxl_internal_send_cmd(struct cxl_memdev_state *mds, +int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd); int cxl_dev_state_identify(struct cxl_memdev_state *mds); int cxl_await_media_ready(struct cxl_dev_state *cxlds); diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index eca6f533386b..3b747aff2f4b 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -658,6 +658,7 @@ static int cxl_event_req_irq(struct cxl_dev_state *cxlds, u8 setting) static int cxl_event_get_int_policy(struct cxl_memdev_state *mds, struct cxl_event_interrupt_policy *policy) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_mbox_cmd mbox_cmd = { .opcode = CXL_MBOX_OP_GET_EVT_INT_POLICY, .payload_out = policy, @@ -665,7 +666,7 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds, }; int rc; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) dev_err(mds->cxlds.dev, "Failed to get event interrupt policy : %d", rc); @@ -676,6 +677,7 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds, static int cxl_event_config_msgnums(struct cxl_memdev_state *mds, struct cxl_event_interrupt_policy *policy) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -692,7 +694,7 @@ static int cxl_event_config_msgnums(struct cxl_memdev_state *mds, .size_in = sizeof(*policy), }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) { dev_err(mds->cxlds.dev, "Failed to set event interrupt policy : %d", rc); diff --git 
a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c index 3985ff9ce70e..5453c0faa295 100644 --- a/drivers/cxl/pmem.c +++ b/drivers/cxl/pmem.c @@ -120,6 +120,7 @@ static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds, struct nd_cmd_get_config_data_hdr *cmd, unsigned int buf_len) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_mbox_get_lsa get_lsa; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -141,7 +142,7 @@ static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds, .payload_out = cmd->out_buf, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); cmd->status = 0; return rc; @@ -151,6 +152,7 @@ static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds, struct nd_cmd_set_config_hdr *cmd, unsigned int buf_len) { + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; struct cxl_mbox_set_lsa *set_lsa; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -177,7 +179,7 @@ static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds, .size_in = struct_size(set_lsa, data, cmd->in_length), }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); /* * Set "firmware" status (4-packed bytes at the end of the input diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c index 21856a3f408e..452d1a9b9148 100644 --- a/drivers/cxl/security.c +++ b/drivers/cxl/security.c @@ -14,6 +14,7 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm, { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); unsigned long security_flags = 0; struct cxl_get_security_output { @@ -29,7 +30,7 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm, .payload_out = &out, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) return 0; @@ -70,7 +71,7 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm, { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; struct cxl_mbox_cmd mbox_cmd; struct cxl_set_pass set_pass; @@ -87,7 +88,7 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm, .payload_in = &set_pass, }; - return cxl_internal_send_cmd(mds, &mbox_cmd); + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); } static int __cxl_pmem_security_disable(struct nvdimm *nvdimm, @@ -96,7 +97,7 @@ static int __cxl_pmem_security_disable(struct nvdimm *nvdimm, { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; struct cxl_disable_pass dis_pass; struct cxl_mbox_cmd mbox_cmd; @@ -112,7 +113,7 @@ static int __cxl_pmem_security_disable(struct nvdimm *nvdimm, .payload_in = &dis_pass, }; - return cxl_internal_send_cmd(mds, &mbox_cmd); + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); } static int cxl_pmem_security_disable(struct nvdimm *nvdimm, @@ -131,12 +132,12 @@ static int cxl_pmem_security_freeze(struct nvdimm *nvdimm) { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + 
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; struct cxl_mbox_cmd mbox_cmd = { .opcode = CXL_MBOX_OP_FREEZE_SECURITY, }; - return cxl_internal_send_cmd(mds, &mbox_cmd); + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); } static int cxl_pmem_security_unlock(struct nvdimm *nvdimm, @@ -144,7 +145,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm, { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; u8 pass[NVDIMM_PASSPHRASE_LEN]; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -156,7 +157,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm, .payload_in = pass, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) return rc; @@ -169,7 +170,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm, { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; struct cxl_mbox_cmd mbox_cmd; struct cxl_pass_erase erase; int rc; @@ -185,7 +186,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm, .payload_in = &erase, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) return rc; From patchwork Wed Oct 9 12:41:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 834196 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0661E19A281; Wed, 9 Oct 2024 12:43:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=185.176.79.56 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477803; cv=none; b=Oi6qs4iwhs2uUjUqpIC6c5imKs48IEZXtOEMg4p+6wRfROWUqvcJjgFpDSLv23jT/SuShAmLgDuv6BmyGFBRYbf73LnfmNIrcXbU3t8uS4CAs9mCMCivtsoNpytVZvrDu9syGwOknpGiR7VbFfl5DipV8JrgH+aTDdfI+ryfTZ8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477803; c=relaxed/simple; bh=wMOecyixhkrH4EpMxTPGgDwywU7MxzOWmNoUd6gCa1M=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=DjHgUoyIzUgzHuDElwliGE1kvXEiSOB94PfpzTZOuM2Bl0isi5QvUUwzhGSFyHpfJMh934K588OQIadtuVkCKtXC+uoEX2ZEBmGUUdCw6szcvaO3mm/PrnkYsWEVoUVsKbJYAJgUbRCMrEW5CGhoIM1b7fWeoj2MUMsekjYJ2OA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=185.176.79.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.216]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4XNsxL1nL3z6GD2T; Wed, 9 Oct 2024 20:43:02 +0800 (CST) Received: from frapeml500007.china.huawei.com (unknown [7.182.85.172]) by mail.maildlp.com (Postfix) with ESMTPS id 00954140A90; Wed, 9 Oct 2024 20:43:20 +0800 (CST) Received: from P_UKIT01-A7bmah.china.huawei.com (10.48.152.209) by 
frapeml500007.china.huawei.com (7.182.85.172) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.39; Wed, 9 Oct 2024 14:43:18 +0200 From: To: , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH v13 07/18] cxl: Add Get Supported Features command for kernel usage Date: Wed, 9 Oct 2024 13:41:08 +0100 Message-ID: <20241009124120.1124-8-shiju.jose@huawei.com> X-Mailer: git-send-email 2.43.0.windows.1 In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com> References: <20241009124120.1124-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To frapeml500007.china.huawei.com (7.182.85.172) From: Dave Jiang CXL spec r3.1 8.2.9.6.1 Get Supported Features (Opcode 0500h) The command retrieves the list of supported device-specific features (identified by UUID) and general information about each Feature. The driver will retrieve the feature entries in order to make checks and provide information for the Get Feature and Set Feature commands. One of the main pieces of information retrieved is the effects a Set Feature command would have for a particular feature. Signed-off-by: Dave Jiang Signed-off-by: Shiju Jose Reviewed-by: Jonathan Cameron --- drivers/cxl/core/mbox.c | 175 +++++++++++++++++++++++++++++++++++ drivers/cxl/cxlmem.h | 47 ++++++++++ drivers/cxl/pci.c | 4 + include/cxl/mailbox.h | 4 + include/uapi/linux/cxl_mem.h | 1 + 5 files changed, 231 insertions(+) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index e77fb28cb6b2..6493cad8c325 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -67,6 +67,7 @@ static struct cxl_mem_command cxl_mem_commands[CXL_MEM_COMMAND_ID_MAX] = { CXL_CMD(SET_SHUTDOWN_STATE, 0x1, 0, 0), CXL_CMD(GET_SCAN_MEDIA_CAPS, 0x10, 0x4, 0), CXL_CMD(GET_TIMESTAMP, 0, 0x8, 0), + CXL_CMD(GET_SUPPORTED_FEATURES, 0x8, CXL_VARIABLE_PAYLOAD, 0), }; /* @@ -795,6 +796,180 @@ static const uuid_t log_uuid[] = { [VENDOR_DEBUG_UUID] = DEFINE_CXL_VENDOR_DEBUG_UUID, }; +static void cxl_free_features(void *features) +{ + kvfree(features); +} + +static int cxl_get_supported_features_count(struct cxl_dev_state *cxlds) +{ + struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox; + struct cxl_mbox_get_sup_feats_out mbox_out; + struct cxl_mbox_get_sup_feats_in mbox_in; + struct cxl_mbox_cmd mbox_cmd; + int rc; + + memset(&mbox_in, 0, sizeof(mbox_in)); + mbox_in.count = sizeof(mbox_out); + memset(&mbox_out, 0, sizeof(mbox_out)); + mbox_cmd = (struct cxl_mbox_cmd) { + .opcode = CXL_MBOX_OP_GET_SUPPORTED_FEATURES, + .size_in = sizeof(mbox_in), + .payload_in = &mbox_in, + .size_out = sizeof(mbox_out), + .payload_out = &mbox_out, + .min_out = sizeof(mbox_out), + }; + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); + if (rc < 0) + return rc; + + cxl_mbox->num_features = le16_to_cpu(mbox_out.supported_feats); + if (!cxl_mbox->num_features) + return -ENOENT; + + return 0; +} + +int cxl_get_supported_features(struct cxl_memdev_state *mds) +{ + int remain_feats, max_size, max_feats, start, rc; + struct cxl_dev_state *cxlds = &mds->cxlds; + struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox; + int feat_size = sizeof(struct cxl_feat_entry); + struct cxl_mbox_get_sup_feats_out *mbox_out; + struct cxl_mbox_get_sup_feats_in mbox_in; + int hdr_size = sizeof(*mbox_out); + struct cxl_mbox_cmd mbox_cmd; + struct cxl_mem_command
*cmd; + void *ptr; + + /* Get supported features is optional, need to check */ + cmd = cxl_mem_find_command(CXL_MBOX_OP_GET_SUPPORTED_FEATURES); + if (!cmd) + return -EOPNOTSUPP; + if (!test_bit(cmd->info.id, mds->enabled_cmds)) + return -EOPNOTSUPP; + + rc = cxl_get_supported_features_count(cxlds); + if (rc) + return rc; + + struct cxl_feat_entry *entries __free(kvfree) = + kvmalloc(cxl_mbox->num_features * feat_size, GFP_KERNEL); + + if (!entries) + return -ENOMEM; + + cxl_mbox->entries = no_free_ptr(entries); + rc = devm_add_action_or_reset(cxl_mbox->host, cxl_free_features, + cxl_mbox->entries); + if (rc) + return rc; + + max_size = cxl_mbox->payload_size - hdr_size; + /* max feat entries that can fit in mailbox max payload size */ + max_feats = max_size / feat_size; + ptr = &cxl_mbox->entries[0]; + + mbox_out = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); + if (!mbox_out) + return -ENOMEM; + + start = 0; + remain_feats = cxl_mbox->num_features; + do { + int retrieved, alloc_size, copy_feats; + + if (remain_feats > max_feats) { + alloc_size = sizeof(*mbox_out) + max_feats * feat_size; + remain_feats = remain_feats - max_feats; + copy_feats = max_feats; + } else { + alloc_size = sizeof(*mbox_out) + remain_feats * feat_size; + copy_feats = remain_feats; + remain_feats = 0; + } + + memset(&mbox_in, 0, sizeof(mbox_in)); + mbox_in.count = alloc_size; + mbox_in.start_idx = start; + memset(mbox_out, 0, alloc_size); + mbox_cmd = (struct cxl_mbox_cmd) { + .opcode = CXL_MBOX_OP_GET_SUPPORTED_FEATURES, + .size_in = sizeof(mbox_in), + .payload_in = &mbox_in, + .size_out = alloc_size, + .payload_out = mbox_out, + .min_out = hdr_size, + }; + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); + if (rc < 0) + goto err; + if (mbox_cmd.size_out <= hdr_size) { + rc = -ENXIO; + goto err; + } + + /* + * Make sure retrieved out buffer is multiple of feature + * entries. + */ + retrieved = mbox_cmd.size_out - hdr_size; + if (retrieved % feat_size) { + rc = -ENXIO; + goto err; + } + + /* + * If the reported output entries * defined entry size != + * retrieved output bytes, then the output package is incorrect. + */ + if (mbox_out->num_entries * feat_size != retrieved) { + rc = -ENXIO; + goto err; + } + + memcpy(ptr, mbox_out->ents, retrieved); + ptr += retrieved; + /* + * If the number of output entries is less than expected, add the + * remaining entries to the next batch. + */ + remain_feats += copy_feats - mbox_out->num_entries; + start += mbox_out->num_entries; + } while (remain_feats); + + kfree(mbox_out); + return 0; + +err: + kfree(mbox_out); + cxl_mbox->num_features = 0; + return rc; +} +EXPORT_SYMBOL_NS_GPL(cxl_get_supported_features, CXL); + +int cxl_get_supported_feature_entry(struct cxl_memdev_state *mds, const uuid_t *feat_uuid, + struct cxl_feat_entry *feat_entry_out) +{ + struct cxl_dev_state *cxlds = &mds->cxlds; + struct cxl_feat_entry *feat_entry; + int count; + + /* Check CXL dev supports the feature */ + feat_entry = &cxlds->cxl_mbox.entries[0]; + for (count = 0; count < cxlds->cxl_mbox.num_features; count++, feat_entry++) { + if (uuid_equal(&feat_entry->uuid, feat_uuid)) { + memcpy(feat_entry_out, feat_entry, sizeof(*feat_entry_out)); + return 0; + } + } + + return -EOPNOTSUPP; +} +EXPORT_SYMBOL_NS_GPL(cxl_get_supported_feature_entry, CXL); + /** * cxl_enumerate_cmds() - Enumerate commands for a device. 
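For illustration, a short sketch of how a kernel consumer might use cxl_get_supported_feature_entry() to probe for a feature before using it (the UUID and the my_* names are invented for the example; CXL_FEAT_ENTRY_FLAG_CHANGABLE and struct cxl_feat_entry are defined in the cxlmem.h hunk below):

/* made-up example UUID, not a real CXL feature UUID */
static const uuid_t my_feat_uuid =
        UUID_INIT(0x12345678, 0x9abc, 0xdef0,
                  0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0);

static bool my_feat_is_usable(struct cxl_memdev_state *mds)
{
        struct cxl_feat_entry entry;

        /* returns -EOPNOTSUPP if the device does not list the UUID */
        if (cxl_get_supported_feature_entry(mds, &my_feat_uuid, &entry))
                return false;

        /* only features with changeable attributes can be set */
        return le32_to_cpu(entry.attr_flags) & CXL_FEAT_ENTRY_FLAG_CHANGABLE;
}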
* @mds: The driver data for the operation diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index c7c423ae16ab..94ce4645e605 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -528,6 +528,7 @@ enum cxl_opcode { CXL_MBOX_OP_GET_LOG_CAPS = 0x0402, CXL_MBOX_OP_CLEAR_LOG = 0x0403, CXL_MBOX_OP_GET_SUP_LOG_SUBLIST = 0x0405, + CXL_MBOX_OP_GET_SUPPORTED_FEATURES = 0x0500, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, @@ -811,6 +812,48 @@ enum { CXL_PMEM_SEC_PASS_USER, }; +/* Get Supported Features (0x500h) CXL r3.1 8.2.9.6.1 */ +struct cxl_mbox_get_sup_feats_in { + __le32 count; + __le16 start_idx; + u8 reserved[2]; +} __packed; + +/* Supported Feature Entry : Payload out attribute flags */ +#define CXL_FEAT_ENTRY_FLAG_CHANGABLE BIT(0) +#define CXL_FEAT_ENTRY_FLAG_DEEPEST_RESET_PERSISTENCE_MASK GENMASK(3, 1) +#define CXL_FEAT_ENTRY_FLAG_PERSIST_ACROSS_FIRMWARE_UPDATE BIT(4) +#define CXL_FEAT_ENTRY_FLAG_SUPPORT_DEFAULT_SELECTION BIT(5) +#define CXL_FEAT_ENTRY_FLAG_SUPPORT_SAVED_SELECTION BIT(6) + +enum cxl_feat_attr_value_persistence { + CXL_FEAT_ATTR_VALUE_PERSISTENCE_NONE, + CXL_FEAT_ATTR_VALUE_PERSISTENCE_CXL_RESET, + CXL_FEAT_ATTR_VALUE_PERSISTENCE_HOT_RESET, + CXL_FEAT_ATTR_VALUE_PERSISTENCE_WARM_RESET, + CXL_FEAT_ATTR_VALUE_PERSISTENCE_COLD_RESET, + CXL_FEAT_ATTR_VALUE_PERSISTENCE_MAX +}; + +struct cxl_feat_entry { + uuid_t uuid; + __le16 id; + __le16 get_feat_size; + __le16 set_feat_size; + __le32 attr_flags; + u8 get_feat_ver; + u8 set_feat_ver; + __le16 set_effects; + u8 reserved[18]; +} __packed; + +struct cxl_mbox_get_sup_feats_out { + __le16 num_entries; + __le16 supported_feats; + u8 reserved[4]; + struct cxl_feat_entry ents[] __counted_by_le(supported_feats); +} __packed; + int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd); int cxl_dev_state_identify(struct cxl_memdev_state *mds); @@ -870,4 +913,8 @@ struct cxl_hdm { struct seq_file; struct dentry *cxl_debugfs_create_dir(const char *dir); void cxl_dpa_debug(struct seq_file *file, struct cxl_dev_state *cxlds); + +int cxl_get_supported_features(struct cxl_memdev_state *mds); +int cxl_get_supported_feature_entry(struct cxl_memdev_state *mds, const uuid_t *feat_uuid, + struct cxl_feat_entry *feat_entry_out); #endif /* __CXL_MEM_H__ */ diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 3b747aff2f4b..96be45159119 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -884,6 +884,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; + rc = cxl_get_supported_features(mds); + if (rc) + dev_dbg(&pdev->dev, "No features enumerated.\n"); + rc = cxl_set_timestamp(mds); if (rc) return rc; diff --git a/include/cxl/mailbox.h b/include/cxl/mailbox.h index bacd111e75f1..cc66afec3473 100644 --- a/include/cxl/mailbox.h +++ b/include/cxl/mailbox.h @@ -14,6 +14,8 @@ struct cxl_mbox_cmd; * @mbox_mutex: mutex protects device mailbox and firmware * @mbox_wait: rcuwait for mailbox * @mbox_send: @dev specific transport for transmitting mailbox commands + * @num_features: number of supported features + * @entries: list of supported feature entries. 
*/ struct cxl_mailbox { struct device *host; @@ -21,6 +23,8 @@ struct cxl_mailbox { struct mutex mbox_mutex; /* lock to protect mailbox context */ struct rcuwait mbox_wait; int (*mbox_send)(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd); + int num_features; + struct cxl_feat_entry *entries; }; int cxl_mailbox_init(struct cxl_mailbox *cxl_mbox, struct device *host); diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index c6c0fe27495d..bd2535962f70 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -50,6 +50,7 @@ ___C(GET_LOG_CAPS, "Get Log Capabilities"), \ ___C(CLEAR_LOG, "Clear Log"), \ ___C(GET_SUP_LOG_SUBLIST, "Get Supported Logs Sub-List"), \ + ___C(GET_SUPPORTED_FEATURES, "Get Supported Features"), \ ___C(MAX, "invalid / last command") #define ___C(a, b) CXL_MEM_COMMAND_ID_##a From patchwork Wed Oct 9 12:41:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 834017 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B5AA619D8AD; Wed, 9 Oct 2024 12:43:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=185.176.79.56 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477806; cv=none; b=U3EzDn3T2b+7OLdpWHj1y2uDP6IUR3iqEIEvYfcjgp4ZNyMV6t4HFsEZba017hKMeeW6DgENzjkEquJyNTpHgyeeSQZ5M6BkFn7qBSZnyzUg78nIyJamW+kU34Rtku8lr0JgYHdKW7K0ww33BDMA6+icdJ2jctKkJP3n/ymRfYg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477806; c=relaxed/simple; bh=7DuUYoIDXxEJO8GKeGrCNoNRLeSfM+LuxPoTYZbXnlc=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=dhf8T/M5B0YE6OA7p9co34ompqQQbkbR2PaTTV4Prag3GUTCYBxFROe81irHnTduLxRw8nzPiXIFjcQlWFo7AbhQCQ8XRZ/uC2NpnzwRkiR771FJjfxOHiN70zZUYq/74ucHR4zQcAE6c0hZ4nu/UYjDIV2606g8HYAf0cCiH2E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=185.176.79.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.231]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4XNsxN710mz6GD4Y; Wed, 9 Oct 2024 20:43:04 +0800 (CST) Received: from frapeml500007.china.huawei.com (unknown [7.182.85.172]) by mail.maildlp.com (Postfix) with ESMTPS id B4C05140B3C; Wed, 9 Oct 2024 20:43:22 +0800 (CST) Received: from P_UKIT01-A7bmah.china.huawei.com (10.48.152.209) by frapeml500007.china.huawei.com (7.182.85.172) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.39; Wed, 9 Oct 2024 14:43:20 +0200 From: To: , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [PATCH v13 08/18] cxl/mbox: Add GET_FEATURE mailbox command Date: Wed, 9 Oct 2024 13:41:09 +0100 Message-ID: <20241009124120.1124-9-shiju.jose@huawei.com> X-Mailer: git-send-email 2.43.0.windows.1 In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com> References: <20241009124120.1124-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: 
linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To frapeml500007.china.huawei.com (7.182.85.172) From: Shiju Jose Add support for GET_FEATURE mailbox command. CXL spec 3.1 section 8.2.9.6 describes optional device-specific features. The settings of a feature can be retrieved using the Get Feature command. CXL spec 3.1 section 8.2.9.6.2 describes the Get Feature command. Reviewed-by: Fan Ni Signed-off-by: Shiju Jose Reviewed-by: Jonathan Cameron --- drivers/cxl/core/mbox.c | 41 +++++++++++++++++++++++++++++++++++++++++ drivers/cxl/cxlmem.h | 26 ++++++++++++++++++++++++++ 2 files changed, 67 insertions(+) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 6493cad8c325..eae374a90a9f 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -970,6 +970,47 @@ int cxl_get_supported_feature_entry(struct cxl_memdev_state *mds, const uuid_t * } EXPORT_SYMBOL_NS_GPL(cxl_get_supported_feature_entry, CXL); +size_t cxl_get_feature(struct cxl_memdev_state *mds, const uuid_t feat_uuid, + enum cxl_get_feat_selection selection, + void *feat_out, size_t feat_out_size) +{ + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; + size_t data_to_rd_size, size_out; + struct cxl_mbox_get_feat_in pi; + struct cxl_mbox_cmd mbox_cmd; + size_t data_rcvd_size = 0; + int rc; + + if (!feat_out || !feat_out_size) + return 0; + + size_out = min(feat_out_size, cxl_mbox->payload_size); + pi.uuid = feat_uuid; + pi.selection = selection; + do { + data_to_rd_size = min(feat_out_size - data_rcvd_size, + cxl_mbox->payload_size); + pi.offset = cpu_to_le16(data_rcvd_size); + pi.count = cpu_to_le16(data_to_rd_size); + + mbox_cmd = (struct cxl_mbox_cmd) { + .opcode = CXL_MBOX_OP_GET_FEATURE, + .size_in = sizeof(pi), + .payload_in = &pi, + .size_out = size_out, + .payload_out = feat_out + data_rcvd_size, + .min_out = data_to_rd_size, + }; + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); + if (rc < 0 || !mbox_cmd.size_out) + return 0; + data_rcvd_size += mbox_cmd.size_out; + } while (data_rcvd_size < feat_out_size); + + return data_rcvd_size; +} +EXPORT_SYMBOL_NS_GPL(cxl_get_feature, CXL); + /** * cxl_enumerate_cmds() - Enumerate commands for a device.
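To show the intended usage, a hedged sketch of a caller retrieving a feature's current value (the my_* name and buffer sizing are illustrative; in practice get_feat_size from the supported-features entry would size the buffer). cxl_get_feature() internally chunks the transfer to the mailbox payload size and returns the number of bytes received, or 0 on failure:

static int my_read_feature(struct cxl_memdev_state *mds,
                           const uuid_t *uuid, size_t size)
{
        size_t received;
        void *buf __free(kvfree) = kvzalloc(size, GFP_KERNEL);

        if (!buf)
                return -ENOMEM;

        received = cxl_get_feature(mds, *uuid,
                                   CXL_GET_FEAT_SEL_CURRENT_VALUE,
                                   buf, size);
        if (!received)
                return -ENXIO;

        /* parse the feature-specific payload in buf here */
        return 0;
}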
 * @mds: The driver data for the operation
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 94ce4645e605..c7ad63597709 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -529,6 +529,7 @@ enum cxl_opcode {
 	CXL_MBOX_OP_CLEAR_LOG = 0x0403,
 	CXL_MBOX_OP_GET_SUP_LOG_SUBLIST = 0x0405,
 	CXL_MBOX_OP_GET_SUPPORTED_FEATURES = 0x0500,
+	CXL_MBOX_OP_GET_FEATURE = 0x0501,
 	CXL_MBOX_OP_IDENTIFY = 0x4000,
 	CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100,
 	CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101,
@@ -854,6 +855,28 @@ struct cxl_mbox_get_sup_feats_out {
 	struct cxl_feat_entry ents[] __counted_by_le(supported_feats);
 } __packed;
 
+/*
+ * Get Feature CXL 3.1 Spec 8.2.9.6.2
+ */
+
+/*
+ * Get Feature input payload
+ * CXL rev 3.1 section 8.2.9.6.2 Table 8-99
+ */
+enum cxl_get_feat_selection {
+	CXL_GET_FEAT_SEL_CURRENT_VALUE,
+	CXL_GET_FEAT_SEL_DEFAULT_VALUE,
+	CXL_GET_FEAT_SEL_SAVED_VALUE,
+	CXL_GET_FEAT_SEL_MAX
+};
+
+struct cxl_mbox_get_feat_in {
+	uuid_t uuid;
+	__le16 offset;
+	__le16 count;
+	u8 selection;
+} __packed;
+
 int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox,
			  struct cxl_mbox_cmd *cmd);
 int cxl_dev_state_identify(struct cxl_memdev_state *mds);
@@ -917,4 +940,7 @@ void cxl_dpa_debug(struct seq_file *file, struct cxl_dev_state *cxlds);
 int cxl_get_supported_features(struct cxl_memdev_state *mds);
 int cxl_get_supported_feature_entry(struct cxl_memdev_state *mds,
				    const uuid_t *feat_uuid,
				    struct cxl_feat_entry *feat_entry_out);
+size_t cxl_get_feature(struct cxl_memdev_state *mds, const uuid_t feat_uuid,
+		       enum cxl_get_feat_selection selection,
+		       void *feat_out, size_t feat_out_size);
 #endif /* __CXL_MEM_H__ */
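For illustration, a hypothetical caller might retrieve a feature's current settings with cxl_get_feature() as in the following minimal sketch; the UUID and output size here are placeholders, not names defined by this series:

	/* Sketch: read the current value of one feature into a local buffer. */
	static int example_read_feature(struct cxl_memdev_state *mds,
					const uuid_t *uuid, size_t out_size)
	{
		size_t received;
		void *out __free(kfree) = kmalloc(out_size, GFP_KERNEL);

		if (!out)
			return -ENOMEM;

		/*
		 * cxl_get_feature() internally splits the read into
		 * mailbox-payload-sized chunks and returns the total
		 * number of bytes received (0 on failure).
		 */
		received = cxl_get_feature(mds, *uuid,
					   CXL_GET_FEAT_SEL_CURRENT_VALUE,
					   out, out_size);
		if (!received)
			return -EIO;

		/* Feature-specific parsing of 'out' would go here. */
		return 0;
	}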
From patchwork Wed Oct 9 12:41:10 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 834195
Subject: [PATCH v13 09/18] cxl/mbox: Add SET_FEATURE mailbox command
Date: Wed, 9 Oct 2024 13:41:10 +0100
Message-ID: <20241009124120.1124-10-shiju.jose@huawei.com>
In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com>
References: <20241009124120.1124-1-shiju.jose@huawei.com>
From: Shiju Jose

Add support for the SET_FEATURE mailbox command.

CXL spec 3.1 section 8.2.9.6 describes optional device-specific features. CXL devices support features with changeable attributes; the settings of a feature can be modified using the Set Feature command, which is described in CXL spec 3.1 section 8.2.9.6.3.

Signed-off-by: Shiju Jose
Reviewed-by: Jonathan Cameron
---
 drivers/cxl/core/mbox.c | 73 +++++++++++++++++++++++++++++++++++++++++
 drivers/cxl/cxlmem.h    | 34 +++++++++++++++++++
 2 files changed, 107 insertions(+)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index eae374a90a9f..30c44ab11347 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1011,6 +1011,79 @@ size_t cxl_get_feature(struct cxl_memdev_state *mds, const uuid_t feat_uuid,
 }
 EXPORT_SYMBOL_NS_GPL(cxl_get_feature, CXL);
 
+/*
+ * FEAT_DATA_MIN_PAYLOAD_SIZE - minimum number of extra bytes that must be
+ * available in the mailbox for storing the actual feature data, so that
+ * the feature data transfer works as expected.
+ */
+#define FEAT_DATA_MIN_PAYLOAD_SIZE 10
+int cxl_set_feature(struct cxl_memdev_state *mds,
+		    const uuid_t feat_uuid, u8 feat_version,
+		    void *feat_data, size_t feat_data_size,
+		    u8 feat_flag)
+{
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+	struct cxl_memdev_set_feat_pi {
+		struct cxl_mbox_set_feat_hdr hdr;
+		u8 feat_data[];
+	} __packed;
+	size_t data_in_size, data_sent_size = 0;
+	struct cxl_mbox_cmd mbox_cmd;
+	size_t hdr_size;
+	int rc = 0;
+
+	struct cxl_memdev_set_feat_pi *pi __free(kfree) =
+		kmalloc(cxl_mbox->payload_size, GFP_KERNEL);
+	if (!pi)
+		return -ENOMEM;
+
+	pi->hdr.uuid = feat_uuid;
+	pi->hdr.version = feat_version;
+	feat_flag &= ~CXL_SET_FEAT_FLAG_DATA_TRANSFER_MASK;
+	feat_flag |= CXL_SET_FEAT_FLAG_DATA_SAVED_ACROSS_RESET;
+	hdr_size = sizeof(pi->hdr);
+	/*
+	 * Check that the minimum mbox payload size is available for
+	 * the feature data transfer.
+	 */
+	if (hdr_size + FEAT_DATA_MIN_PAYLOAD_SIZE > cxl_mbox->payload_size)
+		return -ENOMEM;
+
+	if ((hdr_size + feat_data_size) <= cxl_mbox->payload_size) {
+		pi->hdr.flags = cpu_to_le32(feat_flag |
+					    CXL_SET_FEAT_FLAG_FULL_DATA_TRANSFER);
+		data_in_size = feat_data_size;
+	} else {
+		pi->hdr.flags = cpu_to_le32(feat_flag |
+					    CXL_SET_FEAT_FLAG_INITIATE_DATA_TRANSFER);
+		data_in_size = cxl_mbox->payload_size - hdr_size;
+	}
+
+	do {
+		pi->hdr.offset = cpu_to_le16(data_sent_size);
+		memcpy(pi->feat_data, feat_data + data_sent_size, data_in_size);
+		mbox_cmd = (struct cxl_mbox_cmd) {
+			.opcode = CXL_MBOX_OP_SET_FEATURE,
+			.size_in = hdr_size + data_in_size,
+			.payload_in = pi,
+		};
+		rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
+		if (rc < 0)
+			return rc;
+
+		data_sent_size += data_in_size;
+		if (data_sent_size >= feat_data_size)
+			return 0;
+
+		if ((feat_data_size - data_sent_size) <= (cxl_mbox->payload_size - hdr_size)) {
+			data_in_size = feat_data_size - data_sent_size;
+			pi->hdr.flags = cpu_to_le32(feat_flag |
+						    CXL_SET_FEAT_FLAG_FINISH_DATA_TRANSFER);
+		} else {
+			pi->hdr.flags = cpu_to_le32(feat_flag |
+						    CXL_SET_FEAT_FLAG_CONTINUE_DATA_TRANSFER);
+		}
+	} while (true);
+}
+EXPORT_SYMBOL_NS_GPL(cxl_set_feature, CXL);
+
 /**
  * cxl_enumerate_cmds() - Enumerate commands for a device.
  * @mds: The driver data for the operation
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index c7ad63597709..b778eef99ce0 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -530,6 +530,7 @@ enum cxl_opcode {
 	CXL_MBOX_OP_GET_SUP_LOG_SUBLIST = 0x0405,
 	CXL_MBOX_OP_GET_SUPPORTED_FEATURES = 0x0500,
 	CXL_MBOX_OP_GET_FEATURE = 0x0501,
+	CXL_MBOX_OP_SET_FEATURE = 0x0502,
 	CXL_MBOX_OP_IDENTIFY = 0x4000,
 	CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100,
 	CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101,
@@ -877,6 +878,35 @@ struct cxl_mbox_get_feat_in {
 	u8 selection;
 } __packed;
 
+/*
+ * Set Feature CXL 3.1 Spec 8.2.9.6.3
+ */
+
+/*
+ * Set Feature input payload
+ * CXL rev 3.1 section 8.2.9.6.3 Table 8-101
+ */
+/* Set Feature : Payload in flags */
+#define CXL_SET_FEAT_FLAG_DATA_TRANSFER_MASK	GENMASK(2, 0)
+enum cxl_set_feat_flag_data_transfer {
+	CXL_SET_FEAT_FLAG_FULL_DATA_TRANSFER,
+	CXL_SET_FEAT_FLAG_INITIATE_DATA_TRANSFER,
+	CXL_SET_FEAT_FLAG_CONTINUE_DATA_TRANSFER,
+	CXL_SET_FEAT_FLAG_FINISH_DATA_TRANSFER,
+	CXL_SET_FEAT_FLAG_ABORT_DATA_TRANSFER,
+	CXL_SET_FEAT_FLAG_DATA_TRANSFER_MAX
+};
+
+#define CXL_SET_FEAT_FLAG_DATA_SAVED_ACROSS_RESET	BIT(3)
+
+struct cxl_mbox_set_feat_hdr {
+	uuid_t uuid;
+	__le32 flags;
+	__le16 offset;
+	u8 version;
+	u8 rsvd[9];
+} __packed;
+
 int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox,
			  struct cxl_mbox_cmd *cmd);
 int cxl_dev_state_identify(struct cxl_memdev_state *mds);
@@ -943,4 +973,8 @@ int cxl_get_supported_feature_entry(struct cxl_memdev_state *mds, const uuid_t *
 size_t cxl_get_feature(struct cxl_memdev_state *mds, const uuid_t feat_uuid,
		       enum cxl_get_feat_selection selection,
		       void *feat_out, size_t feat_out_size);
+int cxl_set_feature(struct cxl_memdev_state *mds,
+		    const uuid_t feat_uuid, u8 feat_version,
+		    void *feat_data, size_t feat_data_size,
+		    u8 feat_flag);
 #endif /* __CXL_MEM_H__ */
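For illustration, a hypothetical caller might push new feature settings with cxl_set_feature() as in the sketch below; the UUID, version and attribute buffer are placeholders, not names defined by this series:

	/* Sketch: write feature settings, persisting them across reset. */
	static int example_write_feature(struct cxl_memdev_state *mds,
					 const uuid_t *uuid, u8 version,
					 void *attrs, size_t attrs_size)
	{
		/*
		 * cxl_set_feature() splits the transfer into mailbox-sized
		 * chunks and manages the full/initiate/continue/finish flag
		 * bits itself; the caller passes only the remaining flags,
		 * such as "data saved across reset".
		 */
		return cxl_set_feature(mds, *uuid, version, attrs, attrs_size,
				       CXL_SET_FEAT_FLAG_DATA_SAVED_ACROSS_RESET);
	}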
From patchwork Wed Oct 9 12:41:11 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 834016
Subject: [PATCH v13 10/18] cxl/memfeature: Add CXL memory device patrol scrub control feature
Date: Wed, 9 Oct 2024 13:41:11 +0100
Message-ID: <20241009124120.1124-11-shiju.jose@huawei.com>
In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com>
References: <20241009124120.1124-1-shiju.jose@huawei.com>
From: Shiju Jose

CXL spec 3.1 section 8.2.9.9.11.1 describes the device patrol scrub control feature. The device patrol scrub proactively locates and corrects errors on a regular cycle.

Allow specifying the number of hours within which the patrol scrub must be completed, subject to the minimum and maximum limits reported by the device. Also allow disabling the scrub, to trade error rates off against performance.

Add support for CXL memory device based patrol scrub control. Register with the EDAC device driver, which gets the scrub attribute descriptors from EDAC scrub and exposes the sysfs scrub control attributes to userspace. For example, CXL device based scrub control for the CXL mem0 device is exposed in /sys/bus/edac/devices/cxl_mem0/scrubX/

Also add support for region based CXL memory patrol scrub control. A CXL memory region may be interleaved across one or more CXL memory devices.
For example, region based scrub control for CXL region1 is exposed in /sys/bus/edac/devices/cxl_region1/scrubX/

Open Questions:
Q1: The CXL 3.1 spec defines the patrol scrub control feature at the CXL memory device level, supporting setting of the scrub cycle and enabling/disabling scrub, but not per HPA range. Thus, scrub control for a region is presently implemented in terms of all the associated CXL memory devices. What is the exact use case for CXL region based scrub control? How would the HPA range, which Dan asked for in region based scrubbing, be used? Is a spec change required for the patrol scrub control feature to support setting an HPA range?
Q2: Would both CXL device based and CXL region based scrub control be enabled at the same time in a system?

Co-developed-by: Jonathan Cameron
Signed-off-by: Jonathan Cameron
Signed-off-by: Shiju Jose
---
 Documentation/edac/edac-scrub.rst |  74 ++++++
 drivers/cxl/Kconfig               |  18 ++
 drivers/cxl/core/Makefile         |   1 +
 drivers/cxl/core/memfeature.c     | 383 ++++++++++++++++++++++++++++++
 drivers/cxl/core/region.c         |   6 +
 drivers/cxl/cxlmem.h              |   7 +
 drivers/cxl/mem.c                 |   4 +
 7 files changed, 493 insertions(+)
 create mode 100644 Documentation/edac/edac-scrub.rst
 create mode 100644 drivers/cxl/core/memfeature.c

diff --git a/Documentation/edac/edac-scrub.rst b/Documentation/edac/edac-scrub.rst
new file mode 100644
index 000000000000..243035957e99
--- /dev/null
+++ b/Documentation/edac/edac-scrub.rst
@@ -0,0 +1,74 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===================
+EDAC Scrub control
+===================
+
+Copyright (c) 2024 HiSilicon Limited.
+
+:Author: Shiju Jose
+:License: The GNU Free Documentation License, Version 1.2
+          (dual licensed under the GPL v2)
+:Original Reviewers:
+
+- Written for: 6.12
+- Updated for:
+
+Introduction
+------------
+The EDAC enhancement for RAS features exposes interfaces for controlling
+the memory scrubbers in the system. The scrub device drivers in the
+system register with the EDAC scrub. The driver exposes the
+scrub controls to the user via sysfs.
+
+The File System
+---------------
+
+The control attributes of a registered scrubber instance can be
+accessed in the /sys/bus/edac/devices//scrub*/
+
+sysfs
+-----
+
+Sysfs files are documented in
+`Documentation/ABI/testing/sysfs-edac-scrub-control`.
+
+Example
+-------
+
+The usage takes the form shown in this example::
+
+1. CXL memory device patrol scrubber
+1.1 device based
+root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/min_cycle_duration
+3600
+root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/max_cycle_duration
+918000
+root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/current_cycle_duration
+43200
+root@localhost:~# echo 54000 > /sys/bus/edac/devices/cxl_mem0/scrub0/current_cycle_duration
+root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/current_cycle_duration
+54000
+root@localhost:~# echo 1 > /sys/bus/edac/devices/cxl_mem0/scrub0/enable_background
+root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/enable_background
+1
+root@localhost:~# echo 0 > /sys/bus/edac/devices/cxl_mem0/scrub0/enable_background
+root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/enable_background
+0
+1.2. region based
+root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/min_cycle_duration
+3600
+root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/max_cycle_duration
+918000
+root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/current_cycle_duration
+43200
+root@localhost:~# echo 54000 > /sys/bus/edac/devices/cxl_region0/scrub0/current_cycle_duration
+root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/current_cycle_duration
+54000
+root@localhost:~# echo 1 > /sys/bus/edac/devices/cxl_region0/scrub0/enable_background
+root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/enable_background
+1
+root@localhost:~# echo 0 > /sys/bus/edac/devices/cxl_region0/scrub0/enable_background
+root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/enable_background
+0
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 99b5c25be079..b717a152d2a5 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -145,4 +145,22 @@ config CXL_REGION_INVALIDATION_TEST
 	  If unsure, or if this kernel is meant for production environments,
 	  say N.
 
+config CXL_RAS_FEAT
+	tristate "CXL: Memory RAS features"
+	depends on CXL_PCI
+	depends on CXL_MEM
+	depends on EDAC
+	help
+	  The CXL memory RAS feature control is optional and allows the host
+	  to control the RAS feature configurations of CXL Type 3 devices.
+
+	  It registers with the EDAC device subsystem to expose the control
+	  attributes of the CXL memory device's RAS features to the user.
+	  It also provides interface functions to support configuring the
+	  CXL memory device's RAS features.
+
+	  Say 'y/n' to enable/disable control of the CXL.mem device's RAS
+	  features. See section 8.2.9.9.11 of the CXL 3.1 specification for
+	  detailed information about CXL memory device features.
+
 endif
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 9259bcc6773c..2a3c7197bc23 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -16,3 +16,4 @@ cxl_core-y += pmu.o
 cxl_core-y += cdat.o
 cxl_core-$(CONFIG_TRACING) += trace.o
 cxl_core-$(CONFIG_CXL_REGION) += region.o
+cxl_core-$(CONFIG_CXL_RAS_FEAT) += memfeature.o
diff --git a/drivers/cxl/core/memfeature.c b/drivers/cxl/core/memfeature.c
new file mode 100644
index 000000000000..84d6e887a4fa
--- /dev/null
+++ b/drivers/cxl/core/memfeature.c
@@ -0,0 +1,383 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * CXL memory RAS feature driver.
+ *
+ * Copyright (c) 2024 HiSilicon Limited.
+ *
+ * - Supports functions to configure RAS features of the
+ *   CXL memory devices.
+ * - Registers with the EDAC device subsystem driver to expose
+ *   the features sysfs attributes to the user for configuring
+ *   CXL memory RAS features.
+ */
+
+#define pr_fmt(fmt)	"CXL MEM FEAT: " fmt
+
+#include
+#include
+#include
+#include
+#include
+
+#define CXL_DEV_NUM_RAS_FEATURES	1
+#define CXL_DEV_HOUR_IN_SECS	3600
+
+#define CXL_SCRUB_NAME_LEN	128
+
+/* CXL memory patrol scrub control definitions */
+static const uuid_t cxl_patrol_scrub_uuid =
+	UUID_INIT(0x96dad7d6, 0xfde8, 0x482b, 0xa7, 0x33, 0x75, 0x77, 0x4e, \
+		  0x06, 0xdb, 0x8a);
+
+/* CXL memory patrol scrub control functions */
+struct cxl_patrol_scrub_context {
+	u8 instance;
+	u16 get_feat_size;
+	u16 set_feat_size;
+	u8 get_version;
+	u8 set_version;
+	u16 set_effects;
+	struct cxl_memdev *cxlmd;
+	struct cxl_region *cxlr;
+};
+
+/**
+ * struct cxl_memdev_ps_params - CXL memory patrol scrub parameter data structure.
+ * @enable:     [IN & OUT] enable(1)/disable(0) patrol scrub.
+ * @scrub_cycle_changeable: [OUT] scrub cycle attribute of patrol scrub is changeable.
+ * @scrub_cycle_hrs:    [IN] Requested patrol scrub cycle in hours.
+ *                      [OUT] Current patrol scrub cycle in hours.
+ * @min_scrub_cycle_hrs: [OUT] minimum patrol scrub cycle in hours supported.
+ */
+struct cxl_memdev_ps_params {
+	bool enable;
+	bool scrub_cycle_changeable;
+	u16 scrub_cycle_hrs;
+	u16 min_scrub_cycle_hrs;
+};
+
+enum cxl_scrub_param {
+	CXL_PS_PARAM_ENABLE,
+	CXL_PS_PARAM_SCRUB_CYCLE,
+};
+
+#define	CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK	BIT(0)
+#define	CXL_MEMDEV_PS_SCRUB_CYCLE_REALTIME_REPORT_CAP_MASK	BIT(1)
+#define	CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK	GENMASK(7, 0)
+#define	CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK	GENMASK(15, 8)
+#define	CXL_MEMDEV_PS_FLAG_ENABLED_MASK	BIT(0)
+
+struct cxl_memdev_ps_rd_attrs {
+	u8 scrub_cycle_cap;
+	__le16 scrub_cycle_hrs;
+	u8 scrub_flags;
+} __packed;
+
+struct cxl_memdev_ps_wr_attrs {
+	u8 scrub_cycle_hrs;
+	u8 scrub_flags;
+} __packed;
+
+static int cxl_mem_ps_get_attrs(struct cxl_memdev_state *mds,
+				struct cxl_memdev_ps_params *params)
+{
+	size_t rd_data_size = sizeof(struct cxl_memdev_ps_rd_attrs);
+	size_t data_size;
+	struct cxl_memdev_ps_rd_attrs *rd_attrs __free(kfree) =
+		kmalloc(rd_data_size, GFP_KERNEL);
+	if (!rd_attrs)
+		return -ENOMEM;
+
+	data_size = cxl_get_feature(mds, cxl_patrol_scrub_uuid,
+				    CXL_GET_FEAT_SEL_CURRENT_VALUE,
+				    rd_attrs, rd_data_size);
+	if (!data_size)
+		return -EIO;
+
+	params->scrub_cycle_changeable = FIELD_GET(CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK,
+						   rd_attrs->scrub_cycle_cap);
+	params->enable = FIELD_GET(CXL_MEMDEV_PS_FLAG_ENABLED_MASK,
+				   rd_attrs->scrub_flags);
+	params->scrub_cycle_hrs = FIELD_GET(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK,
+					    le16_to_cpu(rd_attrs->scrub_cycle_hrs));
+	params->min_scrub_cycle_hrs = FIELD_GET(CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK,
+						le16_to_cpu(rd_attrs->scrub_cycle_hrs));
+
+	return 0;
+}
+
+static int cxl_ps_get_attrs(struct device *dev, void *drv_data,
+			    struct cxl_memdev_ps_params *params)
+{
+	struct cxl_patrol_scrub_context *cxl_ps_ctx = drv_data;
+	struct cxl_memdev *cxlmd;
+	struct cxl_dev_state *cxlds;
+	struct cxl_memdev_state *mds;
+	u16 min_scrub_cycle = 0;
+	int i, ret;
+
+	if (cxl_ps_ctx->cxlr) {
+		struct cxl_region *cxlr = cxl_ps_ctx->cxlr;
+		struct cxl_region_params *p = &cxlr->params;
+
+		for (i = p->interleave_ways - 1; i >= 0; i--) {
+			struct cxl_endpoint_decoder *cxled = p->targets[i];
+
+			cxlmd = cxled_to_memdev(cxled);
+			cxlds = cxlmd->cxlds;
+			mds = to_cxl_memdev_state(cxlds);
+			ret = cxl_mem_ps_get_attrs(mds, params);
+			if (ret)
+				return ret;
+
+			if (params->min_scrub_cycle_hrs > min_scrub_cycle)
+				min_scrub_cycle = params->min_scrub_cycle_hrs;
+		}
+		params->min_scrub_cycle_hrs = min_scrub_cycle;
+		return 0;
+	}
+	cxlmd = cxl_ps_ctx->cxlmd;
+	cxlds = cxlmd->cxlds;
+	mds = to_cxl_memdev_state(cxlds);
+
+	return cxl_mem_ps_get_attrs(mds, params);
+}
+
+static int cxl_mem_ps_set_attrs(struct device *dev, void *drv_data,
+				struct cxl_memdev_state *mds,
+				struct cxl_memdev_ps_params *params,
+				enum cxl_scrub_param param_type)
+{
+	struct cxl_patrol_scrub_context *cxl_ps_ctx = drv_data;
+	struct cxl_memdev_ps_wr_attrs wr_attrs;
+	struct cxl_memdev_ps_params rd_params;
+	int ret;
+
+	ret = cxl_mem_ps_get_attrs(mds, &rd_params);
+	if (ret) {
+		dev_err(dev, "Get cxlmemdev patrol scrub params failed ret=%d\n",
+			ret);
+		return ret;
+	}
+
+	switch (param_type) {
+	case CXL_PS_PARAM_ENABLE:
+		wr_attrs.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK,
+						  params->enable);
+
wr_attrs.scrub_cycle_hrs = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, + rd_params.scrub_cycle_hrs); + break; + case CXL_PS_PARAM_SCRUB_CYCLE: + if (params->scrub_cycle_hrs < rd_params.min_scrub_cycle_hrs) { + dev_err(dev, "Invalid CXL patrol scrub cycle(%d) to set\n", + params->scrub_cycle_hrs); + dev_err(dev, "Minimum supported CXL patrol scrub cycle in hour %d\n", + rd_params.min_scrub_cycle_hrs); + return -EINVAL; + } + wr_attrs.scrub_cycle_hrs = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, + params->scrub_cycle_hrs); + wr_attrs.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, + rd_params.enable); + break; + } + + ret = cxl_set_feature(mds, cxl_patrol_scrub_uuid, + cxl_ps_ctx->set_version, + &wr_attrs, sizeof(wr_attrs), + CXL_SET_FEAT_FLAG_DATA_SAVED_ACROSS_RESET); + if (ret) { + dev_err(dev, "CXL patrol scrub set feature failed ret=%d\n", ret); + return ret; + } + + return 0; +} + +static int cxl_ps_set_attrs(struct device *dev, void *drv_data, + struct cxl_memdev_ps_params *params, + enum cxl_scrub_param param_type) +{ + struct cxl_patrol_scrub_context *cxl_ps_ctx = drv_data; + struct cxl_memdev *cxlmd; + struct cxl_dev_state *cxlds; + struct cxl_memdev_state *mds; + int ret, i; + + if (cxl_ps_ctx->cxlr) { + struct cxl_region *cxlr = cxl_ps_ctx->cxlr; + struct cxl_region_params *p = &cxlr->params; + + for (i = p->interleave_ways - 1; i >= 0; i--) { + struct cxl_endpoint_decoder *cxled = p->targets[i]; + + cxlmd = cxled_to_memdev(cxled); + cxlds = cxlmd->cxlds; + mds = to_cxl_memdev_state(cxlds); + ret = cxl_mem_ps_set_attrs(dev, drv_data, mds, + params, param_type); + if (ret) + return ret; + } + } else { + cxlmd = cxl_ps_ctx->cxlmd; + cxlds = cxlmd->cxlds; + mds = to_cxl_memdev_state(cxlds); + + return cxl_mem_ps_set_attrs(dev, drv_data, mds, params, param_type); + } + + return 0; +} + +static int cxl_patrol_scrub_get_enabled_bg(struct device *dev, void *drv_data, bool *enabled) +{ + struct cxl_memdev_ps_params params; + int ret; + + ret = cxl_ps_get_attrs(dev, drv_data, ¶ms); + if (ret) + return ret; + + *enabled = params.enable; + + return 0; +} + +static int cxl_patrol_scrub_set_enabled_bg(struct device *dev, void *drv_data, bool enable) +{ + struct cxl_memdev_ps_params params = { + .enable = enable, + }; + + return cxl_ps_set_attrs(dev, drv_data, ¶ms, CXL_PS_PARAM_ENABLE); +} + +static int cxl_patrol_scrub_read_min_scrub_cycle(struct device *dev, void *drv_data, + u32 *min) +{ + struct cxl_memdev_ps_params params; + int ret; + + ret = cxl_ps_get_attrs(dev, drv_data, ¶ms); + if (ret) + return ret; + *min = params.min_scrub_cycle_hrs * CXL_DEV_HOUR_IN_SECS; + + return 0; +} + +static int cxl_patrol_scrub_read_max_scrub_cycle(struct device *dev, void *drv_data, + u32 *max) +{ + *max = U8_MAX * CXL_DEV_HOUR_IN_SECS; /* Max set by register size */ + + return 0; +} + +static int cxl_patrol_scrub_read_scrub_cycle(struct device *dev, void *drv_data, + u32 *scrub_cycle_secs) +{ + struct cxl_memdev_ps_params params; + int ret; + + ret = cxl_ps_get_attrs(dev, drv_data, ¶ms); + if (ret) + return ret; + + *scrub_cycle_secs = params.scrub_cycle_hrs * CXL_DEV_HOUR_IN_SECS; + + return 0; +} + +static int cxl_patrol_scrub_write_scrub_cycle(struct device *dev, void *drv_data, + u32 scrub_cycle_secs) +{ + struct cxl_memdev_ps_params params = { + .scrub_cycle_hrs = scrub_cycle_secs / CXL_DEV_HOUR_IN_SECS, + }; + + return cxl_ps_set_attrs(dev, drv_data, ¶ms, CXL_PS_PARAM_SCRUB_CYCLE); +} + +static const struct edac_scrub_ops cxl_ps_scrub_ops = { + .get_enabled_bg = 
cxl_patrol_scrub_get_enabled_bg, + .set_enabled_bg = cxl_patrol_scrub_set_enabled_bg, + .get_min_cycle = cxl_patrol_scrub_read_min_scrub_cycle, + .get_max_cycle = cxl_patrol_scrub_read_max_scrub_cycle, + .get_cycle_duration = cxl_patrol_scrub_read_scrub_cycle, + .set_cycle_duration = cxl_patrol_scrub_write_scrub_cycle, +}; + +int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) +{ + struct edac_dev_feature ras_features[CXL_DEV_NUM_RAS_FEATURES]; + struct cxl_dev_state *cxlds; + struct cxl_memdev_state *mds; + struct cxl_patrol_scrub_context *cxl_ps_ctx; + struct cxl_feat_entry feat_entry; + char cxl_dev_name[CXL_SCRUB_NAME_LEN]; + int rc, i, num_ras_features = 0; + u8 scrub_inst = 0; + + if (cxlr) { + struct cxl_region_params *p = &cxlr->params; + + for (i = p->interleave_ways - 1; i >= 0; i--) { + struct cxl_endpoint_decoder *cxled = p->targets[i]; + + cxlmd = cxled_to_memdev(cxled); + cxlds = cxlmd->cxlds; + mds = to_cxl_memdev_state(cxlds); + memset(&feat_entry, 0, sizeof(feat_entry)); + rc = cxl_get_supported_feature_entry(mds, &cxl_patrol_scrub_uuid, + &feat_entry); + if (rc < 0) + return rc; + if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) + return -EOPNOTSUPP; + } + } else { + cxlds = cxlmd->cxlds; + mds = to_cxl_memdev_state(cxlds); + rc = cxl_get_supported_feature_entry(mds, &cxl_patrol_scrub_uuid, + &feat_entry); + if (rc < 0) + return rc; + + if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) + return -EOPNOTSUPP; + } + + cxl_ps_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ps_ctx), GFP_KERNEL); + if (!cxl_ps_ctx) + return -ENOMEM; + + *cxl_ps_ctx = (struct cxl_patrol_scrub_context) { + .get_feat_size = feat_entry.get_feat_size, + .set_feat_size = feat_entry.set_feat_size, + .get_version = feat_entry.get_feat_ver, + .set_version = feat_entry.set_feat_ver, + .set_effects = feat_entry.set_effects, + .instance = scrub_inst++, + }; + if (cxlr) { + snprintf(cxl_dev_name, sizeof(cxl_dev_name), + "cxl_region%d", cxlr->id); + cxl_ps_ctx->cxlr = cxlr; + } else { + snprintf(cxl_dev_name, sizeof(cxl_dev_name), + "%s_%s", "cxl", dev_name(&cxlmd->dev)); + cxl_ps_ctx->cxlmd = cxlmd; + } + + ras_features[num_ras_features].ft_type = RAS_FEAT_SCRUB; + ras_features[num_ras_features].instance = cxl_ps_ctx->instance; + ras_features[num_ras_features].scrub_ops = &cxl_ps_scrub_ops; + ras_features[num_ras_features].ctx = cxl_ps_ctx; + num_ras_features++; + + return edac_dev_register(&cxlmd->dev, cxl_dev_name, NULL, + num_ras_features, ras_features); +} +EXPORT_SYMBOL_NS_GPL(cxl_mem_ras_features_init, CXL); diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index 21ad5f242875..1cc29ec9ffac 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -3434,6 +3434,12 @@ static int cxl_region_probe(struct device *dev) p->res->start, p->res->end, cxlr, is_system_ram) > 0) return 0; + + rc = cxl_mem_ras_features_init(NULL, cxlr); + if (rc) + dev_warn(&cxlr->dev, "CXL RAS features init for region_id=%d failed\n", + cxlr->id); + return devm_cxl_add_dax_region(cxlr); default: dev_dbg(&cxlr->dev, "unsupported region mode: %d\n", diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index b778eef99ce0..e1156ea93fe7 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -931,6 +931,13 @@ int cxl_trigger_poison_list(struct cxl_memdev *cxlmd); int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa); int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa); +#ifdef CONFIG_CXL_RAS_FEAT +int cxl_mem_ras_features_init(struct 
cxl_memdev *cxlmd, struct cxl_region *cxlr);
+#else
+static inline int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr)
+{ return 0; }
+#endif
+
 #ifdef CONFIG_CXL_SUSPEND
 void cxl_mem_active_inc(void);
 void cxl_mem_active_dec(void);
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index 7de232eaeb17..be2e69548909 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -117,6 +117,10 @@ static int cxl_mem_probe(struct device *dev)
 	if (!cxlds->media_ready)
 		return -EBUSY;
 
+	rc = cxl_mem_ras_features_init(cxlmd, NULL);
+	if (rc)
+		dev_warn(&cxlmd->dev, "CXL RAS features init failed\n");
+
 	/*
	 * Someone is trying to reattach this device after it lost its port
	 * connection (an endpoint port previously registered by this memdev was

From patchwork Wed Oct 9 12:41:12 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 834194
Subject: [PATCH v13 11/18] cxl/memfeature: Add CXL memory device ECS control feature
Date: Wed, 9 Oct 2024 13:41:12 +0100
Message-ID: <20241009124120.1124-12-shiju.jose@huawei.com>
In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com>
References: <20241009124120.1124-1-shiju.jose@huawei.com>
From: Shiju Jose

CXL spec 3.1 section 8.2.9.9.11.2 describes the DDR5 ECS (Error Check Scrub) control feature.

Error Check Scrub (ECS) is a feature defined in the JEDEC DDR5 SDRAM Specification (JESD79-5); it allows the DRAM to internally read, correct single-bit errors, and write back corrected data bits to the DRAM array, while providing transparency to error counts.

The ECS control allows the requester to change the log entry type, change the ECS threshold count (provided the request is within the limits specified in the DDR5 mode registers), switch between codeword mode and row count mode, and reset the ECS counter.

Register with the EDAC device driver, which gets the ECS attribute descriptors from the EDAC ECS and exposes the sysfs ECS control attributes to userspace. For example, ECS control for memory media FRU X in the CXL mem0 device is in /sys/bus/edac/devices/cxl_mem0/ecs_fruX/

Signed-off-by: Shiju Jose
---
 drivers/cxl/core/memfeature.c | 467 +++++++++++++++++++++++++++++++++-
 1 file changed, 461 insertions(+), 6 deletions(-)

diff --git a/drivers/cxl/core/memfeature.c b/drivers/cxl/core/memfeature.c
index 84d6e887a4fa..567406566c77 100644
--- a/drivers/cxl/core/memfeature.c
+++ b/drivers/cxl/core/memfeature.c
@@ -19,7 +19,7 @@
 #include
 #include
 
-#define CXL_DEV_NUM_RAS_FEATURES	1
+#define CXL_DEV_NUM_RAS_FEATURES	2
 #define CXL_DEV_HOUR_IN_SECS	3600
 
 #define CXL_SCRUB_NAME_LEN	128
@@ -309,6 +309,420 @@ static const struct edac_scrub_ops cxl_ps_scrub_ops = {
 	.set_cycle_duration = cxl_patrol_scrub_write_scrub_cycle,
 };
 
+/* CXL DDR5 ECS control definitions */
+static const uuid_t cxl_ecs_uuid =
+	UUID_INIT(0xe5b13f22, 0x2328, 0x4a14, 0xb8, 0xba, 0xb9, 0x69, 0x1e, \
+		  0x89, 0x33, 0x86);
+
+struct cxl_ecs_context {
+	u16 num_media_frus;
+	u16 get_feat_size;
+	u16 set_feat_size;
+	u8 get_version;
+	u8 set_version;
+	u16 set_effects;
+	struct cxl_memdev *cxlmd;
+};
+
+enum {
+	CXL_ECS_PARAM_LOG_ENTRY_TYPE,
+	CXL_ECS_PARAM_THRESHOLD,
+	CXL_ECS_PARAM_MODE,
+	CXL_ECS_PARAM_RESET_COUNTER,
+};
+
+#define CXL_ECS_LOG_ENTRY_TYPE_MASK	GENMASK(1, 0)
+#define CXL_ECS_REALTIME_REPORT_CAP_MASK	BIT(0)
+#define CXL_ECS_THRESHOLD_COUNT_MASK	GENMASK(2, 0)
+#define CXL_ECS_MODE_MASK	BIT(3)
+#define CXL_ECS_RESET_COUNTER_MASK	BIT(4)
+
+static const u16 ecs_supp_threshold[] = { 0, 0, 0, 256, 1024, 4096 };
+
+enum {
+	ECS_LOG_ENTRY_TYPE_DRAM = 0x0,
+	ECS_LOG_ENTRY_TYPE_MEM_MEDIA_FRU = 0x1,
+};
+
+enum {
+	ECS_THRESHOLD_256 = 3,
+	ECS_THRESHOLD_1024 = 4,
+	ECS_THRESHOLD_4096 = 5,
+};
+
+enum cxl_ecs_mode {
+	ECS_MODE_COUNTS_ROWS = 0,
+	ECS_MODE_COUNTS_CODEWORDS = 1,
+};
+
+/**
+ * struct cxl_ecs_params - CXL memory DDR5 ECS parameter data structure.
+ * @log_entry_type: ECS log entry type, per DRAM or per memory media FRU.
+ * @threshold: ECS threshold count per GB of memory cells.
+ * @mode: codeword/row count mode
+ *        0 : ECS counts rows with errors
+ *        1 : ECS counts codewords with errors
+ * @reset_counter: [IN] reset the ECS counter to its default value.
+ */ +struct cxl_ecs_params { + u8 log_entry_type; + u16 threshold; + enum cxl_ecs_mode mode; + bool reset_counter; +}; + +struct cxl_ecs_fru_rd_attrs { + u8 ecs_cap; + __le16 ecs_config; + u8 ecs_flags; +} __packed; + +struct cxl_ecs_rd_attrs { + u8 ecs_log_cap; + struct cxl_ecs_fru_rd_attrs fru_attrs[]; +} __packed; + +struct cxl_ecs_fru_wr_attrs { + __le16 ecs_config; +} __packed; + +struct cxl_ecs_wr_attrs { + u8 ecs_log_cap; + struct cxl_ecs_fru_wr_attrs fru_attrs[]; +} __packed; + +/* CXL DDR5 ECS control functions */ +static int cxl_mem_ecs_get_attrs(struct device *dev, void *drv_data, int fru_id, + struct cxl_ecs_params *params) +{ + struct cxl_ecs_context *cxl_ecs_ctx = drv_data; + struct cxl_memdev *cxlmd = cxl_ecs_ctx->cxlmd; + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + struct cxl_ecs_fru_rd_attrs *fru_rd_attrs; + size_t rd_data_size; + u8 threshold_index; + size_t data_size; + + rd_data_size = cxl_ecs_ctx->get_feat_size; + + struct cxl_ecs_rd_attrs *rd_attrs __free(kfree) = + kmalloc(rd_data_size, GFP_KERNEL); + if (!rd_attrs) + return -ENOMEM; + + params->log_entry_type = 0; + params->threshold = 0; + params->mode = 0; + data_size = cxl_get_feature(mds, cxl_ecs_uuid, + CXL_GET_FEAT_SEL_CURRENT_VALUE, + rd_attrs, rd_data_size); + if (!data_size) + return -EIO; + + fru_rd_attrs = rd_attrs->fru_attrs; + params->log_entry_type = FIELD_GET(CXL_ECS_LOG_ENTRY_TYPE_MASK, + rd_attrs->ecs_log_cap); + threshold_index = FIELD_GET(CXL_ECS_THRESHOLD_COUNT_MASK, + fru_rd_attrs[fru_id].ecs_config); + params->threshold = ecs_supp_threshold[threshold_index]; + params->mode = FIELD_GET(CXL_ECS_MODE_MASK, + fru_rd_attrs[fru_id].ecs_config); + return 0; +} + +static int cxl_mem_ecs_set_attrs(struct device *dev, void *drv_data, int fru_id, + struct cxl_ecs_params *params, u8 param_type) +{ + struct cxl_ecs_context *cxl_ecs_ctx = drv_data; + struct cxl_memdev *cxlmd = cxl_ecs_ctx->cxlmd; + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + struct cxl_ecs_fru_rd_attrs *fru_rd_attrs; + struct cxl_ecs_fru_wr_attrs *fru_wr_attrs; + size_t rd_data_size, wr_data_size; + u16 num_media_frus, count; + size_t data_size; + int ret; + + num_media_frus = cxl_ecs_ctx->num_media_frus; + rd_data_size = cxl_ecs_ctx->get_feat_size; + wr_data_size = cxl_ecs_ctx->set_feat_size; + struct cxl_ecs_rd_attrs *rd_attrs __free(kfree) = + kmalloc(rd_data_size, GFP_KERNEL); + if (!rd_attrs) + return -ENOMEM; + + data_size = cxl_get_feature(mds, cxl_ecs_uuid, + CXL_GET_FEAT_SEL_CURRENT_VALUE, + rd_attrs, rd_data_size); + if (!data_size) + return -EIO; + struct cxl_ecs_wr_attrs *wr_attrs __free(kfree) = + kmalloc(wr_data_size, GFP_KERNEL); + if (!wr_attrs) + return -ENOMEM; + + /* Fill writable attributes from the current attributes read + * for all the media FRUs. 
+ */ + fru_rd_attrs = rd_attrs->fru_attrs; + fru_wr_attrs = wr_attrs->fru_attrs; + wr_attrs->ecs_log_cap = rd_attrs->ecs_log_cap; + for (count = 0; count < num_media_frus; count++) + fru_wr_attrs[count].ecs_config = fru_rd_attrs[count].ecs_config; + + /* Fill attribute to be set for the media FRU */ + switch (param_type) { + case CXL_ECS_PARAM_LOG_ENTRY_TYPE: + if (params->log_entry_type != ECS_LOG_ENTRY_TYPE_DRAM && + params->log_entry_type != ECS_LOG_ENTRY_TYPE_MEM_MEDIA_FRU) { + dev_err(dev, + "Invalid CXL ECS scrub log entry type(%d) to set\n", + params->log_entry_type); + dev_err(dev, + "Log Entry Type 0: per DRAM 1: per Memory Media FRU\n"); + return -EINVAL; + } + wr_attrs->ecs_log_cap = FIELD_PREP(CXL_ECS_LOG_ENTRY_TYPE_MASK, + params->log_entry_type); + break; + case CXL_ECS_PARAM_THRESHOLD: + fru_wr_attrs[fru_id].ecs_config &= ~CXL_ECS_THRESHOLD_COUNT_MASK; + switch (params->threshold) { + case 256: + fru_wr_attrs[fru_id].ecs_config |= FIELD_PREP(CXL_ECS_THRESHOLD_COUNT_MASK, + ECS_THRESHOLD_256); + break; + case 1024: + fru_wr_attrs[fru_id].ecs_config |= FIELD_PREP(CXL_ECS_THRESHOLD_COUNT_MASK, + ECS_THRESHOLD_1024); + break; + case 4096: + fru_wr_attrs[fru_id].ecs_config |= FIELD_PREP(CXL_ECS_THRESHOLD_COUNT_MASK, + ECS_THRESHOLD_4096); + break; + default: + dev_err(dev, + "Invalid CXL ECS scrub threshold count(%d) to set\n", + params->threshold); + dev_err(dev, + "Supported scrub threshold counts: 256,1024,4096\n"); + return -EINVAL; + } + break; + case CXL_ECS_PARAM_MODE: + if (params->mode != ECS_MODE_COUNTS_ROWS && + params->mode != ECS_MODE_COUNTS_CODEWORDS) { + dev_err(dev, + "Invalid CXL ECS scrub mode(%d) to set\n", + params->mode); + dev_err(dev, + "Supported ECS Modes: 0: ECS counts rows with errors," + " 1: ECS counts codewords with errors\n"); + return -EINVAL; + } + fru_wr_attrs[fru_id].ecs_config &= ~CXL_ECS_MODE_MASK; + fru_wr_attrs[fru_id].ecs_config |= FIELD_PREP(CXL_ECS_MODE_MASK, + params->mode); + break; + case CXL_ECS_PARAM_RESET_COUNTER: + fru_wr_attrs[fru_id].ecs_config &= ~CXL_ECS_RESET_COUNTER_MASK; + fru_wr_attrs[fru_id].ecs_config |= FIELD_PREP(CXL_ECS_RESET_COUNTER_MASK, + params->reset_counter); + break; + default: + dev_err(dev, "Invalid CXL ECS parameter to set\n"); + return -EINVAL; + } + + ret = cxl_set_feature(mds, cxl_ecs_uuid, cxl_ecs_ctx->set_version, + wr_attrs, wr_data_size, + CXL_SET_FEAT_FLAG_DATA_SAVED_ACROSS_RESET); + if (ret) { + dev_err(dev, "CXL ECS set feature failed ret=%d\n", ret); + return ret; + } + + return 0; +} + +static int cxl_ecs_get_log_entry_type(struct device *dev, void *drv_data, + int fru_id, u32 *val) +{ + struct cxl_ecs_params params; + int ret; + + ret = cxl_mem_ecs_get_attrs(dev, drv_data, fru_id, ¶ms); + if (ret) + return ret; + + *val = params.log_entry_type; + + return 0; +} + +static int cxl_ecs_set_log_entry_type(struct device *dev, void *drv_data, + int fru_id, u32 val) +{ + struct cxl_ecs_params params = { + .log_entry_type = val, + }; + + return cxl_mem_ecs_set_attrs(dev, drv_data, fru_id, + ¶ms, CXL_ECS_PARAM_LOG_ENTRY_TYPE); +} + +static int cxl_ecs_get_log_entry_type_per_dram(struct device *dev, void *drv_data, + int fru_id, u32 *val) +{ + struct cxl_ecs_params params; + int ret; + + ret = cxl_mem_ecs_get_attrs(dev, drv_data, fru_id, ¶ms); + if (ret) + return ret; + + if (params.log_entry_type == ECS_LOG_ENTRY_TYPE_DRAM) + *val = 1; + else + *val = 0; + + return 0; +} + +static int cxl_ecs_get_log_entry_type_per_memory_media(struct device *dev, + void *drv_data, + int fru_id, u32 *val) +{ + struct 
cxl_ecs_params params; + int ret; + + ret = cxl_mem_ecs_get_attrs(dev, drv_data, fru_id, ¶ms); + if (ret) + return ret; + + if (params.log_entry_type == ECS_LOG_ENTRY_TYPE_MEM_MEDIA_FRU) + *val = 1; + else + *val = 0; + + return 0; +} + +static int cxl_ecs_get_mode(struct device *dev, void *drv_data, + int fru_id, u32 *val) +{ + struct cxl_ecs_params params; + int ret; + + ret = cxl_mem_ecs_get_attrs(dev, drv_data, fru_id, ¶ms); + if (ret) + return ret; + + *val = params.mode; + + return 0; +} + +static int cxl_ecs_set_mode(struct device *dev, void *drv_data, + int fru_id, u32 val) +{ + struct cxl_ecs_params params = { + .mode = val, + }; + + return cxl_mem_ecs_set_attrs(dev, drv_data, fru_id, + ¶ms, CXL_ECS_PARAM_MODE); +} + +static int cxl_ecs_get_mode_counts_rows(struct device *dev, void *drv_data, + int fru_id, u32 *val) +{ + struct cxl_ecs_params params; + int ret; + + ret = cxl_mem_ecs_get_attrs(dev, drv_data, fru_id, ¶ms); + if (ret) + return ret; + + if (params.mode == ECS_MODE_COUNTS_ROWS) + *val = 1; + else + *val = 0; + + return 0; +} + +static int cxl_ecs_get_mode_counts_codewords(struct device *dev, void *drv_data, + int fru_id, u32 *val) +{ + struct cxl_ecs_params params; + int ret; + + ret = cxl_mem_ecs_get_attrs(dev, drv_data, fru_id, ¶ms); + if (ret) + return ret; + + if (params.mode == ECS_MODE_COUNTS_CODEWORDS) + *val = 1; + else + *val = 0; + + return 0; +} + +static int cxl_ecs_reset(struct device *dev, void *drv_data, int fru_id, u32 val) +{ + struct cxl_ecs_params params = { + .reset_counter = val, + }; + + return cxl_mem_ecs_set_attrs(dev, drv_data, fru_id, + ¶ms, CXL_ECS_PARAM_RESET_COUNTER); +} + +static int cxl_ecs_get_threshold(struct device *dev, void *drv_data, + int fru_id, u32 *val) +{ + struct cxl_ecs_params params; + int ret; + + ret = cxl_mem_ecs_get_attrs(dev, drv_data, fru_id, ¶ms); + if (ret) + return ret; + + *val = params.threshold; + + return 0; +} + +static int cxl_ecs_set_threshold(struct device *dev, void *drv_data, + int fru_id, u32 val) +{ + struct cxl_ecs_params params = { + .threshold = val, + }; + + return cxl_mem_ecs_set_attrs(dev, drv_data, fru_id, + ¶ms, CXL_ECS_PARAM_THRESHOLD); +} + +static const struct edac_ecs_ops cxl_ecs_ops = { + .get_log_entry_type = cxl_ecs_get_log_entry_type, + .set_log_entry_type = cxl_ecs_set_log_entry_type, + .get_log_entry_type_per_dram = cxl_ecs_get_log_entry_type_per_dram, + .get_log_entry_type_per_memory_media = + cxl_ecs_get_log_entry_type_per_memory_media, + .get_mode = cxl_ecs_get_mode, + .set_mode = cxl_ecs_set_mode, + .get_mode_counts_codewords = cxl_ecs_get_mode_counts_codewords, + .get_mode_counts_rows = cxl_ecs_get_mode_counts_rows, + .reset = cxl_ecs_reset, + .get_threshold = cxl_ecs_get_threshold, + .set_threshold = cxl_ecs_set_threshold, +}; + int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) { struct edac_dev_feature ras_features[CXL_DEV_NUM_RAS_FEATURES]; @@ -317,7 +731,9 @@ int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) struct cxl_patrol_scrub_context *cxl_ps_ctx; struct cxl_feat_entry feat_entry; char cxl_dev_name[CXL_SCRUB_NAME_LEN]; + struct cxl_ecs_context *cxl_ecs_ctx; int rc, i, num_ras_features = 0; + int num_media_frus; u8 scrub_inst = 0; if (cxlr) { @@ -343,16 +759,18 @@ int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) rc = cxl_get_supported_feature_entry(mds, &cxl_patrol_scrub_uuid, &feat_entry); if (rc < 0) - return rc; + goto feat_ecs; if (!(feat_entry.attr_flags & 
CXL_FEAT_ENTRY_FLAG_CHANGABLE))
-			return -EOPNOTSUPP;
+			goto feat_ecs;
 	}
 
 	cxl_ps_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ps_ctx), GFP_KERNEL);
-	if (!cxl_ps_ctx)
-		return -ENOMEM;
-
+	if (!cxl_ps_ctx) {
+		if (cxlr)
+			return -ENOMEM;
+		goto feat_ecs;
+	}
 	*cxl_ps_ctx = (struct cxl_patrol_scrub_context) {
 		.get_feat_size = feat_entry.get_feat_size,
 		.set_feat_size = feat_entry.set_feat_size,
@@ -377,6 +795,43 @@ int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr)
 	ras_features[num_ras_features].ctx = cxl_ps_ctx;
 	num_ras_features++;
 
+feat_ecs:
+	if (!cxlr) {
+		rc = cxl_get_supported_feature_entry(mds, &cxl_ecs_uuid,
+						     &feat_entry);
+		if (rc < 0)
+			goto feat_register;
+
+		if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE))
+			goto feat_register;
+		num_media_frus = (feat_entry.get_feat_size - sizeof(struct cxl_ecs_rd_attrs)) /
+				 sizeof(struct cxl_ecs_fru_rd_attrs);
+		if (!num_media_frus)
+			goto feat_register;
+
+		cxl_ecs_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ecs_ctx),
+					   GFP_KERNEL);
+		if (!cxl_ecs_ctx)
+			goto feat_register;
+		*cxl_ecs_ctx = (struct cxl_ecs_context) {
+			.get_feat_size = feat_entry.get_feat_size,
+			.set_feat_size = feat_entry.set_feat_size,
+			.get_version = feat_entry.get_feat_ver,
+			.set_version = feat_entry.set_feat_ver,
+			.set_effects = feat_entry.set_effects,
+			.num_media_frus = num_media_frus,
+			.cxlmd = cxlmd,
+		};
+
+		ras_features[num_ras_features].ft_type = RAS_FEAT_ECS;
+		ras_features[num_ras_features].ecs_ops = &cxl_ecs_ops;
+		ras_features[num_ras_features].ctx = cxl_ecs_ctx;
+		ras_features[num_ras_features].ecs_info.num_media_frus =
+			num_media_frus;
+		num_ras_features++;
+	}
+
+feat_register:
 	return edac_dev_register(&cxlmd->dev, cxl_dev_name, NULL,
				 num_ras_features, ras_features);
 }
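For context, ECS control usage is expected to mirror the scrub examples in edac-scrub.rst; the session below is purely illustrative, with the attribute file names inferred from the edac_ecs_ops table above (threshold, mode, reset) rather than quoted from an ABI document:

root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/ecs_fru0/threshold
256
root@localhost:~# echo 1024 > /sys/bus/edac/devices/cxl_mem0/ecs_fru0/threshold
root@localhost:~# echo 0 > /sys/bus/edac/devices/cxl_mem0/ecs_fru0/mode
root@localhost:~# echo 1 > /sys/bus/edac/devices/cxl_mem0/ecs_fru0/reset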
From patchwork Wed Oct 9 12:41:13 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 834015
Subject: [PATCH v13 12/18] platform: Add __free() based cleanup function for platform_device_put
Date: Wed, 9 Oct 2024 13:41:13 +0100
Message-ID: <20241009124120.1124-13-shiju.jose@huawei.com>
In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com>
References: <20241009124120.1124-1-shiju.jose@huawei.com>
From: Jonathan Cameron

Add a __free() based cleanup function for platform_device_put().

Signed-off-by: Jonathan Cameron
Signed-off-by: Shiju Jose
---
 include/linux/platform_device.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/platform_device.h b/include/linux/platform_device.h
index d422db6eec63..606533b88f44 100644
--- a/include/linux/platform_device.h
+++ b/include/linux/platform_device.h
@@ -232,6 +232,7 @@ extern int platform_device_add_data(struct platform_device *pdev,
 extern int platform_device_add(struct platform_device *pdev);
 extern void platform_device_del(struct platform_device *pdev);
 extern void platform_device_put(struct platform_device *pdev);
+DEFINE_FREE(platform_device_put, struct platform_device *, if (_T) platform_device_put(_T))
 
 struct platform_driver {
 	int (*probe)(struct platform_device *);
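For illustration, a typical user of this cleanup helper might look like the following sketch (the device name is a placeholder):

	/* Sketch: pdev is put automatically on every early-return path. */
	static struct platform_device *example_register_device(void)
	{
		int ret;
		struct platform_device *pdev __free(platform_device_put) =
			platform_device_alloc("example-dev", PLATFORM_DEVID_NONE);

		if (!pdev)
			return ERR_PTR(-ENOMEM);

		ret = platform_device_add(pdev);
		if (ret)
			return ERR_PTR(ret);

		/* Success: transfer the reference to the caller. */
		return no_free_ptr(pdev);
	}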
From patchwork Wed Oct 9 12:41:14 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 834193
Subject: [PATCH v13 13/18] ACPI:RAS2: Add ACPI RAS2 driver
Date: Wed, 9 Oct 2024 13:41:14 +0100
Message-ID: <20241009124120.1124-14-shiju.jose@huawei.com>
In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com>
References: <20241009124120.1124-1-shiju.jose@huawei.com>
From: Shiju Jose

Add support for the ACPI RAS2 feature table (RAS2) defined in the ACPI 6.5 Specification, section 5.2.21.

The driver contains the RAS2 init code, which extracts the RAS2 table and adds a platform device for each memory feature; these devices bind to the RAS2 memory driver. The driver uses the PCC mailbox to communicate with the ACPI HW, and adds OSPM interfaces to send RAS2 commands.

Co-developed-by: A Somasundaram
Signed-off-by: A Somasundaram
Co-developed-by: Jonathan Cameron
Signed-off-by: Jonathan Cameron
Signed-off-by: Shiju Jose
---
 drivers/acpi/Kconfig     |  10 +
 drivers/acpi/Makefile    |   1 +
 drivers/acpi/ras2.c      | 391 +++++++++++++++++++++++++++++++++
 include/acpi/ras2_acpi.h |  60 ++++++
 4 files changed, 462 insertions(+)
 create mode 100644 drivers/acpi/ras2.c
 create mode 100644 include/acpi/ras2_acpi.h

diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index e3a7c2aedd5f..482080f1f0c5 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -284,6 +284,16 @@ config ACPI_CPPC_LIB
 	  If your platform does not support CPPC in firmware,
 	  leave this option disabled.
 
+config ACPI_RAS2
+	bool "ACPI RAS2 driver"
+	select MAILBOX
+	select PCC
+	help
+	  The driver adds support for the ACPI RAS2 feature table (it
+	  extracts the RAS2 table from the OS system tables) and OSPM
+	  interfaces to send RAS2 commands via a PCC mailbox subspace.
+	  The driver adds a platform device for each RAS2 memory feature,
+	  which binds to the RAS2 memory driver.
+ config ACPI_PROCESSOR tristate "Processor" depends on X86 || ARM64 || LOONGARCH || RISCV diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile index 61ca4afe83dc..84e2a2519bae 100644 --- a/drivers/acpi/Makefile +++ b/drivers/acpi/Makefile @@ -100,6 +100,7 @@ obj-$(CONFIG_ACPI_EC_DEBUGFS) += ec_sys.o obj-$(CONFIG_ACPI_BGRT) += bgrt.o obj-$(CONFIG_ACPI_CPPC_LIB) += cppc_acpi.o obj-$(CONFIG_ACPI_SPCR_TABLE) += spcr.o +obj-$(CONFIG_ACPI_RAS2) += ras2.o obj-$(CONFIG_ACPI_DEBUGGER_USER) += acpi_dbg.o obj-$(CONFIG_ACPI_PPTT) += pptt.o obj-$(CONFIG_ACPI_PFRUT) += pfr_update.o pfr_telemetry.o diff --git a/drivers/acpi/ras2.c b/drivers/acpi/ras2.c new file mode 100644 index 000000000000..5daf1510d19e --- /dev/null +++ b/drivers/acpi/ras2.c @@ -0,0 +1,391 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Implementation of the ACPI RAS2 driver. + * + * Copyright (c) 2024 HiSilicon Limited. + * + * Support for RAS2 - ACPI 6.5 Specification, section 5.2.21 + * + * The driver contains the ACPI RAS2 init, which extracts the ACPI RAS2 table + * and gets the PCC channel subspace for communicating with the ACPI compliant + * HW platform which supports ACPI RAS2. The driver adds platform devices + * for each RAS2 memory feature, which bind to the memory ACPI RAS2 driver. + */ + +#define pr_fmt(fmt) "ACPI RAS2: " fmt + +#include +#include +#include +#include +#include +#include + +/* + * Arbitrary retries for PCC commands because the + * remote processor could be much slower to reply. + */ +#define RAS2_NUM_RETRIES 600 + +#define RAS2_FEATURE_TYPE_MEMORY 0x00 + +/* global variables for the RAS2 PCC subspaces */ +static DEFINE_MUTEX(ras2_pcc_subspace_lock); +static LIST_HEAD(ras2_pcc_subspaces); + +static int ras2_report_cap_error(u32 cap_status) +{ + switch (cap_status) { + case ACPI_RAS2_NOT_VALID: + case ACPI_RAS2_NOT_SUPPORTED: + return -EPERM; + case ACPI_RAS2_BUSY: + return -EBUSY; + case ACPI_RAS2_FAILED: + case ACPI_RAS2_ABORTED: + case ACPI_RAS2_INVALID_DATA: + return -EINVAL; + default: /* 0 or other, Success */ + return 0; + } +} + +static int ras2_check_pcc_chan(struct ras2_pcc_subspace *pcc_subspace) +{ + struct acpi_ras2_shared_memory __iomem *generic_comm_base = pcc_subspace->pcc_comm_addr; + ktime_t next_deadline = ktime_add(ktime_get(), pcc_subspace->deadline); + u32 cap_status; + u16 status; + int ret; + + while (!ktime_after(ktime_get(), next_deadline)) { + /* + * As per the ACPI spec, the PCC space will be initialized by the + * platform and should have set the command completion bit when + * PCC can be used by OSPM + */ + status = readw_relaxed(&generic_comm_base->status); + if (status & RAS2_PCC_CMD_ERROR) { + cap_status = readw_relaxed(&generic_comm_base->set_capabilities_status); + ret = ras2_report_cap_error(cap_status); + + status &= ~RAS2_PCC_CMD_ERROR; + writew_relaxed(status, &generic_comm_base->status); + return ret; + } + if (status & RAS2_PCC_CMD_COMPLETE) + return 0; + /* + * Reducing the bus traffic in case this loop takes longer than + * a few retries.
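+ * The 10ms sleep only throttles the polling rate; the total wait + * remains bounded by next_deadline, which is derived above from the + * PCC subspace nominal latency.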
+ */ + msleep(10); + } + + return -EIO; +} + +/** + * ras2_send_pcc_cmd() - Send RAS2 command via PCC channel + * @ras2_ctx: pointer to the RAS2 context structure + * @cmd: command to send + * + * Returns: 0 on success, an error otherwise + */ +int ras2_send_pcc_cmd(struct ras2_scrub_ctx *ras2_ctx, u16 cmd) +{ + struct ras2_pcc_subspace *pcc_subspace = ras2_ctx->pcc_subspace; + struct acpi_ras2_shared_memory *generic_comm_base = pcc_subspace->pcc_comm_addr; + static ktime_t last_cmd_cmpl_time, last_mpar_reset; + struct mbox_chan *pcc_channel; + unsigned int time_delta; + static int mpar_count; + int ret; + + guard(mutex)(&ras2_pcc_subspace_lock); + ret = ras2_check_pcc_chan(pcc_subspace); + if (ret < 0) + return ret; + pcc_channel = pcc_subspace->pcc_chan->mchan; + + /* + * Handle the Minimum Request Turnaround Time(MRTT) + * "The minimum amount of time that OSPM must wait after the completion + * of a command before issuing the next command, in microseconds" + */ + if (pcc_subspace->pcc_mrtt) { + time_delta = ktime_us_delta(ktime_get(), last_cmd_cmpl_time); + if (pcc_subspace->pcc_mrtt > time_delta) + udelay(pcc_subspace->pcc_mrtt - time_delta); + } + + /* + * Handle the non-zero Maximum Periodic Access Rate(MPAR) + * "The maximum number of periodic requests that the subspace channel can + * support, reported in commands per minute. 0 indicates no limitation." + * + * This parameter should be ideally zero or large enough so that it can + * handle maximum number of requests that all the cores in the system can + * collectively generate. If it is not, we will follow the spec and just + * not send the request to the platform after hitting the MPAR limit in + * any 60s window + */ + if (pcc_subspace->pcc_mpar) { + if (mpar_count == 0) { + time_delta = ktime_ms_delta(ktime_get(), last_mpar_reset); + if (time_delta < 60 * MSEC_PER_SEC) { + dev_dbg(ras2_ctx->dev, + "PCC cmd not sent due to MPAR limit"); + return -EIO; + } + last_mpar_reset = ktime_get(); + mpar_count = pcc_subspace->pcc_mpar; + } + mpar_count--; + } + + /* Write to the shared comm region. */ + writew_relaxed(cmd, &generic_comm_base->command); + + /* Flip CMD COMPLETE bit */ + writew_relaxed(0, &generic_comm_base->status); + + /* Ring doorbell */ + ret = mbox_send_message(pcc_channel, &cmd); + if (ret < 0) { + dev_err(ras2_ctx->dev, + "Err sending PCC mbox message. cmd:%d, ret:%d\n", + cmd, ret); + return ret; + } + + /* + * If Minimum Request Turnaround Time is non-zero, we need + * to record the completion time of both READ and WRITE + * command for proper handling of MRTT, so we need to check + * for pcc_mrtt in addition to CMD_READ + */ + if (cmd == RAS2_PCC_CMD_EXEC || pcc_subspace->pcc_mrtt) { + ret = ras2_check_pcc_chan(pcc_subspace); + if (pcc_subspace->pcc_mrtt) + last_cmd_cmpl_time = ktime_get(); + } + + if (pcc_channel->mbox->txdone_irq) + mbox_chan_txdone(pcc_channel, ret); + else + mbox_client_txdone(pcc_channel, ret); + + return ret >= 0 ? 
0 : ret; +} +EXPORT_SYMBOL_GPL(ras2_send_pcc_cmd); + +static int ras2_register_pcc_channel(struct device *dev, struct ras2_scrub_ctx *ras2_ctx, + int pcc_subspace_id) +{ + struct acpi_pcct_hw_reduced *ras2_ss; + struct mbox_client *ras2_mbox_cl; + struct pcc_mbox_chan *pcc_chan; + struct ras2_pcc_subspace *pcc_subspace; + + if (pcc_subspace_id < 0) + return -EINVAL; + + mutex_lock(&ras2_pcc_subspace_lock); + list_for_each_entry(pcc_subspace, &ras2_pcc_subspaces, elem) { + if (pcc_subspace->pcc_subspace_id == pcc_subspace_id) { + ras2_ctx->pcc_subspace = pcc_subspace; + pcc_subspace->ref_count++; + mutex_unlock(&ras2_pcc_subspace_lock); + return 0; + } + } + mutex_unlock(&ras2_pcc_subspace_lock); + + pcc_subspace = kzalloc(sizeof(*pcc_subspace), GFP_KERNEL); + if (!pcc_subspace) + return -ENOMEM; + pcc_subspace->pcc_subspace_id = pcc_subspace_id; + ras2_mbox_cl = &pcc_subspace->mbox_client; + ras2_mbox_cl->dev = dev; + ras2_mbox_cl->knows_txdone = true; + + pcc_chan = pcc_mbox_request_channel(ras2_mbox_cl, pcc_subspace_id); + if (IS_ERR(pcc_chan)) { + kfree(pcc_subspace); + return PTR_ERR(pcc_chan); + } + pcc_subspace->pcc_chan = pcc_chan; + ras2_ss = pcc_chan->mchan->con_priv; + pcc_subspace->comm_base_addr = ras2_ss->base_address; + + /* + * ras2_ss->latency is just a Nominal value. In reality + * the remote processor could be much slower to reply. + * So add an arbitrary amount of wait on top of Nominal. + */ + pcc_subspace->deadline = ns_to_ktime(RAS2_NUM_RETRIES * ras2_ss->latency * + NSEC_PER_USEC); + pcc_subspace->pcc_mrtt = ras2_ss->min_turnaround_time; + pcc_subspace->pcc_mpar = ras2_ss->max_access_rate; + pcc_subspace->pcc_comm_addr = acpi_os_ioremap(pcc_subspace->comm_base_addr, + ras2_ss->length); + if (!pcc_subspace->pcc_comm_addr) { + pcc_mbox_free_channel(pcc_subspace->pcc_chan); + kfree(pcc_subspace); + return -ENOMEM; + } + /* Mark the channel as acquired so that it is only requested once. */ + pcc_subspace->pcc_channel_acquired = true; + + mutex_lock(&ras2_pcc_subspace_lock); + list_add(&pcc_subspace->elem, &ras2_pcc_subspaces); + pcc_subspace->ref_count++; + mutex_unlock(&ras2_pcc_subspace_lock); + ras2_ctx->pcc_subspace = pcc_subspace; + + return 0; +} + +static void ras2_unregister_pcc_channel(void *ctx) +{ + struct ras2_scrub_ctx *ras2_ctx = ctx; + struct ras2_pcc_subspace *pcc_subspace = ras2_ctx->pcc_subspace; + + if (!pcc_subspace || !pcc_subspace->pcc_chan) + return; + + guard(mutex)(&ras2_pcc_subspace_lock); + if (pcc_subspace->ref_count > 0) + pcc_subspace->ref_count--; + if (!pcc_subspace->ref_count) { + list_del(&pcc_subspace->elem); + pcc_mbox_free_channel(pcc_subspace->pcc_chan); + kfree(pcc_subspace); + } +} + +/** + * devm_ras2_register_pcc_channel() - Register RAS2 PCC channel + * @dev: pointer to the RAS2 device + * @ras2_ctx: pointer to the RAS2 context structure + * @pcc_subspace_id: identifier of the RAS2 PCC channel.
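+ * + * Requests the PCC channel for @pcc_subspace_id, reusing an already acquired + * subspace where possible, and registers a devm action so that the channel + * reference is dropped automatically when @dev is unbound.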
+ * + * Returns: 0 on success, an error otherwise + */ +int devm_ras2_register_pcc_channel(struct device *dev, struct ras2_scrub_ctx *ras2_ctx, + int pcc_subspace_id) +{ + int ret; + + ret = ras2_register_pcc_channel(dev, ras2_ctx, pcc_subspace_id); + if (ret) + return ret; + + return devm_add_action_or_reset(dev, ras2_unregister_pcc_channel, ras2_ctx); +} +EXPORT_SYMBOL_NS_GPL(devm_ras2_register_pcc_channel, ACPI_RAS2); + +static struct platform_device *ras2_add_platform_device(char *name, int channel) +{ + int ret; + struct platform_device *pdev __free(platform_device_put) = + platform_device_alloc(name, PLATFORM_DEVID_AUTO); + if (!pdev) + return ERR_PTR(-ENOMEM); + + ret = platform_device_add_data(pdev, &channel, sizeof(channel)); + if (ret) + return ERR_PTR(ret); + + ret = platform_device_add(pdev); + if (ret) + return ERR_PTR(ret); + + return_ptr(pdev); +} + +static int __init ras2_acpi_init(void) +{ + struct acpi_table_header *pAcpiTable = NULL; + struct acpi_ras2_pcc_desc *pcc_desc_list; + struct acpi_table_ras2 *pRas2Table; + struct platform_device *pdev; + int pcc_subspace_id; + acpi_size ras2_size; + acpi_status status; + u8 count = 0, i; + int ret; + + status = acpi_get_table("RAS2", 0, &pAcpiTable); + if (ACPI_FAILURE(status) || !pAcpiTable) { + pr_err("ACPI RAS2 driver failed to initialize, get table failed\n"); + return -EINVAL; + } + + ras2_size = pAcpiTable->length; + if (ras2_size < sizeof(struct acpi_table_ras2)) { + pr_err("ACPI RAS2 table present but broken (too short #1)\n"); + ret = -EINVAL; + goto free_ras2_table; + } + + pRas2Table = (struct acpi_table_ras2 *)pAcpiTable; + if (pRas2Table->num_pcc_descs <= 0) { + pr_err("ACPI RAS2 table does not contain PCC descriptors\n"); + ret = -EINVAL; + goto free_ras2_table; + } + + struct platform_device **pdev_list __free(kfree) = + kcalloc(pRas2Table->num_pcc_descs, sizeof(*pdev_list), + GFP_KERNEL); + if (!pdev_list) { + ret = -ENOMEM; + goto free_ras2_table; + } + + pcc_desc_list = (struct acpi_ras2_pcc_desc *)(pRas2Table + 1); + /* Double scan for the case of only one actual controller */ + pcc_subspace_id = -1; + count = 0; + for (i = 0; i < pRas2Table->num_pcc_descs; i++, pcc_desc_list++) { + if (pcc_desc_list->feature_type != RAS2_FEATURE_TYPE_MEMORY) + continue; + if (pcc_subspace_id == -1) { + pcc_subspace_id = pcc_desc_list->channel_id; + count++; + } + if (pcc_desc_list->channel_id != pcc_subspace_id) + count++; + } + if (count == 1) { + pdev = ras2_add_platform_device("acpi_ras2", pcc_subspace_id); + if (IS_ERR(pdev)) { + ret = PTR_ERR(pdev); + goto free_ras2_table; + } + pdev_list[0] = pdev; + acpi_put_table(pAcpiTable); + return 0; + } + + /* Rewind to the first descriptor for the second scan. */ + pcc_desc_list = (struct acpi_ras2_pcc_desc *)(pRas2Table + 1); + count = 0; + for (i = 0; i < pRas2Table->num_pcc_descs; i++, pcc_desc_list++) { + if (pcc_desc_list->feature_type != RAS2_FEATURE_TYPE_MEMORY) + continue; + pcc_subspace_id = pcc_desc_list->channel_id; + /* Add the platform device and bind ACPI RAS2 memory driver */ + pdev = ras2_add_platform_device("acpi_ras2", pcc_subspace_id); + if (IS_ERR(pdev)) { + ret = PTR_ERR(pdev); + goto free_ras2_pdev; + } + pdev_list[count++] = pdev; + } + + acpi_put_table(pAcpiTable); + return 0; + +free_ras2_pdev: + for (i = 0; i < count; i++) + platform_device_unregister(pdev_list[i]); + +free_ras2_table: + acpi_put_table(pAcpiTable); + + return ret; +} +late_initcall(ras2_acpi_init); diff --git a/include/acpi/ras2_acpi.h b/include/acpi/ras2_acpi.h new file mode 100644 index 000000000000..edfca253d88a --- /dev/null +++ b/include/acpi/ras2_acpi.h @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * RAS2 ACPI driver header file + * + * (C) Copyright 2014, 2015
Hewlett-Packard Enterprises + * + * Copyright (c) 2024 HiSilicon Limited + */ + +#ifndef _RAS2_ACPI_H +#define _RAS2_ACPI_H + +#include +#include +#include +#include + +#define RAS2_PCC_CMD_COMPLETE BIT(0) +#define RAS2_PCC_CMD_ERROR BIT(2) + +/* RAS2 specific PCC commands */ +#define RAS2_PCC_CMD_EXEC 0x01 + +struct device; + +/* Data structures for PCC communication and RAS2 table */ +struct pcc_mbox_chan; + +struct ras2_pcc_subspace { + int pcc_subspace_id; + struct mbox_client mbox_client; + struct pcc_mbox_chan *pcc_chan; + struct acpi_ras2_shared_memory __iomem *pcc_comm_addr; + u64 comm_base_addr; + bool pcc_channel_acquired; + ktime_t deadline; + unsigned int pcc_mpar; + unsigned int pcc_mrtt; + struct list_head elem; + u16 ref_count; +}; + +struct ras2_scrub_ctx { + struct device *dev; + struct ras2_pcc_subspace *pcc_subspace; + int id; + u8 instance; + struct device *scrub_dev; + bool bg; + u64 base, size; + u8 scrub_cycle_hrs, min_scrub_cycle, max_scrub_cycle; + /* Lock to provide mutually exclusive access to PCC channel */ + struct mutex lock; +}; + +int ras2_send_pcc_cmd(struct ras2_scrub_ctx *ras2_ctx, u16 cmd); +int devm_ras2_register_pcc_channel(struct device *dev, struct ras2_scrub_ctx *ras2_ctx, + int pcc_subspace_id); + +#endif /* _RAS2_ACPI_H */ From patchwork Wed Oct 9 12:41:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 834014 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BF57F198A32; Wed, 9 Oct 2024 12:43:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=185.176.79.56 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477823; cv=none; b=tp+1WzCoqKJgq/OvOwR5ikrxRWtB7aw/2/z5B2FPzQevYrcBpIs8HqrBnoiiB/ssmgA0EWJW/gvWTN02PxYTkYtuQuFpmLCDKoJVjOEVUnBr3VwRPVDtEiEcswmR/OWTAyI004feJAe99MYkMS+C+MqZyUhEuQtsnQ6OWBn3fTc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477823; c=relaxed/simple; bh=exVNWYUAl0xZAEnakz5RPUTwn45GQ+/AuAEht8i/a3A=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=I0MABIaLMJ+h9YPqV3n91/2j7LXbeCGgKF3SmYgPWdCKJjnj7TL8oH866HjirXTpsxJjCqJmDu1N68KCa3srsjejWuoi4bwZ07pHinyc7YLIfT0idebhOH1dL7gnsaoQxgntcjPvZr2sNvZixxXJu5COR10tG4TXj8AolX60B1Y= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=185.176.79.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.231]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4XNswZ1RY2z6K72b; Wed, 9 Oct 2024 20:42:22 +0800 (CST) Received: from frapeml500007.china.huawei.com (unknown [7.182.85.172]) by mail.maildlp.com (Postfix) with ESMTPS id D6FFE140CB1; Wed, 9 Oct 2024 20:43:39 +0800 (CST) Received: from P_UKIT01-A7bmah.china.huawei.com (10.48.152.209) by frapeml500007.china.huawei.com (7.182.85.172) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.39; Wed, 9 Oct 2024 14:43:37 +0200 From: To: , , , , CC: , , , , , , , , , , , , 
Subject: [PATCH v13 14/18] ras: mem: Add memory ACPI RAS2 driver Date: Wed, 9 Oct 2024 13:41:15 +0100 Message-ID: <20241009124120.1124-15-shiju.jose@huawei.com> X-Mailer: git-send-email 2.43.0.windows.1 In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com> References: <20241009124120.1124-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To frapeml500007.china.huawei.com (7.182.85.172) From: Shiju Jose The memory ACPI RAS2 driver binds to the platform devices added by the ACPI RAS2 table parser and uses a PCC subspace for communicating with the ACPI compliant platform. A device with the ACPI RAS2 scrub feature registers with the EDAC device driver, which retrieves the scrub descriptor from the EDAC scrub driver and exposes the scrub control attributes for the RAS2 scrub instance to userspace in /sys/bus/edac/devices/acpi_ras_mem0/scrubX/. Co-developed-by: Jonathan Cameron Signed-off-by: Jonathan Cameron Signed-off-by: Shiju Jose --- Documentation/edac/edac-scrub.rst | 41 +++ drivers/ras/Kconfig | 10 + drivers/ras/Makefile | 1 + drivers/ras/acpi_ras2.c | 449 ++++++++++++++++++++++++++++++ 4 files changed, 501 insertions(+) create mode 100644 drivers/ras/acpi_ras2.c diff --git a/Documentation/edac/edac-scrub.rst b/Documentation/edac/edac-scrub.rst index 243035957e99..483824d0872f 100644 --- a/Documentation/edac/edac-scrub.rst +++ b/Documentation/edac/edac-scrub.rst @@ -72,3 +72,44 @@ root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/enable_background root@localhost:~# echo 0 > /sys/bus/edac/devices/cxl_region0/scrub0/enable_background root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/enable_background 0 + +2. RAS2 +2.1 On demand scrubbing for a specific memory region.
+root@localhost:~# echo 0x120000 > /sys/bus/edac/devices/acpi_ras_mem0/scrub0/addr_range_base +root@localhost:~# echo 0x150000 > /sys/bus/edac/devices/acpi_ras_mem0/scrub0/addr_range_size +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/min_cycle_duration +3600 +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/max_cycle_duration +86400 +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/current_cycle_duration +36000 +root@localhost:~# echo 54000 > /sys/bus/edac/devices/acpi_ras_mem0/scrub0/current_cycle_duration +root@localhost:~# echo 1 > /sys/bus/edac/devices/acpi_ras_mem0/scrub0/enable_on_demand +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/enable_on_demand +1 +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/current_cycle_duration +54000 +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/addr_range_base +0x120000 +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/addr_range_size +0x150000 +root@localhost:~# echo 0 > /sys/bus/edac/devices/acpi_ras_mem0/scrub0/enable_on_demand +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/enable_on_demand +0 + +2.2 Background scrubbing the entire memory +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/min_cycle_duration +3600 +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/max_cycle_duration +86400 +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/current_cycle_duration +36000 +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/enable_background +0 +root@localhost:~# echo 10800 > /sys/bus/edac/devices/acpi_ras_mem0/scrub0/current_cycle_duration +root@localhost:~# echo 1 > /sys/bus/edac/devices/acpi_ras_mem0/scrub0/enable_background +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/enable_background +1 +root@localhost:~# cat /sys/bus/edac/devices/acpi_ras_mem0/scrub0/current_cycle_duration +10800 +root@localhost:~# echo 0 > /sys/bus/edac/devices/acpi_ras_mem0/scrub0/enable_background diff --git a/drivers/ras/Kconfig b/drivers/ras/Kconfig index fc4f4bb94a4c..b77790bdc73a 100644 --- a/drivers/ras/Kconfig +++ b/drivers/ras/Kconfig @@ -46,4 +46,14 @@ config RAS_FMPM Memory will be retired during boot time and run time depending on platform-specific policies. +config MEM_ACPI_RAS2 + tristate "Memory ACPI RAS2 driver" + depends on ACPI_RAS2 + depends on EDAC + help + The driver binds to the platform devices added by the ACPI RAS2 + table parser. It uses a PCC channel subspace for communicating with + the ACPI compliant platform to provide control of the memory scrub + parameters to the user via the EDAC scrub interface. + endif diff --git a/drivers/ras/Makefile b/drivers/ras/Makefile index 11f95d59d397..a0e6e903d6b0 100644 --- a/drivers/ras/Makefile +++ b/drivers/ras/Makefile @@ -2,6 +2,7 @@ obj-$(CONFIG_RAS) += ras.o obj-$(CONFIG_DEBUG_FS) += debugfs.o obj-$(CONFIG_RAS_CEC) += cec.o +obj-$(CONFIG_MEM_ACPI_RAS2) += acpi_ras2.o obj-$(CONFIG_RAS_FMPM) += amd/fmpm.o obj-y += amd/atl/ diff --git a/drivers/ras/acpi_ras2.c b/drivers/ras/acpi_ras2.c new file mode 100644 index 000000000000..a42acf55d911 --- /dev/null +++ b/drivers/ras/acpi_ras2.c @@ -0,0 +1,449 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * ACPI RAS2 memory driver + * + * Copyright (c) 2024 HiSilicon Limited.
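+ * + * The driver binds to the platform devices created by the ACPI RAS2 + * table parser and registers the RAS2 patrol scrub feature with the + * EDAC device driver.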
+ * + */ + +#define pr_fmt(fmt) "MEMORY ACPI RAS2: " fmt + +#include +#include +#include +#include + +#define RAS2_DEV_NUM_RAS_FEATURES 1 + +#define RAS2_SUPPORT_HW_PATROL_SCRUB BIT(0) +#define RAS2_TYPE_PATROL_SCRUB 0x0000 + +#define RAS2_GET_PATROL_PARAMETERS 0x01 +#define RAS2_START_PATROL_SCRUBBER 0x02 +#define RAS2_STOP_PATROL_SCRUBBER 0x03 + +#define RAS2_PATROL_SCRUB_SCHRS_IN_MASK GENMASK(15, 8) +#define RAS2_PATROL_SCRUB_EN_BACKGROUND BIT(0) +#define RAS2_PATROL_SCRUB_SCHRS_OUT_MASK GENMASK(7, 0) +#define RAS2_PATROL_SCRUB_MIN_SCHRS_OUT_MASK GENMASK(15, 8) +#define RAS2_PATROL_SCRUB_MAX_SCHRS_OUT_MASK GENMASK(23, 16) +#define RAS2_PATROL_SCRUB_FLAG_SCRUBBER_RUNNING BIT(0) + +#define RAS2_SCRUB_NAME_LEN 128 +#define RAS2_HOUR_IN_SECS 3600 + +struct acpi_ras2_ps_shared_mem { + struct acpi_ras2_shared_memory common; + struct acpi_ras2_patrol_scrub_parameter params; +}; + +static int ras2_is_patrol_scrub_support(struct ras2_scrub_ctx *ras2_ctx) +{ + struct acpi_ras2_shared_memory __iomem *common = (void *) + ras2_ctx->pcc_subspace->pcc_comm_addr; + + guard(mutex)(&ras2_ctx->lock); + common->set_capabilities[0] = 0; + + return common->features[0] & RAS2_SUPPORT_HW_PATROL_SCRUB; +} + +static int ras2_update_patrol_scrub_params_cache(struct ras2_scrub_ctx *ras2_ctx) +{ + struct acpi_ras2_ps_shared_mem __iomem *ps_sm = (void *) + ras2_ctx->pcc_subspace->pcc_comm_addr; + int ret; + + ps_sm->common.set_capabilities[0] = RAS2_SUPPORT_HW_PATROL_SCRUB; + ps_sm->params.patrol_scrub_command = RAS2_GET_PATROL_PARAMETERS; + + ret = ras2_send_pcc_cmd(ras2_ctx, RAS2_PCC_CMD_EXEC); + if (ret) { + dev_err(ras2_ctx->dev, "failed to read parameters\n"); + return ret; + } + + ras2_ctx->min_scrub_cycle = FIELD_GET(RAS2_PATROL_SCRUB_MIN_SCHRS_OUT_MASK, + ps_sm->params.scrub_params_out); + ras2_ctx->max_scrub_cycle = FIELD_GET(RAS2_PATROL_SCRUB_MAX_SCHRS_OUT_MASK, + ps_sm->params.scrub_params_out); + if (!ras2_ctx->bg) { + ras2_ctx->base = ps_sm->params.actual_address_range[0]; + ras2_ctx->size = ps_sm->params.actual_address_range[1]; + } + ras2_ctx->scrub_cycle_hrs = FIELD_GET(RAS2_PATROL_SCRUB_SCHRS_OUT_MASK, + ps_sm->params.scrub_params_out); + + return 0; +} + +/* Context - lock must be held */ +static int ras2_get_patrol_scrub_running(struct ras2_scrub_ctx *ras2_ctx, + bool *running) +{ + struct acpi_ras2_ps_shared_mem __iomem *ps_sm = (void *) + ras2_ctx->pcc_subspace->pcc_comm_addr; + int ret; + + ps_sm->common.set_capabilities[0] = RAS2_SUPPORT_HW_PATROL_SCRUB; + ps_sm->params.patrol_scrub_command = RAS2_GET_PATROL_PARAMETERS; + + ret = ras2_send_pcc_cmd(ras2_ctx, RAS2_PCC_CMD_EXEC); + if (ret) { + dev_err(ras2_ctx->dev, "failed to read parameters\n"); + return ret; + } + + *running = ps_sm->params.flags & RAS2_PATROL_SCRUB_FLAG_SCRUBBER_RUNNING; + + return 0; +} + +static int ras2_hw_scrub_read_min_scrub_cycle(struct device *dev, void *drv_data, + u32 *min) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + + *min = ras2_ctx->min_scrub_cycle * RAS2_HOUR_IN_SECS; + + return 0; +} + +static int ras2_hw_scrub_read_max_scrub_cycle(struct device *dev, void *drv_data, + u32 *max) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + + *max = ras2_ctx->max_scrub_cycle * RAS2_HOUR_IN_SECS; + + return 0; +} + +static int ras2_hw_scrub_cycle_read(struct device *dev, void *drv_data, + u32 *scrub_cycle_secs) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + + *scrub_cycle_secs = ras2_ctx->scrub_cycle_hrs * RAS2_HOUR_IN_SECS; + + return 0; +} + +static int ras2_hw_scrub_cycle_write(struct device *dev, void *drv_data, + u32 scrub_cycle_secs) +{ + u8 scrub_cycle_hrs = scrub_cycle_secs / RAS2_HOUR_IN_SECS; + struct ras2_scrub_ctx *ras2_ctx = drv_data; + bool running; + int ret; + + guard(mutex)(&ras2_ctx->lock); + ret = ras2_get_patrol_scrub_running(ras2_ctx, &running); + if (ret) + return ret; + + if (running) + return -EBUSY; + + if (scrub_cycle_hrs < ras2_ctx->min_scrub_cycle || + scrub_cycle_hrs > ras2_ctx->max_scrub_cycle) + return -EINVAL; + + ras2_ctx->scrub_cycle_hrs = scrub_cycle_hrs; + + return 0; +} + +static int ras2_hw_scrub_read_range_base(struct device *dev, void *drv_data, u64 *base) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + + /* + * When BG scrubbing is enabled the actual address range is not valid. + * Return -EBUSY until a method to retrieve the actual full PA range is found. + */ + if (ras2_ctx->bg) + return -EBUSY; + + *base = ras2_ctx->base; + + return 0; +} + +static int ras2_hw_scrub_read_range_size(struct device *dev, void *drv_data, u64 *size) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + + if (ras2_ctx->bg) + return -EBUSY; + + *size = ras2_ctx->size; + + return 0; +} + +static int ras2_hw_scrub_write_range_base(struct device *dev, void *drv_data, u64 base) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + bool running; + int ret; + + guard(mutex)(&ras2_ctx->lock); + ret = ras2_get_patrol_scrub_running(ras2_ctx, &running); + if (ret) + return ret; + + if (running) + return -EBUSY; + + if (!base) { + dev_warn(dev, "%s: Invalid address range base=0x%llx\n", + __func__, base); + return -EINVAL; + } + + ras2_ctx->base = base; + + return 0; +} + +static int ras2_hw_scrub_write_range_size(struct device *dev, void *drv_data, u64 size) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + bool running; + int ret; + + guard(mutex)(&ras2_ctx->lock); + ret = ras2_get_patrol_scrub_running(ras2_ctx, &running); + if (ret) + return ret; + + if (running) + return -EBUSY; + + if (!size) { + dev_warn(dev, "%s: Invalid address range size=0x%llx\n", + __func__, size); + return -EINVAL; + } + + ras2_ctx->size = size; + + return 0; +} + +static int ras2_hw_scrub_set_enabled_bg(struct device *dev, void *drv_data, bool enable) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + struct acpi_ras2_ps_shared_mem __iomem *ps_sm = (void *) + ras2_ctx->pcc_subspace->pcc_comm_addr; + bool running; + int ret; + + guard(mutex)(&ras2_ctx->lock); + ps_sm->common.set_capabilities[0] = RAS2_SUPPORT_HW_PATROL_SCRUB; + ret = ras2_get_patrol_scrub_running(ras2_ctx, &running); + if (ret) + return ret; + if (enable) { + if (ras2_ctx->bg || running) + return -EBUSY; + ps_sm->params.requested_address_range[0] = 0; + ps_sm->params.requested_address_range[1] = 0; + ps_sm->params.scrub_params_in &= ~RAS2_PATROL_SCRUB_SCHRS_IN_MASK; + ps_sm->params.scrub_params_in |= FIELD_PREP(RAS2_PATROL_SCRUB_SCHRS_IN_MASK, + ras2_ctx->scrub_cycle_hrs); + ps_sm->params.patrol_scrub_command = RAS2_START_PATROL_SCRUBBER; + } else { + if (!ras2_ctx->bg) + return -EPERM; + ps_sm->params.patrol_scrub_command = RAS2_STOP_PATROL_SCRUBBER; + } + ps_sm->params.scrub_params_in &= ~RAS2_PATROL_SCRUB_EN_BACKGROUND; + ps_sm->params.scrub_params_in |= FIELD_PREP(RAS2_PATROL_SCRUB_EN_BACKGROUND, + enable); + ret = ras2_send_pcc_cmd(ras2_ctx, RAS2_PCC_CMD_EXEC); + if (ret) { + dev_err(ras2_ctx->dev, "Failed to %s background scrubbing\n", + enable ? "enable" : "disable"); + return ret; + } + if (enable) { + ras2_ctx->bg = true; + /* Update the cache to account for rounding of supplied parameters and similar */ + ret = ras2_update_patrol_scrub_params_cache(ras2_ctx); + } else { + ret = ras2_update_patrol_scrub_params_cache(ras2_ctx); + ras2_ctx->bg = false; + } + + return ret; +} + +static int ras2_hw_scrub_get_enabled_bg(struct device *dev, void *drv_data, bool *enabled) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + + *enabled = ras2_ctx->bg; + + return 0; +} + +static int ras2_hw_scrub_set_enabled_od(struct device *dev, void *drv_data, bool enable) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + struct acpi_ras2_ps_shared_mem __iomem *ps_sm = (void *) + ras2_ctx->pcc_subspace->pcc_comm_addr; + bool running; + int ret; + + guard(mutex)(&ras2_ctx->lock); + ps_sm->common.set_capabilities[0] = RAS2_SUPPORT_HW_PATROL_SCRUB; + if (ras2_ctx->bg) + return -EBUSY; + ret = ras2_get_patrol_scrub_running(ras2_ctx, &running); + if (ret) + return ret; + if (enable) { + if (!ras2_ctx->base || !ras2_ctx->size) { + dev_warn(ras2_ctx->dev, + "%s: Invalid address range, base=0x%llx size=0x%llx\n", + __func__, ras2_ctx->base, ras2_ctx->size); + return -ERANGE; + } + if (running) + return -EBUSY; + ps_sm->params.scrub_params_in &= ~RAS2_PATROL_SCRUB_SCHRS_IN_MASK; + ps_sm->params.scrub_params_in |= FIELD_PREP(RAS2_PATROL_SCRUB_SCHRS_IN_MASK, + ras2_ctx->scrub_cycle_hrs); + ps_sm->params.requested_address_range[0] = ras2_ctx->base; + ps_sm->params.requested_address_range[1] = ras2_ctx->size; + ps_sm->params.scrub_params_in &= ~RAS2_PATROL_SCRUB_EN_BACKGROUND; + ps_sm->params.patrol_scrub_command = RAS2_START_PATROL_SCRUBBER; + } else { + if (!running) + return 0; + ps_sm->params.patrol_scrub_command = RAS2_STOP_PATROL_SCRUBBER; + } + + ret = ras2_send_pcc_cmd(ras2_ctx, RAS2_PCC_CMD_EXEC); + if (ret) { + dev_err(ras2_ctx->dev, "Failed to %s on-demand scrubbing\n", + enable ?
"enable" : "disable"); + return ret; + } + + return ras2_update_patrol_scrub_params_cache(ras2_ctx); +} + +static int ras2_hw_scrub_get_enabled_od(struct device *dev, void *drv_data, bool *enabled) +{ + struct ras2_scrub_ctx *ras2_ctx = drv_data; + + guard(mutex)(&ras2_ctx->lock); + if (ras2_ctx->bg) { + *enabled = false; + return 0; + } + + return ras2_get_patrol_scrub_running(ras2_ctx, enabled); +} + +static const struct edac_scrub_ops ras2_scrub_ops = { + .read_range_base = ras2_hw_scrub_read_range_base, + .read_range_size = ras2_hw_scrub_read_range_size, + .write_range_base = ras2_hw_scrub_write_range_base, + .write_range_size = ras2_hw_scrub_write_range_size, + .get_enabled_bg = ras2_hw_scrub_get_enabled_bg, + .set_enabled_bg = ras2_hw_scrub_set_enabled_bg, + .get_enabled_od = ras2_hw_scrub_get_enabled_od, + .set_enabled_od = ras2_hw_scrub_set_enabled_od, + .get_min_cycle = ras2_hw_scrub_read_min_scrub_cycle, + .get_max_cycle = ras2_hw_scrub_read_max_scrub_cycle, + .get_cycle_duration = ras2_hw_scrub_cycle_read, + .set_cycle_duration = ras2_hw_scrub_cycle_write, +}; + +static DEFINE_IDA(ras2_ida); + +static void ida_release(void *ctx) +{ + struct ras2_scrub_ctx *ras2_ctx = ctx; + + ida_free(&ras2_ida, ras2_ctx->id); +} + +static int ras2_probe(struct platform_device *pdev) +{ + struct edac_dev_feature ras_features[RAS2_DEV_NUM_RAS_FEATURES]; + char scrub_name[RAS2_SCRUB_NAME_LEN]; + struct ras2_scrub_ctx *ras2_ctx; + int num_ras_features = 0; + int ret, id; + + /* RAS2 PCC Channel and Scrub specific context */ + ras2_ctx = devm_kzalloc(&pdev->dev, sizeof(*ras2_ctx), GFP_KERNEL); + if (!ras2_ctx) + return -ENOMEM; + + ras2_ctx->dev = &pdev->dev; + mutex_init(&ras2_ctx->lock); + + ret = devm_ras2_register_pcc_channel(&pdev->dev, ras2_ctx, + *((int *)dev_get_platdata(&pdev->dev))); + if (ret < 0) { + dev_dbg(ras2_ctx->dev, + "failed to register pcc channel ret=%d\n", ret); + return ret; + } + if (!ras2_is_patrol_scrub_support(ras2_ctx)) + return -EOPNOTSUPP; + + ret = ras2_update_patrol_scrub_params_cache(ras2_ctx); + if (ret) + return ret; + + id = ida_alloc(&ras2_ida, GFP_KERNEL); + if (id < 0) + return id; + + ras2_ctx->id = id; + + ret = devm_add_action_or_reset(&pdev->dev, ida_release, ras2_ctx); + if (ret < 0) + return ret; + + snprintf(scrub_name, sizeof(scrub_name), "acpi_ras_mem%d", + ras2_ctx->id); + + ras_features[num_ras_features].ft_type = RAS_FEAT_SCRUB; + ras_features[num_ras_features].instance = ras2_ctx->instance; + ras_features[num_ras_features].scrub_ops = &ras2_scrub_ops; + ras_features[num_ras_features].ctx = ras2_ctx; + num_ras_features++; + + return edac_dev_register(&pdev->dev, scrub_name, NULL, + num_ras_features, ras_features); +} + +static const struct platform_device_id ras2_id_table[] = { + { .name = "acpi_ras2", }, + { } +}; +MODULE_DEVICE_TABLE(platform, ras2_id_table); + +static struct platform_driver ras2_driver = { + .probe = ras2_probe, + .driver = { + .name = "acpi_ras2", + }, + .id_table = ras2_id_table, +}; +module_driver(ras2_driver, platform_driver_register, platform_driver_unregister); + +MODULE_IMPORT_NS(ACPI_RAS2); +MODULE_DESCRIPTION("ACPI RAS2 memory driver"); +MODULE_LICENSE("GPL"); From patchwork Wed Oct 9 12:41:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 834192 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate 
requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B33CE1D358F; Wed, 9 Oct 2024 12:43:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=185.176.79.56 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477827; cv=none; b=cMoi8auI4L/AQpPSe2lQ5lWw4YXEButfrwzfdLuHJIvKy7dizXR0s9RCvAte4ORgDpQwkZv7pWaLSgu8FWleJMJt4t5VZoAI1Iur+/dhC5pl4PZpyzhqji8LL+8V0ndjZTRbNkbvWpsVQ7dFV8jm2LyP2PHeTCrHHEvdxrc7A34= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477827; c=relaxed/simple; bh=fD5qdbap92sS+Z8VZvkg+V5/LIecjuvoHW8ifUZZUjM=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=dgYi2ZPvstlbfsSZ3fjzeED/FQJmdNd7DO2bFLC0s5/kiXuCc0gDIQlpjEreBYWxB8xvqeO2ab+D7JP+M2aspb/8xBQSe6GTaicJBuqD+H0VElZxgt7dCQlWczh3TSFTD8FnWSOWrHyYlznInfMse4WEPU+vbTVYO5Qqri4h1dg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=185.176.79.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.231]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4XNsxm72Sbz6GD55; Wed, 9 Oct 2024 20:43:24 +0800 (CST) Received: from frapeml500007.china.huawei.com (unknown [7.182.85.172]) by mail.maildlp.com (Postfix) with ESMTPS id B5BB0140B3C; Wed, 9 Oct 2024 20:43:42 +0800 (CST) Received: from P_UKIT01-A7bmah.china.huawei.com (10.48.152.209) by frapeml500007.china.huawei.com (7.182.85.172) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.39; Wed, 9 Oct 2024 14:43:40 +0200 From: To: CC: Subject: [PATCH v13 15/18] EDAC: Add memory repair control feature Date: Wed, 9 Oct 2024 13:41:16 +0100 Message-ID: <20241009124120.1124-16-shiju.jose@huawei.com> X-Mailer: git-send-email 2.43.0.windows.1 In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com> References: <20241009124120.1124-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To frapeml500007.china.huawei.com (7.182.85.172) From: Shiju Jose Add a generic EDAC memory repair control driver to manage memory repairs in the system, e.g. PPR (Post Package Repair) and memory sparing. It supports sPPR (soft PPR), hPPR (hard PPR), and soft/hard memory sparing at cacheline/row/bank/rank granularity. A device with memory repair features registers with the EDAC device driver, which retrieves the memory repair descriptor from the EDAC memory repair driver and exposes the sysfs repair control attributes to userspace in /sys/bus/edac/devices/<dev-name>/mem_repairX/. The common memory repair control interface abstracts the control of an arbitrary memory repair functionality to a common set of functions. The sysfs memory repair attribute nodes are present only if the client driver has implemented the corresponding attribute callback function and passed the ops to the EDAC device driver during registration.
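To make the registration flow concrete, here is a minimal sketch (not part of the patch; the my_* names are hypothetical) of how a client driver could register a single repair instance with this interface:

    static int my_do_repair(struct device *dev, void *drv_data)
    {
            /* kick off the device specific repair operation here */
            return 0;
    }

    static const struct edac_mem_repair_ops my_repair_ops = {
            .do_repair = my_do_repair,
    };

    static int my_probe(struct platform_device *pdev)
    {
            static char name[] = "my_ras_dev0";
            struct edac_dev_feature feat = {
                    .ft_type = RAS_FEAT_MEM_REPAIR,
                    .instance = 0,
                    .mem_repair_ops = &my_repair_ops,
                    .ctx = NULL, /* driver private data, passed back as drv_data */
            };

            return edac_dev_register(&pdev->dev, name, NULL, 1, &feat);
    }

Since only the do_repair callback is implemented in this sketch, only the matching repair sysfs node would be made visible for the instance.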
Signed-off-by: Shiju Jose --- .../ABI/testing/sysfs-edac-mem-repair | 152 +++++++++ drivers/edac/Makefile | 2 +- drivers/edac/edac_device.c | 31 ++ drivers/edac/mem_repair.c | 317 ++++++++++++++++++ include/linux/edac.h | 67 ++++ 5 files changed, 568 insertions(+), 1 deletion(-) create mode 100644 Documentation/ABI/testing/sysfs-edac-mem-repair create mode 100644 drivers/edac/mem_repair.c diff --git a/Documentation/ABI/testing/sysfs-edac-mem-repair b/Documentation/ABI/testing/sysfs-edac-mem-repair new file mode 100644 index 000000000000..9a8712ed9d47 --- /dev/null +++ b/Documentation/ABI/testing/sysfs-edac-mem-repair @@ -0,0 +1,152 @@ +What: /sys/bus/edac/devices/<dev-name>/mem_repairX +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + The sysfs EDAC bus devices /<dev-name>/mem_repairX subdirectory + pertains to the control of memory media repair features, such as + PPR (Post Package Repair) and memory sparing, where <dev-name> + corresponds to a device registered with the EDAC device driver + for the memory repair features. + /mem_repairX belongs to either the sPPR (Soft PPR) or hPPR (Hard PPR) + flavour of the PPR feature, or to hard or soft memory sparing etc. + Memory sparing is a repair function that replaces a portion of memory + (spared memory) with a portion of functional memory, with + cacheline/row/bank/rank sparing granularities. + The sysfs memory repair attr nodes are only present if a + memory repair feature is supported. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/repair_type +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (RO) Type of the repair instance, e.g. sPPR, hPPR, cacheline/ + row/bank/rank memory sparing etc. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/persist_mode_avail +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (RO) Persist repair modes supported in the device, e.g. hard PPR + (hPPR)/hard memory sparing for a permanent memory repair and + soft PPR (sPPR)/soft memory sparing for a temporary repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/persist_mode +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (RW) Current persist repair mode. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/dpa_support +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (RO) True if the device supports DPA for the repair (PPR, memory + sparing, ...) maintenance operation. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/repair_safe_when_in_use +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (RO) True if memory media is accessible and data is retained + during the memory repair operation. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/hpa +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Set HPA (Host Physical Address) for memory repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/dpa +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Set DPA (Device Physical Address) for memory repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/nibble_mask +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Set nibble mask for memory repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/bank_group +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Set memory bank group for repair.
+ +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/bank +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Set memory bank for memory repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/rank +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Set memory rank for repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/row +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Set row for memory repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/column +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Set column for memory repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/channel +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Set channel for memory repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/sub_channel +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Set sub channel for memory repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/query +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Query whether the repair operation is supported for the + memory attributes set. Returns failure if resources are + not available to perform the repair. + +What: /sys/bus/edac/devices/<dev-name>/mem_repairX/repair +Date: Oct 2024 +KernelVersion: 6.12 +Contact: linux-edac@vger.kernel.org +Description: + (WO) Start the memory repair operation for the memory attributes + set. Returns failure if resources are not available to + perform the repair. + In some states of system configuration (e.g. before address + decoders have been configured), memory devices (e.g. CXL) + may not have an active mapping in the main host physical + address map. As such, the memory to repair must be + identified by a device specific physical addressing scheme + using a DPA. The DPA to use will be presented in related + error records.
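For illustration, a possible repair flow through this interface, in the style of the edac-scrub examples earlier in the series (the device name cxl_mem0 and the values written are examples only; the attributes actually visible depend on the callbacks the client driver implements):

root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/mem_repair0/repair_type
root@localhost:~# echo 0x300000 > /sys/bus/edac/devices/cxl_mem0/mem_repair0/dpa
root@localhost:~# echo 1 > /sys/bus/edac/devices/cxl_mem0/mem_repair0/query
root@localhost:~# echo 1 > /sys/bus/edac/devices/cxl_mem0/mem_repair0/repair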
diff --git a/drivers/edac/Makefile b/drivers/edac/Makefile index c2135b94de34..fe0be2a1c8a4 100644 --- a/drivers/edac/Makefile +++ b/drivers/edac/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_EDAC) := edac_core.o edac_core-y := edac_mc.o edac_device.o edac_mc_sysfs.o edac_core-y += edac_module.o edac_device_sysfs.o wq.o -edac_core-y += scrub.o ecs.o +edac_core-y += scrub.o ecs.o mem_repair.o edac_core-$(CONFIG_EDAC_DEBUG) += debugfs.o diff --git a/drivers/edac/edac_device.c b/drivers/edac/edac_device.c index 1743ce8e57a7..06ef09f5303d 100644 --- a/drivers/edac/edac_device.c +++ b/drivers/edac/edac_device.c @@ -576,6 +576,7 @@ static void edac_dev_release(struct device *dev) { struct edac_dev_feat_ctx *ctx = container_of(dev, struct edac_dev_feat_ctx, dev); + kfree(ctx->mem_repair); kfree(ctx->scrub); kfree(ctx->dev.groups); kfree(ctx); @@ -614,6 +615,7 @@ int edac_dev_register(struct device *parent, char *name, struct edac_dev_data *dev_data; struct edac_dev_feat_ctx *ctx; int scrub_cnt = 0, scrub_inst = 0; + int mem_repair_cnt = 0, mem_repair_inst = 0; int attr_gcnt = 0; int ret, feat; @@ -627,6 +629,10 @@ int edac_dev_register(struct device *parent, char *name, attr_gcnt++; scrub_cnt++; break; + case RAS_FEAT_MEM_REPAIR: + attr_gcnt++; + mem_repair_cnt++; + break; case RAS_FEAT_ECS: + attr_gcnt += ras_features[feat].ecs_info.num_media_frus; break; @@ -656,6 +662,14 @@ int edac_dev_register(struct device *parent, char *name, } } + if (mem_repair_cnt) { + ctx->mem_repair = kcalloc(mem_repair_cnt, sizeof(*ctx->mem_repair), GFP_KERNEL); + if (!ctx->mem_repair) { + ret = -ENOMEM; + goto groups_free; + } + } + attr_gcnt = 0; for (feat = 0; feat < num_features; feat++, ras_features++) { switch (ras_features->ft_type) { @@ -687,6 +701,22 @@ int edac_dev_register(struct device *parent, char *name, goto data_mem_free; attr_gcnt += ras_features->ecs_info.num_media_frus; break; + case RAS_FEAT_MEM_REPAIR: + if (!ras_features->mem_repair_ops) + continue; + if (mem_repair_inst != ras_features->instance) + goto data_mem_free; + dev_data = &ctx->mem_repair[mem_repair_inst]; + dev_data->instance = mem_repair_inst; + dev_data->mem_repair_ops = ras_features->mem_repair_ops; + dev_data->private = ras_features->ctx; + ret = edac_mem_repair_get_desc(parent, &ras_attr_groups[attr_gcnt], + ras_features->instance); + if (ret) + goto data_mem_free; + mem_repair_inst++; + attr_gcnt++; + break; default: ret = -EINVAL; goto data_mem_free; @@ -713,6 +743,7 @@ int edac_dev_register(struct device *parent, char *name, return devm_add_action_or_reset(parent, edac_dev_unreg, &ctx->dev); data_mem_free: + kfree(ctx->mem_repair); kfree(ctx->scrub); groups_free: kfree(ras_attr_groups); diff --git a/drivers/edac/mem_repair.c b/drivers/edac/mem_repair.c new file mode 100644 index 000000000000..c0d419fa72d4 --- /dev/null +++ b/drivers/edac/mem_repair.c @@ -0,0 +1,317 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Generic EDAC memory repair driver to control memory devices with memory + * repair features, such as Post Package Repair (PPR) and memory sparing, + * in the system. + * The common sysfs memory repair interface abstracts the control of an + * arbitrary memory repair functionality to a common set of functions. + * + * Copyright (c) 2024 HiSilicon Limited.
+ */ + +#define pr_fmt(fmt) "EDAC MEM REPAIR: " fmt + +#include + +enum edac_mem_repair_attributes { + MEM_REPAIR_TYPE, + MEM_REPAIR_PERSIST_MODE_AVAIL, + MEM_REPAIR_PERSIST_MODE, + MEM_REPAIR_DPA_SUPPORT, + MEM_REPAIR_SAFE_IN_USE, + MEM_REPAIR_HPA, + MEM_REPAIR_DPA, + MEM_REPAIR_NIBBLE_MASK, + MEM_REPAIR_BANK_GROUP, + MEM_REPAIR_BANK, + MEM_REPAIR_RANK, + MEM_REPAIR_ROW, + MEM_REPAIR_COLUMN, + MEM_REPAIR_CHANNEL, + MEM_REPAIR_SUB_CHANNEL, + MEM_REPAIR_QUERY, + MEM_DO_REPAIR, + MEM_REPAIR_MAX_ATTRS +}; + +struct edac_mem_repair_dev_attr { + struct device_attribute dev_attr; + u8 instance; +}; + +struct edac_mem_repair_context { + char name[EDAC_FEAT_NAME_LEN]; + struct edac_mem_repair_dev_attr mem_repair_dev_attr[MEM_REPAIR_MAX_ATTRS]; + struct attribute *mem_repair_attrs[MEM_REPAIR_MAX_ATTRS + 1]; + struct attribute_group group; +}; + +#define TO_MEM_REPAIR_DEV_ATTR(_dev_attr) \ + container_of(_dev_attr, struct edac_mem_repair_dev_attr, dev_attr) + +#define INIT_MEM_REPAIR_FUNC_VARS(attr) \ + u8 inst = TO_MEM_REPAIR_DEV_ATTR(attr)->instance; \ + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev); \ + const struct edac_mem_repair_ops *ops = ctx->mem_repair[inst].mem_repair_ops + +#define EDAC_MEM_REPAIR_ATTR_SHOW(attrib, cb, type, format) \ +static ssize_t attrib##_show(struct device *ras_feat_dev, \ + struct device_attribute *attr, char *buf) \ +{ \ + u8 inst = TO_MEM_REPAIR_DEV_ATTR(attr)->instance; \ + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev); \ + const struct edac_mem_repair_ops *ops = \ + ctx->mem_repair[inst].mem_repair_ops; \ + type data; \ + int ret; \ + \ + ret = ops->cb(ras_feat_dev->parent, ctx->mem_repair[inst].private, \ + &data); \ + if (ret) \ + return ret; \ + \ + return sysfs_emit(buf, format, data); \ +} + +EDAC_MEM_REPAIR_ATTR_SHOW(repair_type, get_repair_type, u32, "%u\n") +EDAC_MEM_REPAIR_ATTR_SHOW(persist_mode, get_persist_mode, u32, "%u\n") +EDAC_MEM_REPAIR_ATTR_SHOW(dpa_support, get_dpa_support, u32, "%u\n") +EDAC_MEM_REPAIR_ATTR_SHOW(repair_safe_when_in_use, get_repair_safe_when_in_use, u32, "%u\n") + +#define EDAC_MEM_REPAIR_ATTR_STORE(attrib, cb, type, conv_func) \ +static ssize_t attrib##_store(struct device *ras_feat_dev, \ + struct device_attribute *attr, \ + const char *buf, size_t len) \ +{ \ + u8 inst = TO_MEM_REPAIR_DEV_ATTR(attr)->instance; \ + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev); \ + const struct edac_mem_repair_ops *ops = \ + ctx->mem_repair[inst].mem_repair_ops; \ + type data; \ + int ret; \ + \ + ret = conv_func(buf, 0, &data); \ + if (ret < 0) \ + return ret; \ + \ + ret = ops->cb(ras_feat_dev->parent, ctx->mem_repair[inst].private, \ + data); \ + if (ret) \ + return ret; \ + \ + return len; \ +} + +EDAC_MEM_REPAIR_ATTR_STORE(persist_mode, set_persist_mode, unsigned long, kstrtoul) +EDAC_MEM_REPAIR_ATTR_STORE(hpa, set_hpa, u64, kstrtou64) +EDAC_MEM_REPAIR_ATTR_STORE(dpa, set_dpa, u64, kstrtou64) +EDAC_MEM_REPAIR_ATTR_STORE(nibble_mask, set_nibble_mask, u64, kstrtou64) +EDAC_MEM_REPAIR_ATTR_STORE(bank_group, set_bank_group, unsigned long, kstrtoul) +EDAC_MEM_REPAIR_ATTR_STORE(bank, set_bank, unsigned long, kstrtoul) +EDAC_MEM_REPAIR_ATTR_STORE(rank, set_rank, unsigned long, kstrtoul) +EDAC_MEM_REPAIR_ATTR_STORE(row, set_row, u64, kstrtou64) +EDAC_MEM_REPAIR_ATTR_STORE(column, set_column, unsigned long, kstrtoul) +EDAC_MEM_REPAIR_ATTR_STORE(channel, set_channel, unsigned long, kstrtoul) +EDAC_MEM_REPAIR_ATTR_STORE(sub_channel, set_sub_channel, unsigned long, kstrtoul) + +static ssize_t 
persist_mode_avail_show(struct device *ras_feat_dev, + struct device_attribute *attr, char *buf) +{ + INIT_MEM_REPAIR_FUNC_VARS(attr); + + return ops->get_persist_mode_avail(ras_feat_dev->parent, + ctx->mem_repair[inst].private, buf); +} + +static ssize_t query_store(struct device *ras_feat_dev, struct device_attribute *attr, + const char *buf, size_t len) +{ + int ret; + + INIT_MEM_REPAIR_FUNC_VARS(attr); + + ret = ops->do_query(ras_feat_dev->parent, ctx->mem_repair[inst].private); + if (ret) + return ret; + + return len; +} + +static ssize_t repair_store(struct device *ras_feat_dev, struct device_attribute *attr, + const char *buf, size_t len) +{ + int ret; + + INIT_MEM_REPAIR_FUNC_VARS(attr); + + ret = ops->do_repair(ras_feat_dev->parent, ctx->mem_repair[inst].private); + if (ret) + return ret; + + return len; +} + +static umode_t mem_repair_attr_visible(struct kobject *kobj, struct attribute *a, int attr_id) +{ + struct device *ras_feat_dev = kobj_to_dev(kobj); + struct device_attribute *dev_attr = container_of(a, struct device_attribute, attr); + + INIT_MEM_REPAIR_FUNC_VARS(dev_attr); + switch (attr_id) { + case MEM_REPAIR_TYPE: + if (ops->get_repair_type) + return a->mode; + break; + case MEM_REPAIR_PERSIST_MODE_AVAIL: + if (ops->get_persist_mode_avail) + return a->mode; + break; + case MEM_REPAIR_PERSIST_MODE: + if (ops->get_persist_mode) { + if (ops->set_persist_mode) + return a->mode; + else + return 0444; + } + break; + case MEM_REPAIR_DPA_SUPPORT: + if (ops->get_dpa_support) + return a->mode; + break; + case MEM_REPAIR_SAFE_IN_USE: + if (ops->get_repair_safe_when_in_use) + return a->mode; + break; + case MEM_REPAIR_HPA: + if (ops->set_hpa) + return a->mode; + break; + case MEM_REPAIR_DPA: + if (ops->set_dpa) + return a->mode; + break; + case MEM_REPAIR_NIBBLE_MASK: + if (ops->set_nibble_mask) + return a->mode; + break; + case MEM_REPAIR_BANK_GROUP: + if (ops->set_bank_group) + return a->mode; + break; + case MEM_REPAIR_BANK: + if (ops->set_bank) + return a->mode; + break; + case MEM_REPAIR_RANK: + if (ops->set_rank) + return a->mode; + break; + case MEM_REPAIR_ROW: + if (ops->set_row) + return a->mode; + break; + case MEM_REPAIR_COLUMN: + if (ops->set_column) + return a->mode; + break; + case MEM_REPAIR_CHANNEL: + if (ops->set_channel) + return a->mode; + break; + case MEM_REPAIR_SUB_CHANNEL: + if (ops->set_sub_channel) + return a->mode; + break; + case MEM_REPAIR_QUERY: + if (ops->do_query) + return a->mode; + break; + case MEM_DO_REPAIR: + if (ops->do_repair) + return a->mode; + break; + default: + break; + } + + return 0; +} + +#define EDAC_MEM_REPAIR_ATTR_RO(_name, _instance) \ + ((struct edac_mem_repair_dev_attr) { .dev_attr = __ATTR_RO(_name), \ + .instance = _instance }) + +#define EDAC_MEM_REPAIR_ATTR_WO(_name, _instance) \ + ((struct edac_mem_repair_dev_attr) { .dev_attr = __ATTR_WO(_name), \ + .instance = _instance }) + +#define EDAC_MEM_REPAIR_ATTR_RW(_name, _instance) \ + ((struct edac_mem_repair_dev_attr) { .dev_attr = __ATTR_RW(_name), \ + .instance = _instance }) + +static int mem_repair_create_desc(struct device *dev, + const struct attribute_group **attr_groups, u8 instance) +{ + struct edac_mem_repair_context *ctx; + struct attribute_group *group; + int i; + struct edac_mem_repair_dev_attr dev_attr[] = { + [MEM_REPAIR_TYPE] = EDAC_MEM_REPAIR_ATTR_RO(repair_type, instance), + [MEM_REPAIR_PERSIST_MODE_AVAIL] = + EDAC_MEM_REPAIR_ATTR_RO(persist_mode_avail, instance), + [MEM_REPAIR_PERSIST_MODE] = EDAC_MEM_REPAIR_ATTR_RW(persist_mode, instance), + 
[MEM_REPAIR_DPA_SUPPORT] = EDAC_MEM_REPAIR_ATTR_RO(dpa_support, instance), + [MEM_REPAIR_SAFE_IN_USE] = + EDAC_MEM_REPAIR_ATTR_RO(repair_safe_when_in_use, instance), + [MEM_REPAIR_HPA] = EDAC_MEM_REPAIR_ATTR_WO(hpa, instance), + [MEM_REPAIR_DPA] = EDAC_MEM_REPAIR_ATTR_WO(dpa, instance), + [MEM_REPAIR_NIBBLE_MASK] = EDAC_MEM_REPAIR_ATTR_WO(nibble_mask, instance), + [MEM_REPAIR_BANK_GROUP] = EDAC_MEM_REPAIR_ATTR_WO(bank_group, instance), + [MEM_REPAIR_BANK] = EDAC_MEM_REPAIR_ATTR_WO(bank, instance), + [MEM_REPAIR_RANK] = EDAC_MEM_REPAIR_ATTR_WO(rank, instance), + [MEM_REPAIR_ROW] = EDAC_MEM_REPAIR_ATTR_WO(row, instance), + [MEM_REPAIR_COLUMN] = EDAC_MEM_REPAIR_ATTR_WO(column, instance), + [MEM_REPAIR_CHANNEL] = EDAC_MEM_REPAIR_ATTR_WO(channel, instance), + [MEM_REPAIR_SUB_CHANNEL] = EDAC_MEM_REPAIR_ATTR_WO(sub_channel, instance), + [MEM_REPAIR_QUERY] = EDAC_MEM_REPAIR_ATTR_WO(query, instance), + [MEM_DO_REPAIR] = EDAC_MEM_REPAIR_ATTR_WO(repair, instance) + }; + + ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + for (i = 0; i < MEM_REPAIR_MAX_ATTRS; i++) { + memcpy(&ctx->mem_repair_dev_attr[i].dev_attr, &dev_attr[i], sizeof(dev_attr[i])); + ctx->mem_repair_attrs[i] = &ctx->mem_repair_dev_attr[i].dev_attr.attr; + } + sprintf(ctx->name, "%s%d", "mem_repair", instance); + group = &ctx->group; + group->name = ctx->name; + group->attrs = ctx->mem_repair_attrs; + group->is_visible = mem_repair_attr_visible; + + attr_groups[0] = group; + + return 0; +} + +/** + * edac_mem_repair_get_desc - get EDAC memory repair descriptors + * @dev: client device with memory repair feature + * @attr_groups: pointer to attribute group container + * @instance: device's memory repair instance number. + * + * Return: + * * %0 - Success. + * * %-EINVAL - Invalid parameters passed. + * * %-ENOMEM - Dynamic memory allocation failed. + */ +int edac_mem_repair_get_desc(struct device *dev, + const struct attribute_group **attr_groups, u8 instance) +{ + if (!dev || !attr_groups) + return -EINVAL; + + return mem_repair_create_desc(dev, attr_groups, instance); +} diff --git a/include/linux/edac.h b/include/linux/edac.h index 20bdb08c7626..d18947d18c53 100644 --- a/include/linux/edac.h +++ b/include/linux/edac.h @@ -670,6 +670,7 @@ static inline struct dimm_info *edac_get_dimm(struct mem_ctl_info *mci, enum edac_dev_feat { RAS_FEAT_SCRUB, RAS_FEAT_ECS, + RAS_FEAT_MEM_REPAIR, RAS_FEAT_MAX }; @@ -745,11 +746,75 @@ int edac_ecs_get_desc(struct device *ecs_dev, const struct attribute_group **attr_groups, u16 num_media_frus); +enum edac_mem_repair_type { + EDAC_TYPE_SPPR, + EDAC_TYPE_HPPR, + EDAC_TYPE_CACHELINE_MEM_SPARING, + EDAC_TYPE_ROW_MEM_SPARING, + EDAC_TYPE_BANK_MEM_SPARING, + EDAC_TYPE_RANK_MEM_SPARING, +}; + +enum edac_mem_repair_persist_mode { + EDAC_MEM_REPAIR_SOFT, /* soft memory repair */ + EDAC_MEM_REPAIR_HARD, /* hard memory repair */ +}; + +/** + * struct edac_mem_repair_ops - memory repair device operations + * (all elements optional) + * @get_repair_type: get the memory repair type, listed in enum edac_mem_repair_type. + * @get_persist_mode_avail: get the persist modes supported in the device. + * @get_persist_mode: get the persist mode of the memory repair instance. + * @set_persist_mode: set the persist mode for the memory repair instance. + * @get_dpa_support: get dpa support flag. + * @get_repair_safe_when_in_use: get whether memory media is accessible and + * data is retained during repair operation. + * @set_hpa: set HPA for memory repair. 
+ * @set_dpa: set DPA for memory repair. + * @set_nibble_mask: set nibble mask for memory repair. + * @set_bank_group: set bank group for memory repair. + * @set_bank: set bank for memory repair. + * @set_rank: set rank for memory repair. + * @set_row: set row for memory repair. + * @set_column: set column for memory repair. + * @set_channel: set channel for memory repair. + * @set_sub_channel: set sub channel for memory repair. + * @do_query: Query memory repair operation for the HPA/DPA/other attrs set + * is supported or not. + * @do_repair: start memory repair operation for the HPA/DPA/other attrs set. + */ +struct edac_mem_repair_ops { + int (*get_repair_type)(struct device *dev, void *drv_data, u32 *val); + int (*get_persist_mode_avail)(struct device *dev, void *drv_data, char *buf); + int (*get_persist_mode)(struct device *dev, void *drv_data, u32 *mode); + int (*set_persist_mode)(struct device *dev, void *drv_data, u32 mode); + int (*get_dpa_support)(struct device *dev, void *drv_data, u32 *val); + int (*get_repair_safe_when_in_use)(struct device *dev, void *drv_data, u32 *val); + int (*set_hpa)(struct device *dev, void *drv_data, u64 hpa); + int (*set_dpa)(struct device *dev, void *drv_data, u64 dpa); + int (*set_nibble_mask)(struct device *dev, void *drv_data, u64 val); + int (*set_bank_group)(struct device *dev, void *drv_data, u32 val); + int (*set_bank)(struct device *dev, void *drv_data, u32 val); + int (*set_rank)(struct device *dev, void *drv_data, u32 val); + int (*set_row)(struct device *dev, void *drv_data, u64 val); + int (*set_column)(struct device *dev, void *drv_data, u32 val); + int (*set_channel)(struct device *dev, void *drv_data, u32 val); + int (*set_sub_channel)(struct device *dev, void *drv_data, u32 val); + int (*do_query)(struct device *dev, void *drv_data); + int (*do_repair)(struct device *dev, void *drv_data); +}; + +int edac_mem_repair_get_desc(struct device *dev, + const struct attribute_group **attr_groups, + u8 instance); + /* EDAC device feature information structure */ struct edac_dev_data { union { const struct edac_scrub_ops *scrub_ops; const struct edac_ecs_ops *ecs_ops; + const struct edac_mem_repair_ops *mem_repair_ops; }; u8 instance; void *private; @@ -762,6 +827,7 @@ struct edac_dev_feat_ctx { void *private; struct edac_dev_data *scrub; struct edac_dev_data ecs; + struct edac_dev_data *mem_repair; }; struct edac_dev_feature { @@ -770,6 +836,7 @@ struct edac_dev_feature { union { const struct edac_scrub_ops *scrub_ops; const struct edac_ecs_ops *ecs_ops; + const struct edac_mem_repair_ops *mem_repair_ops; }; void *ctx; struct edac_ecs_ex_info ecs_info; From patchwork Wed Oct 9 12:41:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 834013 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7CBEB1DEFD6; Wed, 9 Oct 2024 12:43:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=185.176.79.56 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728477829; cv=none; b=lFphrFyvv/JHnrywLBHnP/6gjxdWeiI1Mj7q821ZczQjgJPOWnaj2BPaDcXNPJ7lL9RkqkpZGA0BEUAxy42ZadURfZ5c+I9i0M0XbmA7tOvNtSOKl7sEpUmQTDLZCJygHOzVka4HZ7YPg3efbIktpC2ADh+iykVpvArlOj+6+Pg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; 
From patchwork Wed Oct 9 12:41:17 2024 X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 834013 Subject: [PATCH v13 16/18] cxl/mbox: Add support for PERFORM_MAINTENANCE mailbox command Date: Wed, 9 Oct 2024 13:41:17 +0100 Message-ID: <20241009124120.1124-17-shiju.jose@huawei.com> In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com> References: <20241009124120.1124-1-shiju.jose@huawei.com> From: Shiju Jose

Add support for the PERFORM_MAINTENANCE mailbox command. CXL spec 3.1 section 8.2.9.7.1 describes the Perform Maintenance command. This command requests the device to execute the maintenance operation specified by the maintenance operation class and the maintenance operation subclass.

Signed-off-by: Shiju Jose Reviewed-by: Jonathan Cameron --- drivers/cxl/core/mbox.c | 40 ++++++++++++++++++++++++++++++++++++++++ drivers/cxl/cxlmem.h | 17 +++++++++++++++++ 2 files changed, 57 insertions(+) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 30c44ab11347..a6f69864c504 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -1084,6 +1084,46 @@ int cxl_set_feature(struct cxl_memdev_state *mds, } EXPORT_SYMBOL_NS_GPL(cxl_set_feature, CXL);
+int cxl_do_maintenance(struct cxl_memdev_state *mds,
+ u8 class, u8 subclass,
+ void *data_in, size_t data_in_size)
+{
+ struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+ struct cxl_memdev_maintenance_pi {
+ struct cxl_mbox_do_maintenance_hdr hdr;
+ u8 data[];
+ } __packed;
+ struct cxl_mbox_cmd mbox_cmd;
+ size_t hdr_size;
+ int rc;
+
+ struct cxl_memdev_maintenance_pi *pi __free(kfree) =
+ kmalloc(cxl_mbox->payload_size, GFP_KERNEL);
+ if (!pi)
+ return -ENOMEM;
+
+ pi->hdr.op_class = class;
+ pi->hdr.op_subclass = subclass;
+ hdr_size = sizeof(pi->hdr);
+ /*
+ * Check that the mailbox payload is large enough for the
+ * maintenance header plus the maintenance data transfer.
+ */
+ if (hdr_size + data_in_size > cxl_mbox->payload_size)
+ return -EINVAL;
+
+ memcpy(pi->data, data_in, data_in_size);
+ mbox_cmd = (struct cxl_mbox_cmd) {
+ .opcode = CXL_MBOX_OP_DO_MAINTENANCE,
+ .size_in = hdr_size + data_in_size,
+ .payload_in = pi,
+ };
+
+ rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
+ if (rc < 0)
+ return rc;
+
+ return 0;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_do_maintenance, CXL);
+
/** * cxl_enumerate_cmds() - Enumerate commands for a device. * @mds: The driver data for the operation diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index e1156ea93fe7..1cd50ada551b 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -531,6 +531,7 @@ enum cxl_opcode { CXL_MBOX_OP_GET_SUPPORTED_FEATURES = 0x0500, CXL_MBOX_OP_GET_FEATURE = 0x0501, CXL_MBOX_OP_SET_FEATURE = 0x0502, + CXL_MBOX_OP_DO_MAINTENANCE = 0x0600, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, @@ -907,6 +908,19 @@ struct cxl_mbox_set_feat_hdr { u8 rsvd[9]; } __packed;
+/*
+ * Perform Maintenance CXL 3.1 Spec 8.2.9.7.1
+ */
+
+/*
+ * Perform Maintenance input payload header
+ * CXL rev 3.1 section 8.2.9.7.1 Table 8-102
+ */
+struct cxl_mbox_do_maintenance_hdr {
+ u8 op_class;
+ u8 op_subclass;
+} __packed;
+
int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd); int cxl_dev_state_identify(struct cxl_memdev_state *mds); @@ -984,4 +998,7 @@ int cxl_set_feature(struct cxl_memdev_state *mds, const uuid_t feat_uuid, u8 feat_version, void *feat_data, size_t feat_data_size, u8 feat_flag); +int cxl_do_maintenance(struct cxl_memdev_state *mds, + u8 class, u8 subclass, + void *data_in, size_t data_in_size); #endif /* __CXL_MEM_H__ */
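As a usage sketch only (hypothetical caller and payload layout; real maintenance payloads are operation-specific, and the op_class/op_subclass values come from the corresponding feature's readable attributes, as the later patches in this series do), a query-style request could be issued like this:

static int example_maintenance_query(struct cxl_memdev_state *mds,
				     u8 op_class, u8 op_subclass, u64 dpa)
{
	/* Illustrative payload: a flags byte plus the target DPA. */
	struct {
		u8 flags;
		__le64 dpa;
	} __packed payload = {
		.flags = BIT(0),	/* e.g. a "query resources" flag */
		.dpa = cpu_to_le64(dpa),
	};

	/* cxl_do_maintenance() prepends the class/subclass header. */
	return cxl_do_maintenance(mds, op_class, op_subclass,
				  &payload, sizeof(payload));
}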
From patchwork Wed Oct 9 12:41:18 2024 X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 834191 Subject: [PATCH v13 17/18] cxl/memfeature: Add CXL memory device PPR control feature Date: Wed, 9 Oct 2024 13:41:18 +0100 Message-ID: <20241009124120.1124-18-shiju.jose@huawei.com> In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com> References: <20241009124120.1124-1-shiju.jose@huawei.com> From: Shiju Jose

Post Package Repair (PPR) maintenance operations may be supported by CXL devices that implement the CXL.mem protocol. A PPR maintenance operation requests the CXL device to perform a repair operation on its media. For example, a CXL device with DRAM components that support PPR features may implement PPR Maintenance operations. DRAM components may support two types of PPR: Hard PPR (hPPR), for a permanent row repair, and Soft PPR (sPPR), for a temporary row repair. sPPR is much faster than hPPR, but the repair is lost with a power cycle.

During the execution of a PPR Maintenance operation, a CXL memory device: - May or may not retain data - May or may not be able to process CXL.mem requests correctly, including the ones that target the DPA involved in the repair.

These CXL memory device capabilities are specified by Restriction Flags in the sPPR Feature and hPPR Feature. An sPPR maintenance operation may be executed at runtime if data is retained and CXL.mem requests are correctly processed. For CXL devices with DRAM components, an hPPR maintenance operation may be executed only at boot because data would not be retained.

When a CXL device identifies a failure on a memory component, the device may inform the host about the need for a PPR maintenance operation by using an Event Record, where the Maintenance Needed flag is set. The Event Record specifies the DPA that should be repaired. A CXL device may not keep track of the requests that have already been sent, and the information on which DPA should be repaired may be lost upon a power cycle. The userspace tool requests a maintenance operation if the number of corrected errors reported on CXL.mem media exceeds the error threshold.

CXL spec 3.1 section 8.2.9.7.1.2 describes the device's sPPR (soft PPR) maintenance operation and section 8.2.9.7.1.3 describes the device's hPPR (hard PPR) maintenance operation feature. CXL spec 3.1 section 8.2.9.7.2.1 describes the sPPR feature discovery and configuration. CXL spec 3.1 section 8.2.9.7.2.2 describes the hPPR feature discovery and configuration.

Add support for controlling the CXL memory device PPR feature. Register with the EDAC driver, which gets the memory repair attribute descriptors from the EDAC memory repair driver and exposes sysfs repair control attributes for PPR to userspace. For example, CXL PPR control for the CXL mem0 device is exposed at /sys/bus/edac/devices/cxl_mem0/mem_repairX/
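As a rough illustration (not part of this patch), userspace could drive those attributes as follows. The mem_repair0 instance number, the example DPA value, and the convention of writing "1" to the query and repair attributes are assumptions here; only the attribute names (dpa, query, repair) come from the EDAC memory repair attributes added earlier in this series:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define PPR_DIR "/sys/bus/edac/devices/cxl_mem0/mem_repair0"

static int write_attr(const char *attr, const char *val)
{
	char path[256];
	int fd, rc;

	snprintf(path, sizeof(path), PPR_DIR "/%s", attr);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;
	rc = write(fd, val, strlen(val)) < 0 ? -1 : 0;
	close(fd);
	return rc;
}

int main(void)
{
	/* DPA taken from the CXL event record (placeholder value) */
	if (write_attr("dpa", "0x40000"))
		return 1;
	/* Ask the device whether sPPR resources are available for it, */
	if (write_attr("query", "1"))
		return 1;
	/* then trigger the actual repair. */
	return write_attr("repair", "1") ? 1 : 0;
}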
Tested with a QEMU patch for the CXL PPR feature: https://lore.kernel.org/all/20240730045722.71482-1-dave@stgolabs.net/

Signed-off-by: Shiju Jose --- drivers/cxl/core/memfeature.c | 335 +++++++++++++++++++++++++++++++++- 1 file changed, 329 insertions(+), 6 deletions(-) diff --git a/drivers/cxl/core/memfeature.c b/drivers/cxl/core/memfeature.c index 567406566c77..a0c9a6bd73c0 100644 --- a/drivers/cxl/core/memfeature.c +++ b/drivers/cxl/core/memfeature.c @@ -18,8 +18,9 @@ #include #include #include +#include "core.h" -#define CXL_DEV_NUM_RAS_FEATURES 2 +#define CXL_DEV_NUM_RAS_FEATURES 3 #define CXL_DEV_HOUR_IN_SECS 3600 #define CXL_SCRUB_NAME_LEN 128 @@ -723,6 +724,294 @@ static const struct edac_ecs_ops cxl_ecs_ops = { .set_threshold = cxl_ecs_set_threshold, };
+/* CXL memory soft PPR & hard PPR control definitions */
+static const uuid_t cxl_sppr_uuid =
+ UUID_INIT(0x892ba475, 0xfad8, 0x474e, 0x9d, 0x3e, 0x69, 0x2c, 0x91, \
+ 0x75, 0x68, 0xbb);
+
+static const uuid_t cxl_hppr_uuid =
+ UUID_INIT(0x80ea4521, 0x786f, 0x4127, 0xaf, 0xb1, 0xec, 0x74, 0x59, \
+ 0xfb, 0x0e, 0x24);
+
+struct cxl_ppr_context {
+ uuid_t repair_uuid;
+ u8 instance;
+ u16 get_feat_size;
+ u16 set_feat_size;
+ u8 get_version;
+ u8 set_version;
+ u16 set_effects;
+ struct cxl_memdev *cxlmd;
+ enum edac_mem_repair_type repair_type;
+ enum edac_mem_repair_persist_mode persist_mode;
+ u64 dpa;
+ u32 nibble_mask;
+};
+
+/**
+ * struct cxl_memdev_ppr_params - CXL memory PPR parameter data structure.
+ * @op_class[OUT]: PPR operation class.
+ * @op_subclass[OUT]: PPR operation subclass.
+ * @dpa_support[OUT]: device physical address for PPR support.
+ * @media_accessible[OUT]: whether the memory media is accessible during the PPR operation.
+ * @data_retained[OUT]: whether data is retained during the PPR operation.
+ * @dpa[IN]: device physical address.
+ */ +struct cxl_memdev_ppr_params { + u8 op_class; + u8 op_subclass; + bool dpa_support; + bool media_accessible; + bool data_retained; + u64 dpa; +}; + +enum cxl_ppr_param { + CXL_PPR_PARAM_DO_QUERY, + CXL_PPR_PARAM_DO_PPR, +}; + +#define CXL_MEMDEV_PPR_QUERY_RESOURCE_FLAG BIT(0) + +#define CXL_MEMDEV_PPR_DEVICE_INITIATED_MASK BIT(0) +#define CXL_MEMDEV_PPR_FLAG_DPA_SUPPORT_MASK BIT(0) +#define CXL_MEMDEV_PPR_FLAG_NIBBLE_SUPPORT_MASK BIT(1) +#define CXL_MEMDEV_PPR_FLAG_MEM_SPARING_EV_REC_SUPPORT_MASK BIT(2) + +#define CXL_MEMDEV_PPR_RESTRICTION_FLAG_MEDIA_ACCESSIBLE_MASK BIT(0) +#define CXL_MEMDEV_PPR_RESTRICTION_FLAG_DATA_RETAINED_MASK BIT(2) + +#define CXL_MEMDEV_PPR_SPARING_EV_REC_EN_MASK BIT(0) + +struct cxl_memdev_ppr_rd_attrs { + u8 max_op_latency; + __le16 op_cap; + __le16 op_mode; + u8 op_class; + u8 op_subclass; + u8 rsvd[9]; + u8 ppr_flags; + __le16 restriction_flags; + u8 ppr_op_mode; +} __packed; + +struct cxl_memdev_ppr_wr_attrs { + __le16 op_mode; + u8 ppr_op_mode; +} __packed; + +struct cxl_memdev_ppr_maintenance_attrs { + u8 flags; + __le64 dpa; + u8 nibble_mask[3]; +} __packed; + +static int cxl_mem_ppr_get_attrs(struct device *dev, void *drv_data, + struct cxl_memdev_ppr_params *params) +{ + struct cxl_ppr_context *cxl_ppr_ctx = drv_data; + struct cxl_memdev *cxlmd = cxl_ppr_ctx->cxlmd; + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + size_t rd_data_size = sizeof(struct cxl_memdev_ppr_rd_attrs); + size_t data_size; + struct cxl_memdev_ppr_rd_attrs *rd_attrs __free(kfree) = + kmalloc(rd_data_size, GFP_KERNEL); + if (!rd_attrs) + return -ENOMEM; + + data_size = cxl_get_feature(mds, cxl_ppr_ctx->repair_uuid, + CXL_GET_FEAT_SEL_CURRENT_VALUE, + rd_attrs, rd_data_size); + if (!data_size) + return -EIO; + + params->op_class = rd_attrs->op_class; + params->op_subclass = rd_attrs->op_subclass; + params->dpa_support = FIELD_GET(CXL_MEMDEV_PPR_FLAG_DPA_SUPPORT_MASK, + rd_attrs->ppr_flags); + params->media_accessible = FIELD_GET(CXL_MEMDEV_PPR_RESTRICTION_FLAG_MEDIA_ACCESSIBLE_MASK, + rd_attrs->restriction_flags) ^ 1; + params->data_retained = FIELD_GET(CXL_MEMDEV_PPR_RESTRICTION_FLAG_DATA_RETAINED_MASK, + rd_attrs->restriction_flags) ^ 1; + + return 0; +} + +static int cxl_mem_ppr_set_attrs(struct device *dev, void *drv_data, + enum cxl_ppr_param param_type) +{ + struct cxl_memdev_ppr_maintenance_attrs maintenance_attrs; + struct cxl_ppr_context *cxl_ppr_ctx = drv_data; + struct cxl_memdev *cxlmd = cxl_ppr_ctx->cxlmd; + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + struct cxl_memdev_ppr_params rd_params; + struct cxl_region *cxlr; + int ret; + + ret = cxl_mem_ppr_get_attrs(dev, drv_data, &rd_params); + if (ret) { + dev_err(dev, "Get cxlmemdev PPR params failed ret=%d\n", + ret); + return ret; + } + + switch (param_type) { + case CXL_PPR_PARAM_DO_QUERY: + case CXL_PPR_PARAM_DO_PPR: + ret = down_read_interruptible(&cxl_region_rwsem); + if (ret) + return ret; + if (!rd_params.media_accessible || !rd_params.data_retained) { + /* Check if DPA is mapped */ + ret = down_read_interruptible(&cxl_dpa_rwsem); + if (ret) { + up_read(&cxl_region_rwsem); + return ret; + } + + cxlr = cxl_dpa_to_region(cxlmd, cxl_ppr_ctx->dpa); + up_read(&cxl_dpa_rwsem); + if (cxlr) { + dev_err(dev, "CXL can't do PPR as DPA is mapped\n"); + up_read(&cxl_region_rwsem); + return -EBUSY; + } + } + + memset(&maintenance_attrs, 0, sizeof(maintenance_attrs)); + if (param_type == 
CXL_PPR_PARAM_DO_QUERY)
+ maintenance_attrs.flags = CXL_MEMDEV_PPR_QUERY_RESOURCE_FLAG;
+ else
+ maintenance_attrs.flags = 0;
+ maintenance_attrs.dpa = cpu_to_le64(cxl_ppr_ctx->dpa);
+ /* nibble_mask is a 3-byte field, avoid a 4-byte store */
+ put_unaligned_le24(cxl_ppr_ctx->nibble_mask, maintenance_attrs.nibble_mask);
+ ret = cxl_do_maintenance(mds, rd_params.op_class, rd_params.op_subclass,
+ &maintenance_attrs, sizeof(maintenance_attrs));
+ if (ret) {
+ dev_err(dev, "CXL do PPR maintenance failed ret=%d\n", ret);
+ up_read(&cxl_region_rwsem);
+ return ret;
+ }
+ up_read(&cxl_region_rwsem);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
+static int cxl_ppr_get_repair_type(struct device *dev, void *drv_data,
+ u32 *repair_type)
+{
+ struct cxl_ppr_context *cxl_ppr_ctx = drv_data;
+
+ *repair_type = cxl_ppr_ctx->repair_type;
+
+ return 0;
+}
+
+static int cxl_ppr_get_persist_mode_avail(struct device *dev, void *drv_data,
+ char *buf)
+{
+ return sysfs_emit(buf, "%u\n", EDAC_MEM_REPAIR_SOFT);
+}
+
+static int cxl_ppr_get_persist_mode(struct device *dev, void *drv_data,
+ u32 *persist_mode)
+{
+ struct cxl_ppr_context *cxl_ppr_ctx = drv_data;
+
+ *persist_mode = cxl_ppr_ctx->persist_mode;
+
+ return 0;
+}
+
+static int cxl_ppr_get_dpa_support(struct device *dev, void *drv_data,
+ u32 *dpa_support)
+{
+ struct cxl_memdev_ppr_params params;
+ int ret;
+
+ ret = cxl_mem_ppr_get_attrs(dev, drv_data, &params);
+ if (ret)
+ return ret;
+
+ *dpa_support = params.dpa_support;
+
+ return 0;
+}
+
+static int cxl_get_ppr_safe_when_in_use(struct device *dev, void *drv_data,
+ u32 *safe)
+{
+ struct cxl_memdev_ppr_params params;
+ int ret;
+
+ ret = cxl_mem_ppr_get_attrs(dev, drv_data, &params);
+ if (ret)
+ return ret;
+
+ *safe = params.media_accessible & params.data_retained;
+
+ return 0;
+}
+
+static int cxl_set_ppr_dpa(struct device *dev, void *drv_data, u64 dpa)
+{
+ struct cxl_ppr_context *cxl_ppr_ctx = drv_data;
+
+ if (!dpa)
+ return -EINVAL;
+
+ cxl_ppr_ctx->dpa = dpa;
+
+ return 0;
+}
+
+static int cxl_set_ppr_nibble_mask(struct device *dev, void *drv_data, u64 nibble_mask)
+{
+ struct cxl_ppr_context *cxl_ppr_ctx = drv_data;
+
+ cxl_ppr_ctx->nibble_mask = nibble_mask;
+
+ return 0;
+}
+
+static int cxl_do_query_ppr(struct device *dev, void *drv_data)
+{
+ struct cxl_ppr_context *cxl_ppr_ctx = drv_data;
+
+ if (!cxl_ppr_ctx->dpa)
+ return -EINVAL;
+
+ return cxl_mem_ppr_set_attrs(dev, drv_data, CXL_PPR_PARAM_DO_QUERY);
+}
+
+static int cxl_do_ppr(struct device *dev, void *drv_data)
+{
+ struct cxl_ppr_context *cxl_ppr_ctx = drv_data;
+
+ if (!cxl_ppr_ctx->dpa)
+ return -EINVAL;
+
+ return cxl_mem_ppr_set_attrs(dev, drv_data, CXL_PPR_PARAM_DO_PPR);
+}
+
+static const struct edac_mem_repair_ops cxl_sppr_ops = {
+ .get_repair_type = cxl_ppr_get_repair_type,
+ .get_persist_mode_avail = cxl_ppr_get_persist_mode_avail,
+ .get_persist_mode = cxl_ppr_get_persist_mode,
+ .get_dpa_support = cxl_ppr_get_dpa_support,
+ .get_repair_safe_when_in_use = cxl_get_ppr_safe_when_in_use,
+ .set_dpa = cxl_set_ppr_dpa,
+ .set_nibble_mask = cxl_set_ppr_nibble_mask,
+ .do_query = cxl_do_query_ppr,
+ .do_repair = cxl_do_ppr,
+};
+
int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) { struct edac_dev_feature ras_features[CXL_DEV_NUM_RAS_FEATURES]; @@ -732,9 +1021,10 @@ int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) struct cxl_feat_entry feat_entry; char cxl_dev_name[CXL_SCRUB_NAME_LEN]; struct cxl_ecs_context *cxl_ecs_ctx; + struct cxl_ppr_context *cxl_sppr_ctx; int rc, i, num_ras_features = 0; int num_media_frus;
- u8 scrub_inst = 0;
+ u8 scrub_inst = 0, repair_inst = 0;
if (cxlr) { struct cxl_region_params *p = &cxlr->params; @@ -800,19 +1090,19 @@ int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) rc = cxl_get_supported_feature_entry(mds, &cxl_ecs_uuid, &feat_entry); if (rc < 0) - goto feat_register; + goto feat_ppr; if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) - goto feat_register; + goto feat_ppr; num_media_frus = (feat_entry.get_feat_size - sizeof(struct cxl_ecs_rd_attrs)) / sizeof(struct cxl_ecs_fru_rd_attrs); if (!num_media_frus) - goto feat_register; + goto feat_ppr; cxl_ecs_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ecs_ctx), GFP_KERNEL); if (!cxl_ecs_ctx) - goto feat_register; + goto feat_ppr; *cxl_ecs_ctx = (struct cxl_ecs_context) { .get_feat_size = feat_entry.get_feat_size, .set_feat_size = feat_entry.set_feat_size, @@ -829,6 +1119,39 @@ int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) ras_features[num_ras_features].ecs_info.num_media_frus = num_media_frus; num_ras_features++;
+
+ /* CXL sPPR */
+feat_ppr:
+ rc = cxl_get_supported_feature_entry(mds, &cxl_sppr_uuid,
+ &feat_entry);
+ if (rc < 0)
+ goto feat_register;
+
+ if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE))
+ goto feat_register;
+
+ cxl_sppr_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_sppr_ctx),
+ GFP_KERNEL);
+ if (!cxl_sppr_ctx)
+ goto feat_register;
+ *cxl_sppr_ctx = (struct cxl_ppr_context) {
+ .repair_uuid = cxl_sppr_uuid,
+ .get_feat_size = feat_entry.get_feat_size,
+ .set_feat_size = feat_entry.set_feat_size,
+ .get_version = feat_entry.get_feat_ver,
+ .set_version = feat_entry.set_feat_ver,
+ .set_effects = feat_entry.set_effects,
+ .cxlmd = cxlmd,
+ .repair_type = EDAC_TYPE_SPPR,
+ .persist_mode = EDAC_MEM_REPAIR_SOFT,
+ .instance = repair_inst++,
+ };
+
+ ras_features[num_ras_features].ft_type = RAS_FEAT_MEM_REPAIR;
+ ras_features[num_ras_features].instance = cxl_sppr_ctx->instance;
+ ras_features[num_ras_features].mem_repair_ops = &cxl_sppr_ops;
+ ras_features[num_ras_features].ctx = cxl_sppr_ctx;
+ num_ras_features++;
} feat_register: From patchwork Wed Oct 9 12:41:19 2024 X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 834012
Subject: [PATCH v13 18/18] cxl/memfeature: Add CXL memory device memory sparing control feature Date: Wed, 9 Oct 2024 13:41:19 +0100 Message-ID: <20241009124120.1124-19-shiju.jose@huawei.com> In-Reply-To: <20241009124120.1124-1-shiju.jose@huawei.com> References: <20241009124120.1124-1-shiju.jose@huawei.com> From: Shiju Jose

Memory sparing is defined as a repair function that replaces a portion of memory with a portion of functional memory at that same DPA. The subclasses for this operation vary in terms of the scope of the sparing being performed. The cacheline sparing subclass refers to a sparing action that can replace a full cacheline. Row sparing is provided as an alternative to PPR sparing functions and its scope is that of a single DDR row. Bank sparing allows an entire bank to be replaced. Rank sparing is defined as an operation in which an entire DDR rank is replaced.

Memory sparing maintenance operations may be supported by CXL devices that implement the CXL.mem protocol. A sparing maintenance operation requests the CXL device to perform a repair operation on its media. For example, a CXL device with DRAM components that support memory sparing features may implement sparing maintenance operations.

The host may issue a query command by setting the query resources flag in the input payload (CXL spec 3.1 Table 8-105) to determine the availability of sparing resources for a given address. In response to a query request, the device shall report the resource availability by producing the memory sparing event record (CXL spec 3.1 Table 8-48) in which the Channel, Rank, Nibble Mask, Bank Group, Bank, Row, Column, Sub-Channel fields are a copy of the values specified in the request.

During the execution of a sparing maintenance operation, a CXL memory device: - May or may not retain data - May or may not be able to process CXL.mem requests correctly. These CXL memory device capabilities are specified by restriction flags in the memory sparing feature readable attributes.

When a CXL device identifies a failure on a memory component, the device may inform the host about the need for a memory sparing maintenance operation by using an Event Record, where the maintenance needed flag may be set. The event record specifies some of the DPA, Channel, Rank, Nibble Mask, Bank Group, Bank, Row, Column, Sub-Channel fields that identify the memory to be repaired.
The userspace tool requests a maintenance operation if the number of corrected errors reported on CXL.mem media exceeds the error threshold.

CXL spec 3.1 section 8.2.9.7.1.4 describes the device's memory sparing maintenance operation feature. CXL spec 3.1 section 8.2.9.7.2.3 describes the memory sparing feature discovery and configuration.

Add support for controlling the CXL memory device memory sparing feature. Register with the EDAC driver, which gets the memory repair attribute descriptors from the EDAC memory repair driver and exposes sysfs repair control attributes for memory sparing to userspace. For example, CXL memory sparing control for the CXL mem0 device is exposed at /sys/bus/edac/devices/cxl_mem0/mem_repairX/

Use case ======== 1. The CXL device identifies a failure in a memory component and reports it to userspace in a CXL generic/DRAM trace event. 2. Rasdaemon processes the trace event and issues a query request in sysfs to check whether resources are available for memory sparing, if either of the following conditions is met: - the number of corrected errors reported on CXL.mem media exceeds the error threshold - the maintenance needed flag is set in the event record. 3. The CXL device shall report the resource availability by producing the memory sparing event record, in which the channel, rank, nibble mask, bank group, bank, row, column and sub-channel fields are a copy of the values specified in the request. The query resources command shall return an error (invalid input) if the controller does not support reporting whether a resource is available. 4. Rasdaemon processes the memory sparing trace event and issues a repair request for memory sparing, as sketched below. The kernel CXL driver shall report the memory sparing event record to userspace with the resource availability so that rasdaemon can process the event record and issue a repair request in sysfs for the memory sparing operation in the CXL device.
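A sketch of the sysfs side of steps 2-4 for a row sparing instance follows. This is illustrative only: the mem_repair1 instance number and all field values are placeholders standing in for data from a sparing event record, and writing "1" to the query/repair attributes is an assumed convention; the attribute names mirror the EDAC memory repair attributes added earlier in this series:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Placeholder values standing in for fields from a sparing event record. */
static const struct { const char *attr, *val; } row_sparing_seq[] = {
	{ "persist_mode", "1" },	/* EDAC_MEM_REPAIR_HARD */
	{ "channel", "0" },
	{ "rank", "1" },
	{ "bank_group", "2" },
	{ "bank", "3" },
	{ "row", "0x2a" },
	{ "query", "1" },		/* steps 2/3: check resource availability */
	{ "repair", "1" },		/* step 4: perform the sparing operation */
};

int main(void)
{
	size_t i;

	for (i = 0; i < sizeof(row_sparing_seq) / sizeof(row_sparing_seq[0]); i++) {
		char path[256];
		int fd;

		snprintf(path, sizeof(path),
			 "/sys/bus/edac/devices/cxl_mem0/mem_repair1/%s",
			 row_sparing_seq[i].attr);
		fd = open(path, O_WRONLY);
		if (fd < 0)
			return 1;
		if (write(fd, row_sparing_seq[i].val,
			  strlen(row_sparing_seq[i].val)) < 0) {
			close(fd);
			return 1;
		}
		close(fd);
	}
	return 0;
}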
Tested the memory sparing control feature with "hw/cxl: Add memory sparing control feature". Repository: "https://gitlab.com/shiju.jose/qemu.git" Branch: cxl-ras-features-2024-10-02

Signed-off-by: Shiju Jose --- drivers/cxl/core/memfeature.c | 503 +++++++++++++++++++++++++++++++++- 1 file changed, 499 insertions(+), 4 deletions(-) diff --git a/drivers/cxl/core/memfeature.c b/drivers/cxl/core/memfeature.c index a0c9a6bd73c0..ecebdf0176ed 100644 --- a/drivers/cxl/core/memfeature.c +++ b/drivers/cxl/core/memfeature.c @@ -20,7 +20,7 @@ #include #include "core.h" -#define CXL_DEV_NUM_RAS_FEATURES 3 +#define CXL_DEV_NUM_RAS_FEATURES 7 #define CXL_DEV_HOUR_IN_SECS 3600 #define CXL_SCRUB_NAME_LEN 128 @@ -1012,6 +1012,463 @@ static const struct edac_mem_repair_ops cxl_sppr_ops = { .do_repair = cxl_do_ppr, };
+/* CXL memory sparing control definitions */
+#define CXL_CACHELINE_SPARING_UUID UUID_INIT(0x96C33386, 0x91dd, 0x44c7, 0x9e, 0xcb, \
+ 0xfd, 0xaf, 0x65, 0x03, 0xba, 0xc4)
+#define CXL_ROW_SPARING_UUID UUID_INIT(0x450ebf67, 0xb135, 0x4f97, 0xa4, 0x98, \
+ 0xc2, 0xd5, 0x7f, 0x27, 0x9b, 0xed)
+#define CXL_BANK_SPARING_UUID UUID_INIT(0x78b79636, 0x90ac, 0x4b64, 0xa4, 0xef, \
+ 0xfa, 0xac, 0x5d, 0x18, 0xa8, 0x63)
+#define CXL_RANK_SPARING_UUID UUID_INIT(0x34dbaff5, 0x0552, 0x4281, 0x8f, 0x76, \
+ 0xda, 0x0b, 0x5e, 0x7a, 0x76, 0xa7)
+
+enum cxl_mem_sparing_granularity {
+ CXL_MEM_SPARING_CACHELINE,
+ CXL_MEM_SPARING_ROW,
+ CXL_MEM_SPARING_BANK,
+ CXL_MEM_SPARING_RANK,
+ CXL_MEM_SPARING_MAX
+};
+
+struct cxl_mem_sparing_context {
+ uuid_t repair_uuid;
+ u8 instance;
+ u16 get_feat_size;
+ u16 set_feat_size;
+ u8 get_version;
+ u8 set_version;
+ u16 set_effects;
+ struct cxl_memdev *cxlmd;
+ enum edac_mem_repair_type repair_type;
+ enum edac_mem_repair_persist_mode persist_mode;
+ enum cxl_mem_sparing_granularity granularity;
+ u64 dpa;
+ u8 channel;
+ u8 rank;
+ u32 nibble_mask;
+ u8 bank_group;
+ u8 bank;
+ u32 row;
+ u16 column;
+ u8 sub_channel;
+};
+
+struct cxl_memdev_sparing_params {
+ u8 op_class;
+ u8 op_subclass;
+ bool cap_safe_when_in_use;
+ bool cap_hard_sparing;
+ bool cap_soft_sparing;
+};
+
+enum cxl_mem_sparing_param_type {
+ CXL_MEM_SPARING_PARAM_DO_QUERY,
+ CXL_MEM_SPARING_PARAM_DO_REPAIR,
+};
+
+#define CXL_MEMDEV_SPARING_RD_CAP_SAFE_IN_USE_MASK BIT(0)
+#define CXL_MEMDEV_SPARING_RD_CAP_HARD_SPARING_MASK BIT(1)
+#define CXL_MEMDEV_SPARING_RD_CAP_SOFT_SPARING_MASK BIT(2)
+
+#define CXL_MEMDEV_SPARING_WR_DEVICE_INITIATED_MASK BIT(0)
+
+#define CXL_MEMDEV_SPARING_QUERY_RESOURCE_FLAG BIT(0)
+#define CXL_MEMDEV_SET_HARD_SPARING_FLAG BIT(1)
+#define CXL_MEMDEV_SPARING_SUB_CHANNEL_VALID_FLAG BIT(2)
+#define CXL_MEMDEV_SPARING_NIB_MASK_VALID_FLAG BIT(3)
+
+struct cxl_memdev_sparing_rd_attrs {
+ u8 max_op_latency;
+ __le16 op_cap;
+ __le16 op_mode;
+ u8 op_class;
+ u8 op_subclass;
+ u8 rsvd[10];
+ __le16 restriction_flags;
+} __packed;
+
+struct cxl_memdev_sparing_wr_attrs {
+ __le16 op_mode;
+} __packed;
+
+struct cxl_memdev_sparing_maintenance_attrs {
+ u8 flags;
+ u8 channel;
+ u8 rank;
+ u8 nibble_mask[3];
+ u8 bank_group;
+ u8 bank;
+ u8 row[3];
+ __le16 column;
+ u8 sub_channel;
+} __packed;
+
+static int cxl_mem_sparing_get_attrs(struct device *dev, void *drv_data,
+ struct cxl_memdev_sparing_params *params)
+{
+ struct cxl_mem_sparing_context *cxl_sparing_ctx = drv_data;
+ struct cxl_memdev *cxlmd = cxl_sparing_ctx->cxlmd;
+ struct cxl_dev_state *cxlds = cxlmd->cxlds;
+ struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
+ size_t rd_data_size = sizeof(struct cxl_memdev_sparing_rd_attrs);
+ size_t data_size;
+ struct cxl_memdev_sparing_rd_attrs *rd_attrs __free(kfree) =
+ kmalloc(rd_data_size, GFP_KERNEL);
+ if (!rd_attrs)
+ return -ENOMEM;
+
+ data_size = cxl_get_feature(mds, cxl_sparing_ctx->repair_uuid,
+ CXL_GET_FEAT_SEL_CURRENT_VALUE,
+ rd_attrs, rd_data_size);
+ if (!data_size)
+ return -EIO;
+
+ params->op_class = rd_attrs->op_class;
+ params->op_subclass = rd_attrs->op_subclass;
+ params->cap_safe_when_in_use = FIELD_GET(CXL_MEMDEV_SPARING_RD_CAP_SAFE_IN_USE_MASK,
+ rd_attrs->restriction_flags) ^ 1;
+ params->cap_hard_sparing = FIELD_GET(CXL_MEMDEV_SPARING_RD_CAP_HARD_SPARING_MASK,
+ rd_attrs->restriction_flags);
+ params->cap_soft_sparing = FIELD_GET(CXL_MEMDEV_SPARING_RD_CAP_SOFT_SPARING_MASK,
+ rd_attrs->restriction_flags);
+
+ return 0;
+}
+
+static int cxl_mem_sparing_set_attrs(struct device *dev, void *drv_data,
+ enum cxl_mem_sparing_param_type param_type)
+{
+ struct cxl_memdev_sparing_maintenance_attrs maintenance_attrs;
+ struct cxl_mem_sparing_context *cxl_sparing_ctx = drv_data;
+ struct cxl_memdev *cxlmd = cxl_sparing_ctx->cxlmd;
+ struct cxl_dev_state *cxlds = cxlmd->cxlds;
+ struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
+ struct cxl_memdev_sparing_params rd_params;
+ struct cxl_region *cxlr;
+ int ret;
+
+ ret = cxl_mem_sparing_get_attrs(dev, drv_data, &rd_params);
+ if (ret) {
+ dev_err(dev, "Get cxlmemdev sparing params failed ret=%d\n",
+ ret);
+ return ret;
+ }
+
+ switch (param_type) {
+ case CXL_MEM_SPARING_PARAM_DO_QUERY:
+ case CXL_MEM_SPARING_PARAM_DO_REPAIR:
+ ret = down_read_interruptible(&cxl_region_rwsem);
+ if (ret)
+ return ret;
+ if (!rd_params.cap_safe_when_in_use && cxl_sparing_ctx->dpa) {
+ /* Check if DPA is mapped */
+ ret = down_read_interruptible(&cxl_dpa_rwsem);
+ if (ret) {
+ up_read(&cxl_region_rwsem);
+ return ret;
+ }
+
+ cxlr = cxl_dpa_to_region(cxlmd, cxl_sparing_ctx->dpa);
+ up_read(&cxl_dpa_rwsem);
+ if (cxlr) {
+ dev_err(dev, "CXL can't do memory sparing as DPA is mapped\n");
+ up_read(&cxl_region_rwsem);
+ return -EBUSY;
+ }
+ }
+
+ memset(&maintenance_attrs, 0, sizeof(maintenance_attrs));
+ if (param_type == CXL_MEM_SPARING_PARAM_DO_QUERY) {
+ maintenance_attrs.flags = CXL_MEMDEV_SPARING_QUERY_RESOURCE_FLAG;
+ } else {
+ maintenance_attrs.flags =
+ FIELD_PREP(CXL_MEMDEV_SPARING_QUERY_RESOURCE_FLAG, 0);
+ /* Do we also need to set the hard sparing, sub-channel
+ * & nibble mask flags for query? */
+ if (cxl_sparing_ctx->persist_mode == EDAC_MEM_REPAIR_HARD)
+ maintenance_attrs.flags |=
+ FIELD_PREP(CXL_MEMDEV_SET_HARD_SPARING_FLAG, 1);
+ if (cxl_sparing_ctx->sub_channel)
+ maintenance_attrs.flags |=
+ FIELD_PREP(CXL_MEMDEV_SPARING_SUB_CHANNEL_VALID_FLAG, 1);
+ if (cxl_sparing_ctx->nibble_mask)
+ maintenance_attrs.flags |=
+ FIELD_PREP(CXL_MEMDEV_SPARING_NIB_MASK_VALID_FLAG, 1);
+ }
+ /* Common attributes for all memory sparing types */
+ maintenance_attrs.channel = cxl_sparing_ctx->channel;
+ maintenance_attrs.rank = cxl_sparing_ctx->rank;
+ /* nibble_mask is a 3-byte field, avoid a 4-byte store */
+ put_unaligned_le24(cxl_sparing_ctx->nibble_mask, maintenance_attrs.nibble_mask);
+
+ if (cxl_sparing_ctx->repair_type == EDAC_TYPE_CACHELINE_MEM_SPARING ||
+ cxl_sparing_ctx->repair_type == EDAC_TYPE_ROW_MEM_SPARING ||
+ cxl_sparing_ctx->repair_type == EDAC_TYPE_BANK_MEM_SPARING) {
+ maintenance_attrs.bank_group = cxl_sparing_ctx->bank_group;
+ maintenance_attrs.bank = cxl_sparing_ctx->bank;
+ }
+ if (cxl_sparing_ctx->repair_type == EDAC_TYPE_CACHELINE_MEM_SPARING ||
+ cxl_sparing_ctx->repair_type == EDAC_TYPE_ROW_MEM_SPARING)
+ /* row is a 3-byte field; a 4-byte store would clobber column */
+ put_unaligned_le24(cxl_sparing_ctx->row, maintenance_attrs.row);
+ if (cxl_sparing_ctx->repair_type == EDAC_TYPE_CACHELINE_MEM_SPARING) {
+ maintenance_attrs.column = cpu_to_le16(cxl_sparing_ctx->column);
+ maintenance_attrs.sub_channel = cxl_sparing_ctx->sub_channel;
+ }
+
+ ret = cxl_do_maintenance(mds, rd_params.op_class, rd_params.op_subclass,
+ &maintenance_attrs, sizeof(maintenance_attrs));
+ if (ret) {
+ dev_err(dev, "CXL do mem sparing maintenance failed ret=%d\n", ret);
+ up_read(&cxl_region_rwsem);
+ return ret;
+ }
+ up_read(&cxl_region_rwsem);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
+static int cxl_mem_sparing_get_repair_type(struct device *dev, void *drv_data,
+ u32 *repair_type)
+{
+ struct cxl_mem_sparing_context *ctx = drv_data;
+
+ *repair_type = ctx->repair_type;
+
+ return 0;
+}
+
+static int cxl_mem_sparing_get_persist_mode_avail(struct device *dev, void *drv_data,
+ char *buf)
+{
+ /* Both soft and hard sparing persist modes can be selected. */
+ return sysfs_emit(buf, "%u %u\n", EDAC_MEM_REPAIR_SOFT, EDAC_MEM_REPAIR_HARD);
+}
+
+static int cxl_mem_sparing_get_persist_mode(struct device *dev, void *drv_data,
+ u32 *persist_mode)
+{
+ struct cxl_mem_sparing_context *ctx = drv_data;
+
+ *persist_mode = ctx->persist_mode;
+
+ return 0;
+}
+
+static int cxl_mem_sparing_set_persist_mode(struct device *dev, void *drv_data, u32 persist_mode)
+{
+ struct cxl_mem_sparing_context *ctx = drv_data;
+
+ switch (persist_mode) {
+ case EDAC_MEM_REPAIR_SOFT:
+ ctx->persist_mode = EDAC_MEM_REPAIR_SOFT;
+ return 0;
+ case EDAC_MEM_REPAIR_HARD:
+ ctx->persist_mode = EDAC_MEM_REPAIR_HARD;
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
+static int cxl_get_mem_sparing_safe_when_in_use(struct device *dev, void *drv_data,
+ u32 *safe)
+{
+ struct cxl_memdev_sparing_params params;
+ int ret;
+
+ ret = cxl_mem_sparing_get_attrs(dev, drv_data, &params);
+ if (ret)
+ return ret;
+
+ *safe = params.cap_safe_when_in_use;
+
+ return 0;
+}
+
+static int cxl_get_sparing_dpa_support(struct device *dev, void *drv_data,
+ u32 *dpa_support)
+{
+ *dpa_support = true;
+
+ return 0;
+}
+
+static int cxl_set_mem_sparing_dpa(struct device *dev, void *drv_data, u64 dpa)
+{
+ struct cxl_mem_sparing_context *ctx = drv_data;
+
+ if (!dpa)
+ return -EINVAL;
+
+ ctx->dpa = dpa;
+
+ return 0;
+}
+
+static int cxl_set_mem_sparing_nibble_mask(struct device *dev, void *drv_data, u64 nibble_mask)
+{
+ struct cxl_mem_sparing_context *ctx = drv_data;
+
+ ctx->nibble_mask = nibble_mask;
+
+ return 0;
+}
+
+static int cxl_set_mem_sparing_bank_group(struct device
*dev, void *drv_data, u32 bank_group) +{ + struct cxl_mem_sparing_context *ctx = drv_data; + + ctx->bank_group = bank_group; + + return 0; +} + +static int cxl_set_mem_sparing_bank(struct device *dev, void *drv_data, u32 bank) +{ + struct cxl_mem_sparing_context *ctx = drv_data; + + ctx->bank = bank; + + return 0; +} + +static int cxl_set_mem_sparing_rank(struct device *dev, void *drv_data, u32 rank) +{ + struct cxl_mem_sparing_context *ctx = drv_data; + + ctx->rank = rank; + + return 0; +} + +static int cxl_set_mem_sparing_row(struct device *dev, void *drv_data, u64 row) +{ + struct cxl_mem_sparing_context *ctx = drv_data; + + ctx->row = row; + + return 0; +} + +static int cxl_set_mem_sparing_column(struct device *dev, void *drv_data, u32 column) +{ + struct cxl_mem_sparing_context *ctx = drv_data; + + ctx->column = column; + + return 0; +} + +static int cxl_set_mem_sparing_channel(struct device *dev, void *drv_data, u32 channel) +{ + struct cxl_mem_sparing_context *ctx = drv_data; + + ctx->channel = channel; + + return 0; +} + +static int cxl_set_mem_sparing_sub_channel(struct device *dev, void *drv_data, u32 sub_channel) +{ + struct cxl_mem_sparing_context *ctx = drv_data; + + ctx->sub_channel = sub_channel; + + return 0; +} + +static int cxl_do_query_mem_sparing(struct device *dev, void *drv_data) +{ + return cxl_mem_sparing_set_attrs(dev, drv_data, CXL_MEM_SPARING_PARAM_DO_QUERY); +} + +static int cxl_do_mem_sparing(struct device *dev, void *drv_data) +{ + return cxl_mem_sparing_set_attrs(dev, drv_data, CXL_MEM_SPARING_PARAM_DO_REPAIR); +} + +#define RANK_OPS \ + .get_repair_type = cxl_mem_sparing_get_repair_type, \ + .get_persist_mode_avail = cxl_mem_sparing_get_persist_mode_avail, \ + .get_persist_mode = cxl_mem_sparing_get_persist_mode, \ + .set_persist_mode = cxl_mem_sparing_set_persist_mode, \ + .get_repair_safe_when_in_use = cxl_get_mem_sparing_safe_when_in_use, \ + .get_dpa_support = cxl_get_sparing_dpa_support, \ + .set_dpa = cxl_set_mem_sparing_dpa, \ + .set_nibble_mask = cxl_set_mem_sparing_nibble_mask, \ + .set_rank = cxl_set_mem_sparing_rank, \ + .set_channel = cxl_set_mem_sparing_channel, \ + .do_query = cxl_do_query_mem_sparing, \ + .do_repair = cxl_do_mem_sparing + +#define BANK_OPS \ + RANK_OPS, \ + .set_bank_group = cxl_set_mem_sparing_bank_group, \ + .set_bank = cxl_set_mem_sparing_bank + +#define ROW_OPS \ + BANK_OPS, \ + .set_row = cxl_set_mem_sparing_row + +#define CACHELINE_OPS \ + ROW_OPS, \ + .set_column = cxl_set_mem_sparing_column, \ + .set_sub_channel = cxl_set_mem_sparing_sub_channel + +static const struct edac_mem_repair_ops cxl_rank_sparing_ops = { + RANK_OPS +}; + +static const struct edac_mem_repair_ops cxl_bank_sparing_ops = { + BANK_OPS +}; + +static const struct edac_mem_repair_ops cxl_row_sparing_ops = { + ROW_OPS +}; + +static const struct edac_mem_repair_ops cxl_cacheline_sparing_ops = { + CACHELINE_OPS +}; + +struct cxl_mem_sparing_desc { + const uuid_t repair_uuid; + enum edac_mem_repair_type repair_type; + enum edac_mem_repair_persist_mode persist_mode; + enum cxl_mem_sparing_granularity granularity; + const struct edac_mem_repair_ops *repair_ops; +}; + +static struct cxl_mem_sparing_desc mem_sparing_desc[] = { + { + .repair_uuid = CXL_CACHELINE_SPARING_UUID, + .repair_type = EDAC_TYPE_CACHELINE_MEM_SPARING, + .persist_mode = EDAC_MEM_REPAIR_SOFT, + .granularity = CXL_MEM_SPARING_CACHELINE, + .repair_ops = &cxl_cacheline_sparing_ops, + }, + { + .repair_uuid = CXL_ROW_SPARING_UUID, + .repair_type = EDAC_TYPE_ROW_MEM_SPARING, + .persist_mode = 
EDAC_MEM_REPAIR_SOFT, + .granularity = CXL_MEM_SPARING_ROW, + .repair_ops = &cxl_row_sparing_ops, + }, + { + .repair_uuid = CXL_BANK_SPARING_UUID, + .repair_type = EDAC_TYPE_BANK_MEM_SPARING, + .persist_mode = EDAC_MEM_REPAIR_SOFT, + .granularity = CXL_MEM_SPARING_BANK, + .repair_ops = &cxl_bank_sparing_ops, + }, + { + .repair_uuid = CXL_RANK_SPARING_UUID, + .repair_type = EDAC_TYPE_RANK_MEM_SPARING, + .persist_mode = EDAC_MEM_REPAIR_SOFT, + .granularity = CXL_MEM_SPARING_RANK, + .repair_ops = &cxl_rank_sparing_ops, + }, +}; + int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) { struct edac_dev_feature ras_features[CXL_DEV_NUM_RAS_FEATURES]; @@ -1022,6 +1479,7 @@ int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) char cxl_dev_name[CXL_SCRUB_NAME_LEN]; struct cxl_ecs_context *cxl_ecs_ctx; struct cxl_ppr_context *cxl_sppr_ctx; + struct cxl_mem_sparing_context *cxl_sparing_ctx; int rc, i, num_ras_features = 0; int num_media_frus; u8 scrub_inst = 0, repair_inst = 0; @@ -1125,15 +1583,15 @@ int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) rc = cxl_get_supported_feature_entry(mds, &cxl_sppr_uuid, &feat_entry); if (rc < 0) - goto feat_register; + goto feat_mem_sparing; if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) - goto feat_register; + goto feat_mem_sparing; cxl_sppr_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_sppr_ctx), GFP_KERNEL); if (!cxl_sppr_ctx) - goto feat_register; + goto feat_mem_sparing; *cxl_sppr_ctx = (struct cxl_ppr_context) { .repair_uuid = cxl_sppr_uuid, .get_feat_size = feat_entry.get_feat_size, @@ -1152,6 +1610,43 @@ int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) ras_features[num_ras_features].mem_repair_ops = &cxl_sppr_ops; ras_features[num_ras_features].ctx = cxl_sppr_ctx; num_ras_features++; + + /* CXL memory sparing */ +feat_mem_sparing: + for (i = 0; i < CXL_MEM_SPARING_MAX; i++) { + rc = cxl_get_supported_feature_entry(mds, &mem_sparing_desc[i].repair_uuid, + &feat_entry); + if (rc < 0) + continue; + + if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) + continue; + + cxl_sparing_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_sparing_ctx), + GFP_KERNEL); + if (!cxl_sparing_ctx) + goto feat_register; + + *cxl_sparing_ctx = (struct cxl_mem_sparing_context) { + .repair_uuid = mem_sparing_desc[i].repair_uuid, + .get_feat_size = feat_entry.get_feat_size, + .set_feat_size = feat_entry.set_feat_size, + .get_version = feat_entry.get_feat_ver, + .set_version = feat_entry.set_feat_ver, + .set_effects = feat_entry.set_effects, + .cxlmd = cxlmd, + .repair_type = mem_sparing_desc[i].repair_type, + .persist_mode = mem_sparing_desc[i].persist_mode, + .granularity = mem_sparing_desc[i].granularity, + .instance = repair_inst++, + }; + ras_features[num_ras_features].ft_type = RAS_FEAT_MEM_REPAIR; + ras_features[num_ras_features].instance = cxl_sparing_ctx->instance; + ras_features[num_ras_features].mem_repair_ops = + mem_sparing_desc[i].repair_ops; + ras_features[num_ras_features].ctx = cxl_sparing_ctx; + num_ras_features++; + } } feat_register: