From patchwork Tue Jan 25 01:10:53 2022
X-Patchwork-Submitter: Iwona Winiarska
X-Patchwork-Id: 536626
From: Iwona Winiarska
To: linux-kernel@vger.kernel.org, openbmc@lists.ozlabs.org, Greg Kroah-Hartman
Cc: devicetree@vger.kernel.org, linux-aspeed@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org, linux-doc@vger.kernel.org, Rob Herring, Joel Stanley, Andrew Jeffery, Jean Delvare, Guenter Roeck, Arnd Bergmann, Olof Johansson, Jonathan Corbet, Borislav Petkov, Pierre-Louis Bossart, Tony Luck, Andy Shevchenko, Dan Williams, Randy Dunlap, Zev Weiss, David Muller, Dave Hansen, Billy Tsai, Iwona Winiarska, Jae Hyun Yoo
Subject: [PATCH v6 02/13] dt-bindings: Add bindings for peci-aspeed
Date: Tue, 25 Jan 2022 02:10:53 +0100
Message-Id: <20220125011104.2480133-3-iwona.winiarska@intel.com>
In-Reply-To: <20220125011104.2480133-1-iwona.winiarska@intel.com>

Add device tree bindings for the peci-aspeed controller driver.
Co-developed-by: Jae Hyun Yoo Signed-off-by: Jae Hyun Yoo Signed-off-by: Iwona Winiarska Reviewed-by: Joel Stanley --- .../devicetree/bindings/peci/peci-aspeed.yaml | 72 +++++++++++++++++++ 1 file changed, 72 insertions(+) create mode 100644 Documentation/devicetree/bindings/peci/peci-aspeed.yaml diff --git a/Documentation/devicetree/bindings/peci/peci-aspeed.yaml b/Documentation/devicetree/bindings/peci/peci-aspeed.yaml new file mode 100644 index 000000000000..1e68a801a92a --- /dev/null +++ b/Documentation/devicetree/bindings/peci/peci-aspeed.yaml @@ -0,0 +1,72 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/peci/peci-aspeed.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Aspeed PECI Bus Device Tree Bindings + +maintainers: + - Iwona Winiarska + - Jae Hyun Yoo + +allOf: + - $ref: peci-controller.yaml# + +properties: + compatible: + enum: + - aspeed,ast2400-peci + - aspeed,ast2500-peci + - aspeed,ast2600-peci + + reg: + maxItems: 1 + + interrupts: + maxItems: 1 + + clocks: + description: + Clock source for PECI controller. Should reference the external + oscillator clock. + maxItems: 1 + + resets: + maxItems: 1 + + cmd-timeout-ms: + minimum: 1 + maximum: 1000 + default: 1000 + + clock-frequency: + description: + The desired operation frequency of PECI controller in Hz. + minimum: 2000 + maximum: 2000000 + default: 1000000 + +required: + - compatible + - reg + - interrupts + - clocks + - resets + +additionalProperties: false + +examples: + - | + #include + #include + peci-controller@1e78b000 { + compatible = "aspeed,ast2600-peci"; + reg = <0x1e78b000 0x100>; + interrupts = ; + clocks = <&syscon ASPEED_CLK_GATE_REF0CLK>; + resets = <&syscon ASPEED_RESET_PECI>; + cmd-timeout-ms = <1000>; + clock-frequency = <1000000>; + }; +... 
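[Editor's note: for context, a condensed sketch, not part of the patch, of how a driver can consume the optional cmd-timeout-ms property defined by the binding above, using the binding's default and range. The foo_ name is hypothetical; the peci-aspeed driver added later in this series implements the equivalent logic in aspeed_peci_property_sanitize().]

#include <linux/device.h>
#include <linux/property.h>

/* Illustrative only: read "cmd-timeout-ms" and clamp it to the binding's range. */
static u32 foo_get_cmd_timeout_ms(struct device *dev)
{
	u32 val;

	/* The property is optional - fall back to the binding's default. */
	if (device_property_read_u32(dev, "cmd-timeout-ms", &val))
		return 1000;

	if (val < 1 || val > 1000) {
		dev_warn(dev, "invalid cmd-timeout-ms: %u, using default\n", val);
		return 1000;
	}

	return val;
}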
From patchwork Tue Jan 25 01:10:55 2022
X-Patchwork-Submitter: Iwona Winiarska
X-Patchwork-Id: 536622
From: Iwona Winiarska
To: linux-kernel@vger.kernel.org, openbmc@lists.ozlabs.org, Greg Kroah-Hartman
Cc: devicetree@vger.kernel.org, linux-aspeed@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org, linux-doc@vger.kernel.org, Rob Herring, Joel Stanley, Andrew Jeffery, Jean Delvare, Guenter Roeck, Arnd Bergmann, Olof Johansson, Jonathan Corbet, Borislav Petkov, Pierre-Louis Bossart, Tony Luck, Andy Shevchenko, Dan Williams, Randy Dunlap, Zev Weiss, David Muller, Dave Hansen, Billy Tsai, Iwona Winiarska, Jason M Bills, Jae Hyun Yoo
Subject: [PATCH v6 04/13] peci: Add core infrastructure
Date: Tue, 25 Jan 2022 02:10:55 +0100
Message-Id: <20220125011104.2480133-5-iwona.winiarska@intel.com>
In-Reply-To: <20220125011104.2480133-1-iwona.winiarska@intel.com>

Intel processors provide access for various services designed to support processor and DRAM thermal management, platform manageability and processor interface tuning and diagnostics. Those services are available via the Platform Environment Control Interface (PECI) that provides a communication channel between the processor and the Baseboard Management Controller (BMC) or other platform management device. This change introduces PECI subsystem by adding the initial core module and API for controller drivers.
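[Editor's note: for illustration only, a minimal sketch of a controller driver registering with the core API added below via devm_peci_controller_add(). The foo_ names are hypothetical; the real user is the peci-aspeed driver added later in the series.]

#include <linux/err.h>
#include <linux/module.h>
#include <linux/peci.h>
#include <linux/platform_device.h>

/* Hypothetical hardware-specific transfer callback. */
static int foo_peci_xfer(struct peci_controller *controller, u8 addr,
			 struct peci_request *req)
{
	/* Drive the bus: send req->tx.buf/req->tx.len to @addr, fill req->rx. */
	return 0;
}

static struct peci_controller_ops foo_peci_ops = {
	.xfer = foo_peci_xfer,
};

static int foo_peci_probe(struct platform_device *pdev)
{
	struct peci_controller *controller;

	/* Registers the controller and ties its lifetime to the parent device. */
	controller = devm_peci_controller_add(&pdev->dev, &foo_peci_ops);

	return PTR_ERR_OR_ZERO(controller);
}

static struct platform_driver foo_peci_driver = {
	.probe = foo_peci_probe,
	.driver = { .name = "foo-peci" },
};
module_platform_driver(foo_peci_driver);

MODULE_DESCRIPTION("Example PECI controller driver (illustrative)");
MODULE_LICENSE("GPL");
MODULE_IMPORT_NS(PECI);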
Co-developed-by: Jason M Bills Signed-off-by: Jason M Bills Co-developed-by: Jae Hyun Yoo Signed-off-by: Jae Hyun Yoo Signed-off-by: Iwona Winiarska Reviewed-by: Pierre-Louis Bossart --- MAINTAINERS | 8 ++ drivers/Kconfig | 3 + drivers/Makefile | 1 + drivers/peci/Kconfig | 15 ++++ drivers/peci/Makefile | 5 ++ drivers/peci/core.c | 158 ++++++++++++++++++++++++++++++++++++++++ drivers/peci/internal.h | 16 ++++ include/linux/peci.h | 99 +++++++++++++++++++++++++ 8 files changed, 305 insertions(+) create mode 100644 drivers/peci/Kconfig create mode 100644 drivers/peci/Makefile create mode 100644 drivers/peci/core.c create mode 100644 drivers/peci/internal.h create mode 100644 include/linux/peci.h diff --git a/MAINTAINERS b/MAINTAINERS index ea3e6c914384..aa7ae643fdb0 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -15091,6 +15091,14 @@ L: platform-driver-x86@vger.kernel.org S: Maintained F: drivers/platform/x86/peaq-wmi.c +PECI SUBSYSTEM +M: Iwona Winiarska +L: openbmc@lists.ozlabs.org (moderated for non-subscribers) +S: Supported +F: Documentation/devicetree/bindings/peci/ +F: drivers/peci/ +F: include/linux/peci.h + PENSANDO ETHERNET DRIVERS M: Shannon Nelson M: drivers@pensando.io diff --git a/drivers/Kconfig b/drivers/Kconfig index 0d399ddaa185..8d6cd5d08722 100644 --- a/drivers/Kconfig +++ b/drivers/Kconfig @@ -236,4 +236,7 @@ source "drivers/interconnect/Kconfig" source "drivers/counter/Kconfig" source "drivers/most/Kconfig" + +source "drivers/peci/Kconfig" + endmenu diff --git a/drivers/Makefile b/drivers/Makefile index a110338c860c..020780b6b4d2 100644 --- a/drivers/Makefile +++ b/drivers/Makefile @@ -187,3 +187,4 @@ obj-$(CONFIG_GNSS) += gnss/ obj-$(CONFIG_INTERCONNECT) += interconnect/ obj-$(CONFIG_COUNTER) += counter/ obj-$(CONFIG_MOST) += most/ +obj-$(CONFIG_PECI) += peci/ diff --git a/drivers/peci/Kconfig b/drivers/peci/Kconfig new file mode 100644 index 000000000000..71a4ad81225a --- /dev/null +++ b/drivers/peci/Kconfig @@ -0,0 +1,15 @@ +# SPDX-License-Identifier: GPL-2.0-only + +menuconfig PECI + tristate "PECI support" + help + The Platform Environment Control Interface (PECI) is an interface + that provides a communication channel to Intel processors and + chipset components from external monitoring or control devices. + + If you are building a Baseboard Management Controller (BMC) kernel + for Intel platform say Y here and also to the specific driver for + your adapter(s) below. If unsure say N. + + This support is also available as a module. If so, the module + will be called peci. 
diff --git a/drivers/peci/Makefile b/drivers/peci/Makefile new file mode 100644 index 000000000000..e789a354e842 --- /dev/null +++ b/drivers/peci/Makefile @@ -0,0 +1,5 @@ +# SPDX-License-Identifier: GPL-2.0-only + +# Core functionality +peci-y := core.o +obj-$(CONFIG_PECI) += peci.o diff --git a/drivers/peci/core.c b/drivers/peci/core.c new file mode 100644 index 000000000000..73ad0a47fa9d --- /dev/null +++ b/drivers/peci/core.c @@ -0,0 +1,158 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (c) 2018-2021 Intel Corporation + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "internal.h" + +static DEFINE_IDA(peci_controller_ida); + +static void peci_controller_dev_release(struct device *dev) +{ + struct peci_controller *controller = to_peci_controller(dev); + + mutex_destroy(&controller->bus_lock); + ida_free(&peci_controller_ida, controller->id); + kfree(controller); +} + +struct device_type peci_controller_type = { + .release = peci_controller_dev_release, +}; + +static struct peci_controller *peci_controller_alloc(struct device *dev, + struct peci_controller_ops *ops) +{ + struct peci_controller *controller; + int ret; + + if (!ops->xfer) + return ERR_PTR(-EINVAL); + + controller = kzalloc(sizeof(*controller), GFP_KERNEL); + if (!controller) + return ERR_PTR(-ENOMEM); + + ret = ida_alloc_max(&peci_controller_ida, U8_MAX, GFP_KERNEL); + if (ret < 0) + goto err; + controller->id = ret; + + controller->ops = ops; + + controller->dev.parent = dev; + controller->dev.bus = &peci_bus_type; + controller->dev.type = &peci_controller_type; + + device_initialize(&controller->dev); + + mutex_init(&controller->bus_lock); + + return controller; + +err: + kfree(controller); + return ERR_PTR(ret); +} + +static void unregister_controller(void *_controller) +{ + struct peci_controller *controller = _controller; + + device_unregister(&controller->dev); + + fwnode_handle_put(controller->dev.fwnode); + + pm_runtime_disable(&controller->dev); +} + +/** + * devm_peci_controller_add() - add PECI controller + * @dev: device for devm operations + * @ops: pointer to controller specific methods + * + * In final stage of its probe(), peci_controller driver calls + * devm_peci_controller_add() to register itself with the PECI bus. + * + * Return: Pointer to the newly allocated controller or ERR_PTR() in case of failure. 
+ */ +struct peci_controller *devm_peci_controller_add(struct device *dev, + struct peci_controller_ops *ops) +{ + struct peci_controller *controller; + int ret; + + controller = peci_controller_alloc(dev, ops); + if (IS_ERR(controller)) + return controller; + + ret = dev_set_name(&controller->dev, "peci-%d", controller->id); + if (ret) + goto err_put; + + pm_runtime_no_callbacks(&controller->dev); + pm_suspend_ignore_children(&controller->dev, true); + pm_runtime_enable(&controller->dev); + + device_set_node(&controller->dev, fwnode_handle_get(dev_fwnode(dev))); + + ret = device_add(&controller->dev); + if (ret) + goto err_fwnode; + + ret = devm_add_action_or_reset(dev, unregister_controller, controller); + if (ret) + return ERR_PTR(ret); + + return controller; + +err_fwnode: + fwnode_handle_put(controller->dev.fwnode); + + pm_runtime_disable(&controller->dev); + +err_put: + put_device(&controller->dev); + + return ERR_PTR(ret); +} +EXPORT_SYMBOL_NS_GPL(devm_peci_controller_add, PECI); + +struct bus_type peci_bus_type = { + .name = "peci", +}; + +static int __init peci_init(void) +{ + int ret; + + ret = bus_register(&peci_bus_type); + if (ret < 0) { + pr_err("peci: failed to register PECI bus type!\n"); + return ret; + } + + return 0; +} +module_init(peci_init); + +static void __exit peci_exit(void) +{ + bus_unregister(&peci_bus_type); +} +module_exit(peci_exit); + +MODULE_AUTHOR("Jason M Bills "); +MODULE_AUTHOR("Jae Hyun Yoo "); +MODULE_AUTHOR("Iwona Winiarska "); +MODULE_DESCRIPTION("PECI bus core module"); +MODULE_LICENSE("GPL"); diff --git a/drivers/peci/internal.h b/drivers/peci/internal.h new file mode 100644 index 000000000000..918dea745a86 --- /dev/null +++ b/drivers/peci/internal.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (c) 2018-2021 Intel Corporation */ + +#ifndef __PECI_INTERNAL_H +#define __PECI_INTERNAL_H + +#include +#include + +struct peci_controller; + +extern struct bus_type peci_bus_type; + +extern struct device_type peci_controller_type; + +#endif /* __PECI_INTERNAL_H */ diff --git a/include/linux/peci.h b/include/linux/peci.h new file mode 100644 index 000000000000..26e0a4e73b50 --- /dev/null +++ b/include/linux/peci.h @@ -0,0 +1,99 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (c) 2018-2021 Intel Corporation */ + +#ifndef __LINUX_PECI_H +#define __LINUX_PECI_H + +#include +#include +#include +#include + +/* + * Currently we don't support any PECI command over 32 bytes. + */ +#define PECI_REQUEST_MAX_BUF_SIZE 32 + +struct peci_controller; +struct peci_request; + +/** + * struct peci_controller_ops - PECI controller specific methods + * @xfer: PECI transfer function + * + * PECI controllers may have different hardware interfaces - the drivers + * implementing PECI controllers can use this structure to abstract away those + * differences by exposing a common interface for PECI core. + */ +struct peci_controller_ops { + int (*xfer)(struct peci_controller *controller, u8 addr, struct peci_request *req); +}; + +/** + * struct peci_controller - PECI controller + * @dev: device object to register PECI controller to the device model + * @ops: pointer to device specific controller operations + * @bus_lock: lock used to protect multiple callers + * @id: PECI controller ID + * + * PECI controllers usually connect to their drivers using non-PECI bus, + * such as the platform bus. + * Each PECI controller can communicate with one or more PECI devices. 
+ */ +struct peci_controller { + struct device dev; + struct peci_controller_ops *ops; + struct mutex bus_lock; /* held for the duration of xfer */ + u8 id; +}; + +struct peci_controller *devm_peci_controller_add(struct device *parent, + struct peci_controller_ops *ops); + +static inline struct peci_controller *to_peci_controller(void *d) +{ + return container_of(d, struct peci_controller, dev); +} + +/** + * struct peci_device - PECI device + * @dev: device object to register PECI device to the device model + * @controller: manages the bus segment hosting this PECI device + * @addr: address used on the PECI bus connected to the parent controller + * + * A peci_device identifies a single device (i.e. CPU) connected to a PECI bus. + * The behaviour exposed to the rest of the system is defined by the PECI driver + * managing the device. + */ +struct peci_device { + struct device dev; + u8 addr; +}; + +static inline struct peci_device *to_peci_device(struct device *d) +{ + return container_of(d, struct peci_device, dev); +} + +/** + * struct peci_request - PECI request + * @device: PECI device to which the request is sent + * @tx: TX buffer specific data + * @tx.buf: TX buffer + * @tx.len: transfer data length in bytes + * @rx: RX buffer specific data + * @rx.buf: RX buffer + * @rx.len: received data length in bytes + * + * A peci_request represents a request issued by PECI originator (TX) and + * a response received from PECI responder (RX). + */ +struct peci_request { + struct peci_device *device; + struct { + u8 buf[PECI_REQUEST_MAX_BUF_SIZE]; + u8 len; + } rx, tx; +}; + +#endif /* __LINUX_PECI_H */ From patchwork Tue Jan 25 01:10:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Iwona Winiarska X-Patchwork-Id: 536625 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05596C433EF for ; Tue, 25 Jan 2022 03:42:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1325700AbiAYDiL (ORCPT ); Mon, 24 Jan 2022 22:38:11 -0500 Received: from mga17.intel.com ([192.55.52.151]:60177 "EHLO mga17.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349258AbiAYCVo (ORCPT ); Mon, 24 Jan 2022 21:21:44 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1643077303; x=1674613303; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=DFog99p9w5RfMCl9zbzJJWS6lzmpaZgRL91ADPd3II4=; b=MWjqxI1IwpzddGUjBpEe/AodtEZxCJASJC0TZypYbMHReYA7VOE8JuxG 8dbDIvx2jnDDI6veI+YZNRX0QcBvDqPzdVuOSZUjZNQ6EbnOqIqkKdHrj XVW7qH0JTIYn4UYiVqqwkWs0vnSeM0q2It4VXFnSd1hZFoI+MNz3NUhC5 bWP1dq78oyeiHlqjN+vlCXcxy2+QDX2rfih1SbaMDiDMDwhmfPB9p3m6w 3mlw8yGeVytTRzmPriyZBDKMJK/QxQRIs7SW2ro+9/BVhiHKP92FeZ+MU ZEQhMgwOu5JkrsrDaATk+iVyAYPTj/DdZR8ASHi0REawiaxvDoBNcABo3 g==; X-IronPort-AV: E=McAfee;i="6200,9189,10237"; a="226860168" X-IronPort-AV: E=Sophos;i="5.88,313,1635231600"; d="scan'208";a="226860168" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Jan 2022 17:14:25 -0800 X-IronPort-AV: E=Sophos;i="5.88,313,1635231600"; d="scan'208";a="596969675" Received: from kerguder-mobl.ger.corp.intel.com (HELO localhost) ([10.249.158.133]) by fmsmga004-auth.fm.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Jan 2022 17:14:17 -0800 From: Iwona Winiarska To: linux-kernel@vger.kernel.org, openbmc@lists.ozlabs.org, Greg Kroah-Hartman Cc: devicetree@vger.kernel.org, linux-aspeed@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org, linux-doc@vger.kernel.org, Rob Herring , Joel Stanley , Andrew Jeffery , Jean Delvare , Guenter Roeck , Arnd Bergmann , Olof Johansson , Jonathan Corbet , Borislav Petkov , Pierre-Louis Bossart , Tony Luck , Andy Shevchenko , Dan Williams , Randy Dunlap , Zev Weiss , David Muller , Dave Hansen , Billy Tsai , Jae Hyun Yoo , Iwona Winiarska Subject: [PATCH v6 05/13] peci: Add peci-aspeed controller driver Date: Tue, 25 Jan 2022 02:10:56 +0100 Message-Id: <20220125011104.2480133-6-iwona.winiarska@intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220125011104.2480133-1-iwona.winiarska@intel.com> References: <20220125011104.2480133-1-iwona.winiarska@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org From: Jae Hyun Yoo ASPEED AST24xx/AST25xx/AST26xx SoCs support the PECI electrical interface (a.k.a PECI wire) that provides a communication channel with Intel processors. This driver allows BMC to discover devices connected to it and communicate with them using PECI protocol. Signed-off-by: Jae Hyun Yoo Co-developed-by: Iwona Winiarska Signed-off-by: Iwona Winiarska Reviewed-by: Pierre-Louis Bossart Reviewed-by: Joel Stanley --- MAINTAINERS | 8 + drivers/peci/Kconfig | 6 + drivers/peci/Makefile | 3 + drivers/peci/controller/Kconfig | 18 + drivers/peci/controller/Makefile | 3 + drivers/peci/controller/peci-aspeed.c | 599 ++++++++++++++++++++++++++ 6 files changed, 637 insertions(+) create mode 100644 drivers/peci/controller/Kconfig create mode 100644 drivers/peci/controller/Makefile create mode 100644 drivers/peci/controller/peci-aspeed.c diff --git a/MAINTAINERS b/MAINTAINERS index aa7ae643fdb0..6c9b12004e4e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2985,6 +2985,14 @@ S: Maintained F: Documentation/devicetree/bindings/net/asix,ax88796c.yaml F: drivers/net/ethernet/asix/ax88796c_* +ASPEED PECI CONTROLLER +M: Iwona Winiarska +L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers) +L: openbmc@lists.ozlabs.org (moderated for non-subscribers) +S: Supported +F: Documentation/devicetree/bindings/peci/peci-aspeed.yaml +F: drivers/peci/controller/peci-aspeed.c + ASPEED PINCTRL DRIVERS M: Andrew Jeffery L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers) diff --git a/drivers/peci/Kconfig b/drivers/peci/Kconfig index 71a4ad81225a..99279df97a78 100644 --- a/drivers/peci/Kconfig +++ b/drivers/peci/Kconfig @@ -13,3 +13,9 @@ menuconfig PECI This support is also available as a module. If so, the module will be called peci. 
+ +if PECI + +source "drivers/peci/controller/Kconfig" + +endif # PECI diff --git a/drivers/peci/Makefile b/drivers/peci/Makefile index e789a354e842..926d8df15cbd 100644 --- a/drivers/peci/Makefile +++ b/drivers/peci/Makefile @@ -3,3 +3,6 @@ # Core functionality peci-y := core.o obj-$(CONFIG_PECI) += peci.o + +# Hardware specific bus drivers +obj-y += controller/ diff --git a/drivers/peci/controller/Kconfig b/drivers/peci/controller/Kconfig new file mode 100644 index 000000000000..32bd8d685c0d --- /dev/null +++ b/drivers/peci/controller/Kconfig @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: GPL-2.0-only + +config PECI_ASPEED + tristate "ASPEED PECI support" + depends on ARCH_ASPEED || COMPILE_TEST + depends on OF + depends on HAS_IOMEM + select COMMON_CLK + help + This option enables PECI controller driver for ASPEED AST2400, + AST2500 and AST2600 SoCs. It allows BMC to discover devices + connected to it, and communicate with them using PECI protocol. + + Say Y here if your system runs on ASPEED SoC and you are using it + as BMC for Intel platform. + + This driver can also be built as a module. If so, the module will + be called peci-aspeed. diff --git a/drivers/peci/controller/Makefile b/drivers/peci/controller/Makefile new file mode 100644 index 000000000000..022c28ef1bf0 --- /dev/null +++ b/drivers/peci/controller/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0-only + +obj-$(CONFIG_PECI_ASPEED) += peci-aspeed.o diff --git a/drivers/peci/controller/peci-aspeed.c b/drivers/peci/controller/peci-aspeed.c new file mode 100644 index 000000000000..1925ddc13f00 --- /dev/null +++ b/drivers/peci/controller/peci-aspeed.c @@ -0,0 +1,599 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (c) 2012-2017 ASPEED Technology Inc. +// Copyright (c) 2018-2021 Intel Corporation + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* ASPEED PECI Registers */ +/* Control Register */ +#define ASPEED_PECI_CTRL 0x00 +#define ASPEED_PECI_CTRL_SAMPLING_MASK GENMASK(19, 16) +#define ASPEED_PECI_CTRL_RD_MODE_MASK GENMASK(13, 12) +#define ASPEED_PECI_CTRL_RD_MODE_DBG BIT(13) +#define ASPEED_PECI_CTRL_RD_MODE_COUNT BIT(12) +#define ASPEED_PECI_CTRL_CLK_SRC_HCLK BIT(11) +#define ASPEED_PECI_CTRL_CLK_DIV_MASK GENMASK(10, 8) +#define ASPEED_PECI_CTRL_INVERT_OUT BIT(7) +#define ASPEED_PECI_CTRL_INVERT_IN BIT(6) +#define ASPEED_PECI_CTRL_BUS_CONTENTION_EN BIT(5) +#define ASPEED_PECI_CTRL_PECI_EN BIT(4) +#define ASPEED_PECI_CTRL_PECI_CLK_EN BIT(0) + +/* Timing Negotiation Register */ +#define ASPEED_PECI_TIMING_NEGOTIATION 0x04 +#define ASPEED_PECI_T_NEGO_MSG_MASK GENMASK(15, 8) +#define ASPEED_PECI_T_NEGO_ADDR_MASK GENMASK(7, 0) + +/* Command Register */ +#define ASPEED_PECI_CMD 0x08 +#define ASPEED_PECI_CMD_PIN_MONITORING BIT(31) +#define ASPEED_PECI_CMD_STS_MASK GENMASK(27, 24) +#define ASPEED_PECI_CMD_STS_ADDR_T_NEGO 0x3 +#define ASPEED_PECI_CMD_IDLE_MASK \ + (ASPEED_PECI_CMD_STS_MASK | ASPEED_PECI_CMD_PIN_MONITORING) +#define ASPEED_PECI_CMD_FIRE BIT(0) + +/* Read/Write Length Register */ +#define ASPEED_PECI_RW_LENGTH 0x0c +#define ASPEED_PECI_AW_FCS_EN BIT(31) +#define ASPEED_PECI_RD_LEN_MASK GENMASK(23, 16) +#define ASPEED_PECI_WR_LEN_MASK GENMASK(15, 8) +#define ASPEED_PECI_TARGET_ADDR_MASK GENMASK(7, 0) + +/* Expected FCS Data Register */ +#define ASPEED_PECI_EXPECTED_FCS 0x10 +#define ASPEED_PECI_EXPECTED_RD_FCS_MASK GENMASK(23, 16) +#define ASPEED_PECI_EXPECTED_AW_FCS_AUTO_MASK 
GENMASK(15, 8) +#define ASPEED_PECI_EXPECTED_WR_FCS_MASK GENMASK(7, 0) + +/* Captured FCS Data Register */ +#define ASPEED_PECI_CAPTURED_FCS 0x14 +#define ASPEED_PECI_CAPTURED_RD_FCS_MASK GENMASK(23, 16) +#define ASPEED_PECI_CAPTURED_WR_FCS_MASK GENMASK(7, 0) + +/* Interrupt Register */ +#define ASPEED_PECI_INT_CTRL 0x18 +#define ASPEED_PECI_TIMING_NEGO_SEL_MASK GENMASK(31, 30) +#define ASPEED_PECI_1ST_BIT_OF_ADDR_NEGO 0 +#define ASPEED_PECI_2ND_BIT_OF_ADDR_NEGO 1 +#define ASPEED_PECI_MESSAGE_NEGO 2 +#define ASPEED_PECI_INT_MASK GENMASK(4, 0) +#define ASPEED_PECI_INT_BUS_TIMEOUT BIT(4) +#define ASPEED_PECI_INT_BUS_CONTENTION BIT(3) +#define ASPEED_PECI_INT_WR_FCS_BAD BIT(2) +#define ASPEED_PECI_INT_WR_FCS_ABORT BIT(1) +#define ASPEED_PECI_INT_CMD_DONE BIT(0) + +/* Interrupt Status Register */ +#define ASPEED_PECI_INT_STS 0x1c +#define ASPEED_PECI_INT_TIMING_RESULT_MASK GENMASK(29, 16) + /* bits[4..0]: Same bit fields in the 'Interrupt Register' */ + +/* Rx/Tx Data Buffer Registers */ +#define ASPEED_PECI_WR_DATA0 0x20 +#define ASPEED_PECI_WR_DATA1 0x24 +#define ASPEED_PECI_WR_DATA2 0x28 +#define ASPEED_PECI_WR_DATA3 0x2c +#define ASPEED_PECI_RD_DATA0 0x30 +#define ASPEED_PECI_RD_DATA1 0x34 +#define ASPEED_PECI_RD_DATA2 0x38 +#define ASPEED_PECI_RD_DATA3 0x3c +#define ASPEED_PECI_WR_DATA4 0x40 +#define ASPEED_PECI_WR_DATA5 0x44 +#define ASPEED_PECI_WR_DATA6 0x48 +#define ASPEED_PECI_WR_DATA7 0x4c +#define ASPEED_PECI_RD_DATA4 0x50 +#define ASPEED_PECI_RD_DATA5 0x54 +#define ASPEED_PECI_RD_DATA6 0x58 +#define ASPEED_PECI_RD_DATA7 0x5c +#define ASPEED_PECI_DATA_BUF_SIZE_MAX 32 + +/* Timing Negotiation */ +#define ASPEED_PECI_CLK_FREQUENCY_MIN 2000 +#define ASPEED_PECI_CLK_FREQUENCY_DEFAULT 1000000 +#define ASPEED_PECI_CLK_FREQUENCY_MAX 2000000 +#define ASPEED_PECI_RD_SAMPLING_POINT_DEFAULT 8 +/* Timeout */ +#define ASPEED_PECI_IDLE_CHECK_TIMEOUT_US (50 * USEC_PER_MSEC) +#define ASPEED_PECI_IDLE_CHECK_INTERVAL_US (10 * USEC_PER_MSEC) +#define ASPEED_PECI_CMD_TIMEOUT_MS_DEFAULT 1000 +#define ASPEED_PECI_CMD_TIMEOUT_MS_MAX 1000 + +#define ASPEED_PECI_CLK_DIV1(msg_timing) (4 * (msg_timing) + 1) +#define ASPEED_PECI_CLK_DIV2(clk_div_exp) BIT(clk_div_exp) +#define ASPEED_PECI_CLK_DIV(msg_timing, clk_div_exp) \ + (4 * ASPEED_PECI_CLK_DIV1(msg_timing) * ASPEED_PECI_CLK_DIV2(clk_div_exp)) + +struct aspeed_peci { + struct peci_controller *controller; + struct device *dev; + void __iomem *base; + struct reset_control *rst; + int irq; + spinlock_t lock; /* to sync completion status handling */ + struct completion xfer_complete; + struct clk *clk; + u32 clk_frequency; + u32 status; + u32 cmd_timeout_ms; +}; + +struct clk_aspeed_peci { + struct clk_hw hw; + struct aspeed_peci *aspeed_peci; +}; + +static void aspeed_peci_controller_enable(struct aspeed_peci *priv) +{ + u32 val = readl(priv->base + ASPEED_PECI_CTRL); + + val |= ASPEED_PECI_CTRL_PECI_CLK_EN; + val |= ASPEED_PECI_CTRL_PECI_EN; + + writel(val, priv->base + ASPEED_PECI_CTRL); +} + +static void aspeed_peci_init_regs(struct aspeed_peci *priv) +{ + u32 val; + + /* Clear interrupts */ + writel(ASPEED_PECI_INT_MASK, priv->base + ASPEED_PECI_INT_STS); + + /* Set timing negotiation mode and enable interrupts */ + val = FIELD_PREP(ASPEED_PECI_TIMING_NEGO_SEL_MASK, ASPEED_PECI_1ST_BIT_OF_ADDR_NEGO); + val |= ASPEED_PECI_INT_MASK; + writel(val, priv->base + ASPEED_PECI_INT_CTRL); + + val = FIELD_PREP(ASPEED_PECI_CTRL_SAMPLING_MASK, ASPEED_PECI_RD_SAMPLING_POINT_DEFAULT); + writel(val, priv->base + ASPEED_PECI_CTRL); +} + +static int 
aspeed_peci_check_idle(struct aspeed_peci *priv) +{ + u32 cmd_sts = readl(priv->base + ASPEED_PECI_CMD); + int ret; + + /* + * Under normal circumstances, we expect to be idle here. + * In case there were any errors/timeouts that led to the situation + * where the hardware is not in idle state - we need to reset and + * reinitialize it to avoid potential controller hang. + */ + if (FIELD_GET(ASPEED_PECI_CMD_STS_MASK, cmd_sts)) { + ret = reset_control_assert(priv->rst); + if (ret) { + dev_err(priv->dev, "cannot assert reset control\n"); + return ret; + } + + ret = reset_control_deassert(priv->rst); + if (ret) { + dev_err(priv->dev, "cannot deassert reset control\n"); + return ret; + } + + aspeed_peci_init_regs(priv); + + ret = clk_set_rate(priv->clk, priv->clk_frequency); + if (ret < 0) { + dev_err(priv->dev, "cannot set clock frequency\n"); + return ret; + } + + aspeed_peci_controller_enable(priv); + } + + return readl_poll_timeout(priv->base + ASPEED_PECI_CMD, + cmd_sts, + !(cmd_sts & ASPEED_PECI_CMD_IDLE_MASK), + ASPEED_PECI_IDLE_CHECK_INTERVAL_US, + ASPEED_PECI_IDLE_CHECK_TIMEOUT_US); +} + +static int aspeed_peci_xfer(struct peci_controller *controller, + u8 addr, struct peci_request *req) +{ + struct aspeed_peci *priv = dev_get_drvdata(controller->dev.parent); + unsigned long timeout = msecs_to_jiffies(priv->cmd_timeout_ms); + u32 peci_head; + int ret, i; + + if (req->tx.len > ASPEED_PECI_DATA_BUF_SIZE_MAX || + req->rx.len > ASPEED_PECI_DATA_BUF_SIZE_MAX) + return -EINVAL; + + /* Check command sts and bus idle state */ + ret = aspeed_peci_check_idle(priv); + if (ret) + return ret; /* -ETIMEDOUT */ + + spin_lock_irq(&priv->lock); + reinit_completion(&priv->xfer_complete); + + peci_head = FIELD_PREP(ASPEED_PECI_TARGET_ADDR_MASK, addr) | + FIELD_PREP(ASPEED_PECI_WR_LEN_MASK, req->tx.len) | + FIELD_PREP(ASPEED_PECI_RD_LEN_MASK, req->rx.len); + + writel(peci_head, priv->base + ASPEED_PECI_RW_LENGTH); + + for (i = 0; i < req->tx.len; i += 4) { + u32 reg = (i < 16 ? ASPEED_PECI_WR_DATA0 : ASPEED_PECI_WR_DATA4) + i % 16; + + writel(get_unaligned_le32(&req->tx.buf[i]), priv->base + reg); + } + +#if IS_ENABLED(CONFIG_DYNAMIC_DEBUG) + dev_dbg(priv->dev, "HEAD : %#08x\n", peci_head); + print_hex_dump_bytes("TX : ", DUMP_PREFIX_NONE, req->tx.buf, req->tx.len); +#endif + + priv->status = 0; + writel(ASPEED_PECI_CMD_FIRE, priv->base + ASPEED_PECI_CMD); + spin_unlock_irq(&priv->lock); + + ret = wait_for_completion_interruptible_timeout(&priv->xfer_complete, timeout); + if (ret < 0) + return ret; + + if (ret == 0) { + dev_dbg(priv->dev, "timeout waiting for a response\n"); + return -ETIMEDOUT; + } + + spin_lock_irq(&priv->lock); + + if (priv->status != ASPEED_PECI_INT_CMD_DONE) { + spin_unlock_irq(&priv->lock); + dev_dbg(priv->dev, "no valid response, status: %#02x\n", priv->status); + return -EIO; + } + + spin_unlock_irq(&priv->lock); + + /* + * We need to use dword reads for register access, make sure that the + * buffer size is multiple of 4-bytes. + */ + BUILD_BUG_ON(PECI_REQUEST_MAX_BUF_SIZE % 4); + + for (i = 0; i < req->rx.len; i += 4) { + u32 reg = (i < 16 ? 
ASPEED_PECI_RD_DATA0 : ASPEED_PECI_RD_DATA4) + i % 16; + u32 rx_data = readl(priv->base + reg); + + put_unaligned_le32(rx_data, &req->rx.buf[i]); + } + +#if IS_ENABLED(CONFIG_DYNAMIC_DEBUG) + print_hex_dump_bytes("RX : ", DUMP_PREFIX_NONE, req->rx.buf, req->rx.len); +#endif + return 0; +} + +static irqreturn_t aspeed_peci_irq_handler(int irq, void *arg) +{ + struct aspeed_peci *priv = arg; + u32 status; + + spin_lock(&priv->lock); + status = readl(priv->base + ASPEED_PECI_INT_STS); + writel(status, priv->base + ASPEED_PECI_INT_STS); + priv->status |= (status & ASPEED_PECI_INT_MASK); + + /* + * All commands should be ended up with a ASPEED_PECI_INT_CMD_DONE bit + * set even in an error case. + */ + if (status & ASPEED_PECI_INT_CMD_DONE) + complete(&priv->xfer_complete); + + writel(0, priv->base + ASPEED_PECI_CMD); + + spin_unlock(&priv->lock); + + return IRQ_HANDLED; +} + +static void clk_aspeed_peci_find_div_values(unsigned long rate, int *msg_timing, int *clk_div_exp) +{ + unsigned long best_diff = ~0ul, diff; + int msg_timing_temp, clk_div_exp_temp, i, j; + + for (i = 1; i <= 255; i++) + for (j = 0; j < 8; j++) { + diff = abs(rate - ASPEED_PECI_CLK_DIV1(i) * ASPEED_PECI_CLK_DIV2(j)); + if (diff < best_diff) { + msg_timing_temp = i; + clk_div_exp_temp = j; + best_diff = diff; + } + } + + *msg_timing = msg_timing_temp; + *clk_div_exp = clk_div_exp_temp; +} + +static int clk_aspeed_peci_get_div(unsigned long rate, const unsigned long *prate) +{ + unsigned long this_rate = *prate / (4 * rate); + int msg_timing, clk_div_exp; + + clk_aspeed_peci_find_div_values(this_rate, &msg_timing, &clk_div_exp); + + return ASPEED_PECI_CLK_DIV(msg_timing, clk_div_exp); +} + +static int clk_aspeed_peci_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long prate) +{ + struct clk_aspeed_peci *peci_clk = container_of(hw, struct clk_aspeed_peci, hw); + struct aspeed_peci *aspeed_peci = peci_clk->aspeed_peci; + unsigned long this_rate = prate / (4 * rate); + int clk_div_exp, msg_timing; + u32 val; + + clk_aspeed_peci_find_div_values(this_rate, &msg_timing, &clk_div_exp); + + val = readl(aspeed_peci->base + ASPEED_PECI_CTRL); + val |= FIELD_PREP(ASPEED_PECI_CTRL_CLK_DIV_MASK, clk_div_exp); + writel(val, aspeed_peci->base + ASPEED_PECI_CTRL); + + val = FIELD_PREP(ASPEED_PECI_T_NEGO_MSG_MASK, msg_timing); + val |= FIELD_PREP(ASPEED_PECI_T_NEGO_ADDR_MASK, msg_timing); + writel(val, aspeed_peci->base + ASPEED_PECI_TIMING_NEGOTIATION); + + return 0; +} + +static long clk_aspeed_peci_round_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *prate) +{ + int div = clk_aspeed_peci_get_div(rate, prate); + + return DIV_ROUND_UP_ULL(*prate, div); +} + +static unsigned long clk_aspeed_peci_recalc_rate(struct clk_hw *hw, unsigned long prate) +{ + struct clk_aspeed_peci *peci_clk = container_of(hw, struct clk_aspeed_peci, hw); + struct aspeed_peci *aspeed_peci = peci_clk->aspeed_peci; + int div, msg_timing, addr_timing, clk_div_exp; + u32 reg; + + reg = readl(aspeed_peci->base + ASPEED_PECI_TIMING_NEGOTIATION); + msg_timing = FIELD_GET(ASPEED_PECI_T_NEGO_MSG_MASK, reg); + addr_timing = FIELD_GET(ASPEED_PECI_T_NEGO_ADDR_MASK, reg); + + if (msg_timing != addr_timing) + return 0; + + reg = readl(aspeed_peci->base + ASPEED_PECI_CTRL); + clk_div_exp = FIELD_GET(ASPEED_PECI_CTRL_CLK_DIV_MASK, reg); + + div = ASPEED_PECI_CLK_DIV(msg_timing, clk_div_exp); + + return DIV_ROUND_UP_ULL(prate, div); +} + +static const struct clk_ops clk_aspeed_peci_ops = { + .set_rate = clk_aspeed_peci_set_rate, + .round_rate = 
clk_aspeed_peci_round_rate, + .recalc_rate = clk_aspeed_peci_recalc_rate, +}; + +/* + * PECI HW contains a clock divider which is a combination of: + * div0: 4 (fixed divider) + * div1: x + 1 + * div2: 1 << y + * In other words, out_clk = in_clk / (div0 * div1 * div2) + * The resulting frequency is used by PECI Controller to drive the PECI bus to + * negotiate optimal transfer rate. + */ +static struct clk *devm_aspeed_peci_register_clk_div(struct device *dev, struct clk *parent, + struct aspeed_peci *priv) +{ + struct clk_aspeed_peci *peci_clk; + struct clk_init_data init; + const char *parent_name; + char name[32]; + int ret; + + snprintf(name, sizeof(name), "%s_div", dev_name(dev)); + + parent_name = __clk_get_name(parent); + + init.ops = &clk_aspeed_peci_ops; + init.name = name; + init.parent_names = (const char* []) { parent_name }; + init.num_parents = 1; + init.flags = 0; + + peci_clk = devm_kzalloc(dev, sizeof(struct clk_aspeed_peci), GFP_KERNEL); + if (!peci_clk) + return ERR_PTR(-ENOMEM); + + peci_clk->hw.init = &init; + peci_clk->aspeed_peci = priv; + + ret = devm_clk_hw_register(dev, &peci_clk->hw); + if (ret) + return ERR_PTR(ret); + + return peci_clk->hw.clk; +} + +static void aspeed_peci_property_sanitize(struct device *dev, const char *propname, + u32 min, u32 max, u32 default_val, u32 *propval) +{ + u32 val; + int ret; + + ret = device_property_read_u32(dev, propname, &val); + if (ret) { + val = default_val; + } else if (val > max || val < min) { + dev_warn(dev, "invalid %s: %u, falling back to: %u\n", + propname, val, default_val); + + val = default_val; + } + + *propval = val; +} + +static void aspeed_peci_property_setup(struct aspeed_peci *priv) +{ + aspeed_peci_property_sanitize(priv->dev, "clock-frequency", + ASPEED_PECI_CLK_FREQUENCY_MIN, ASPEED_PECI_CLK_FREQUENCY_MAX, + ASPEED_PECI_CLK_FREQUENCY_DEFAULT, &priv->clk_frequency); + aspeed_peci_property_sanitize(priv->dev, "cmd-timeout-ms", + 1, ASPEED_PECI_CMD_TIMEOUT_MS_MAX, + ASPEED_PECI_CMD_TIMEOUT_MS_DEFAULT, &priv->cmd_timeout_ms); +} + +static struct peci_controller_ops aspeed_ops = { + .xfer = aspeed_peci_xfer, +}; + +static void aspeed_peci_reset_control_release(void *data) +{ + reset_control_assert(data); +} + +static int devm_aspeed_peci_reset_control_deassert(struct device *dev, struct reset_control *rst) +{ + int ret; + + ret = reset_control_deassert(rst); + if (ret) + return ret; + + return devm_add_action_or_reset(dev, aspeed_peci_reset_control_release, rst); +} + +static void aspeed_peci_clk_release(void *data) +{ + clk_disable_unprepare(data); +} + +static int devm_aspeed_peci_clk_enable(struct device *dev, struct clk *clk) +{ + int ret; + + ret = clk_prepare_enable(clk); + if (ret) + return ret; + + return devm_add_action_or_reset(dev, aspeed_peci_clk_release, clk); +} + +static int aspeed_peci_probe(struct platform_device *pdev) +{ + struct peci_controller *controller; + struct aspeed_peci *priv; + struct clk *ref_clk; + int ret; + + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->dev = &pdev->dev; + dev_set_drvdata(priv->dev, priv); + + priv->base = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(priv->base)) + return PTR_ERR(priv->base); + + priv->irq = platform_get_irq(pdev, 0); + if (!priv->irq) + return priv->irq; + + ret = devm_request_irq(&pdev->dev, priv->irq, aspeed_peci_irq_handler, + 0, "peci-aspeed", priv); + if (ret) + return ret; + + init_completion(&priv->xfer_complete); + spin_lock_init(&priv->lock); + + priv->rst = 
devm_reset_control_get(&pdev->dev, NULL); + if (IS_ERR(priv->rst)) + return dev_err_probe(priv->dev, PTR_ERR(priv->rst), + "failed to get reset control\n"); + + ret = devm_aspeed_peci_reset_control_deassert(priv->dev, priv->rst); + if (ret) + return dev_err_probe(priv->dev, ret, "cannot deassert reset control\n"); + + aspeed_peci_property_setup(priv); + + aspeed_peci_init_regs(priv); + + ref_clk = devm_clk_get(priv->dev, NULL); + if (IS_ERR(ref_clk)) + return dev_err_probe(priv->dev, PTR_ERR(ref_clk), "failed to get ref clock\n"); + + priv->clk = devm_aspeed_peci_register_clk_div(priv->dev, ref_clk, priv); + if (IS_ERR(priv->clk)) + return dev_err_probe(priv->dev, PTR_ERR(priv->clk), "cannot register clock\n"); + + ret = clk_set_rate(priv->clk, priv->clk_frequency); + if (ret < 0) + return dev_err_probe(priv->dev, ret, "cannot set clock frequency\n"); + + ret = devm_aspeed_peci_clk_enable(priv->dev, priv->clk); + if (ret) + return dev_err_probe(priv->dev, ret, "failed to enable clock\n"); + + aspeed_peci_controller_enable(priv); + + controller = devm_peci_controller_add(priv->dev, &aspeed_ops); + if (IS_ERR(controller)) + return dev_err_probe(priv->dev, PTR_ERR(controller), + "failed to add aspeed peci controller\n"); + + priv->controller = controller; + + return 0; +} + +static const struct of_device_id aspeed_peci_of_table[] = { + { .compatible = "aspeed,ast2400-peci", }, + { .compatible = "aspeed,ast2500-peci", }, + { .compatible = "aspeed,ast2600-peci", }, + { } +}; +MODULE_DEVICE_TABLE(of, aspeed_peci_of_table); + +static struct platform_driver aspeed_peci_driver = { + .probe = aspeed_peci_probe, + .driver = { + .name = "peci-aspeed", + .of_match_table = aspeed_peci_of_table, + }, +}; +module_platform_driver(aspeed_peci_driver); + +MODULE_AUTHOR("Ryan Chen "); +MODULE_AUTHOR("Jae Hyun Yoo "); +MODULE_DESCRIPTION("ASPEED PECI driver"); +MODULE_LICENSE("GPL"); +MODULE_IMPORT_NS(PECI); From patchwork Tue Jan 25 01:11:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Iwona Winiarska X-Patchwork-Id: 536624 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7BE13C433FE for ; Tue, 25 Jan 2022 03:42:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1325730AbiAYDiS (ORCPT ); Mon, 24 Jan 2022 22:38:18 -0500 Received: from mga11.intel.com ([192.55.52.93]:6676 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S3420054AbiAYCW0 (ORCPT ); Mon, 24 Jan 2022 21:22:26 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1643077346; x=1674613346; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=uOIuSyrlEWyfDI/Dp0D5p3sU+DC/g595+kST+UoLmG8=; b=WWC8TD+Vsv2z9eU3xcD/j3e835TI7L4vDV13jq72VYC3UASG72rODial qSikPDKYRZztRZmSOOViXvqoAMnD+/2CgPhmZFVLYwxFx4IwmXau931dF yxlw4mOQ/kaIOEBmVbIZDmo6HFlosMrHHZNrC+hhgSb62q48lxK/by5G9 YaISEfZUVP46CBe0GEeA1bcIDyaGnVMI6DxCL8shKPZWmrOzBuCn6B/41 +h4CsIb8kIo8An2qFQpmuurUepv+kG3H9IUylvazjBWKE5mb3s/Xbvy2C h8eJRbBFPI4WmGXbcZEBvpg9tXtIUZbGDWcVLhX6U+3aTaj0ga5B8erSH Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10237"; a="243788655" X-IronPort-AV: E=Sophos;i="5.88,313,1635231600"; d="scan'208";a="243788655" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by 
fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Jan 2022 17:17:19 -0800 X-IronPort-AV: E=Sophos;i="5.88,313,1635231600"; d="scan'208";a="532266572" Received: from kerguder-mobl.ger.corp.intel.com (HELO localhost) ([10.249.158.133]) by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Jan 2022 17:17:10 -0800 From: Iwona Winiarska To: linux-kernel@vger.kernel.org, openbmc@lists.ozlabs.org, Greg Kroah-Hartman Cc: devicetree@vger.kernel.org, linux-aspeed@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org, linux-doc@vger.kernel.org, Rob Herring , Joel Stanley , Andrew Jeffery , Jean Delvare , Guenter Roeck , Arnd Bergmann , Olof Johansson , Jonathan Corbet , Borislav Petkov , Pierre-Louis Bossart , Tony Luck , Andy Shevchenko , Dan Williams , Randy Dunlap , Zev Weiss , David Muller , Dave Hansen , Billy Tsai , Iwona Winiarska , Jae Hyun Yoo Subject: [PATCH v6 11/13] hwmon: peci: Add dimmtemp driver Date: Tue, 25 Jan 2022 02:11:02 +0100 Message-Id: <20220125011104.2480133-12-iwona.winiarska@intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220125011104.2480133-1-iwona.winiarska@intel.com> References: <20220125011104.2480133-1-iwona.winiarska@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Add peci-dimmtemp driver for Temperature Sensor on DIMM readings that are accessible via the processor PECI interface. The main use case for the driver (and PECI interface) is out-of-band management, where we're able to obtain thermal readings from an external entity connected with PECI, e.g. BMC on server platforms. Co-developed-by: Jae Hyun Yoo Signed-off-by: Jae Hyun Yoo Signed-off-by: Iwona Winiarska Reviewed-by: Pierre-Louis Bossart Acked-by: Guenter Roeck --- drivers/hwmon/peci/Kconfig | 13 + drivers/hwmon/peci/Makefile | 2 + drivers/hwmon/peci/dimmtemp.c | 630 ++++++++++++++++++++++++++++++++++ 3 files changed, 645 insertions(+) create mode 100644 drivers/hwmon/peci/dimmtemp.c diff --git a/drivers/hwmon/peci/Kconfig b/drivers/hwmon/peci/Kconfig index e10eed68d70a..9d32a57badfe 100644 --- a/drivers/hwmon/peci/Kconfig +++ b/drivers/hwmon/peci/Kconfig @@ -14,5 +14,18 @@ config SENSORS_PECI_CPUTEMP This driver can also be built as a module. If so, the module will be called peci-cputemp. +config SENSORS_PECI_DIMMTEMP + tristate "PECI DIMM temperature monitoring client" + depends on PECI + select SENSORS_PECI + select PECI_CPU + help + If you say yes here you get support for the generic Intel PECI hwmon + driver which provides Temperature Sensor on DIMM readings that are + accessible via the processor PECI interface. + + This driver can also be built as a module. If so, the module + will be called peci-dimmtemp. 
+ config SENSORS_PECI tristate diff --git a/drivers/hwmon/peci/Makefile b/drivers/hwmon/peci/Makefile index e8a0ada5ab1f..191cfa0227f3 100644 --- a/drivers/hwmon/peci/Makefile +++ b/drivers/hwmon/peci/Makefile @@ -1,5 +1,7 @@ # SPDX-License-Identifier: GPL-2.0-only peci-cputemp-y := cputemp.o +peci-dimmtemp-y := dimmtemp.o obj-$(CONFIG_SENSORS_PECI_CPUTEMP) += peci-cputemp.o +obj-$(CONFIG_SENSORS_PECI_DIMMTEMP) += peci-dimmtemp.o diff --git a/drivers/hwmon/peci/dimmtemp.c b/drivers/hwmon/peci/dimmtemp.c new file mode 100644 index 000000000000..c8222354c005 --- /dev/null +++ b/drivers/hwmon/peci/dimmtemp.c @@ -0,0 +1,630 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (c) 2018-2021 Intel Corporation + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "common.h" + +#define DIMM_MASK_CHECK_DELAY_JIFFIES msecs_to_jiffies(5000) + +/* Max number of channel ranks and DIMM index per channel */ +#define CHAN_RANK_MAX_ON_HSX 8 +#define DIMM_IDX_MAX_ON_HSX 3 +#define CHAN_RANK_MAX_ON_BDX 4 +#define DIMM_IDX_MAX_ON_BDX 3 +#define CHAN_RANK_MAX_ON_BDXD 2 +#define DIMM_IDX_MAX_ON_BDXD 2 +#define CHAN_RANK_MAX_ON_SKX 6 +#define DIMM_IDX_MAX_ON_SKX 2 +#define CHAN_RANK_MAX_ON_ICX 8 +#define DIMM_IDX_MAX_ON_ICX 2 +#define CHAN_RANK_MAX_ON_ICXD 4 +#define DIMM_IDX_MAX_ON_ICXD 2 + +#define CHAN_RANK_MAX CHAN_RANK_MAX_ON_HSX +#define DIMM_IDX_MAX DIMM_IDX_MAX_ON_HSX +#define DIMM_NUMS_MAX (CHAN_RANK_MAX * DIMM_IDX_MAX) + +#define CPU_SEG_MASK GENMASK(23, 16) +#define GET_CPU_SEG(x) (((x) & CPU_SEG_MASK) >> 16) +#define CPU_BUS_MASK GENMASK(7, 0) +#define GET_CPU_BUS(x) ((x) & CPU_BUS_MASK) + +#define DIMM_TEMP_MAX GENMASK(15, 8) +#define DIMM_TEMP_CRIT GENMASK(23, 16) +#define GET_TEMP_MAX(x) (((x) & DIMM_TEMP_MAX) >> 8) +#define GET_TEMP_CRIT(x) (((x) & DIMM_TEMP_CRIT) >> 16) + +#define NO_DIMM_RETRY_COUNT_MAX 5 + +struct peci_dimmtemp; + +struct dimm_info { + int chan_rank_max; + int dimm_idx_max; + u8 min_peci_revision; + int (*read_thresholds)(struct peci_dimmtemp *priv, int dimm_order, + int chan_rank, u32 *data); +}; + +struct peci_dimm_thresholds { + long temp_max; + long temp_crit; + struct peci_sensor_state state; +}; + +enum peci_dimm_threshold_type { + temp_max_type, + temp_crit_type, +}; + +struct peci_dimmtemp { + struct peci_device *peci_dev; + struct device *dev; + const char *name; + const struct dimm_info *gen_info; + struct delayed_work detect_work; + struct { + struct peci_sensor_data temp; + struct peci_dimm_thresholds thresholds; + } dimm[DIMM_NUMS_MAX]; + char **dimmtemp_label; + DECLARE_BITMAP(dimm_mask, DIMM_NUMS_MAX); + u8 no_dimm_retry_count; +}; + +static u8 __dimm_temp(u32 reg, int dimm_order) +{ + return (reg >> (dimm_order * 8)) & 0xff; +} + +static int get_dimm_temp(struct peci_dimmtemp *priv, int dimm_no, long *val) +{ + int dimm_order = dimm_no % priv->gen_info->dimm_idx_max; + int chan_rank = dimm_no / priv->gen_info->dimm_idx_max; + int ret = 0; + u32 data; + + mutex_lock(&priv->dimm[dimm_no].temp.state.lock); + if (!peci_sensor_need_update(&priv->dimm[dimm_no].temp.state)) + goto skip_update; + + ret = peci_pcs_read(priv->peci_dev, PECI_PCS_DDR_DIMM_TEMP, chan_rank, &data); + if (ret) + goto unlock; + + priv->dimm[dimm_no].temp.value = __dimm_temp(data, dimm_order) * MILLIDEGREE_PER_DEGREE; + + peci_sensor_mark_updated(&priv->dimm[dimm_no].temp.state); + +skip_update: + *val = priv->dimm[dimm_no].temp.value; +unlock: + mutex_unlock(&priv->dimm[dimm_no].temp.state.lock); + return ret; +} + +static int 
update_thresholds(struct peci_dimmtemp *priv, int dimm_no) +{ + int dimm_order = dimm_no % priv->gen_info->dimm_idx_max; + int chan_rank = dimm_no / priv->gen_info->dimm_idx_max; + u32 data; + int ret; + + if (!peci_sensor_need_update(&priv->dimm[dimm_no].thresholds.state)) + return 0; + + ret = priv->gen_info->read_thresholds(priv, dimm_order, chan_rank, &data); + if (ret == -ENODATA) /* Use default or previous value */ + return 0; + if (ret) + return ret; + + priv->dimm[dimm_no].thresholds.temp_max = GET_TEMP_MAX(data) * MILLIDEGREE_PER_DEGREE; + priv->dimm[dimm_no].thresholds.temp_crit = GET_TEMP_CRIT(data) * MILLIDEGREE_PER_DEGREE; + + peci_sensor_mark_updated(&priv->dimm[dimm_no].thresholds.state); + + return 0; +} + +static int get_dimm_thresholds(struct peci_dimmtemp *priv, enum peci_dimm_threshold_type type, + int dimm_no, long *val) +{ + int ret; + + mutex_lock(&priv->dimm[dimm_no].thresholds.state.lock); + ret = update_thresholds(priv, dimm_no); + if (ret) + goto unlock; + + switch (type) { + case temp_max_type: + *val = priv->dimm[dimm_no].thresholds.temp_max; + break; + case temp_crit_type: + *val = priv->dimm[dimm_no].thresholds.temp_crit; + break; + default: + ret = -EOPNOTSUPP; + break; + } +unlock: + mutex_unlock(&priv->dimm[dimm_no].thresholds.state.lock); + + return ret; +} + +static int dimmtemp_read_string(struct device *dev, + enum hwmon_sensor_types type, + u32 attr, int channel, const char **str) +{ + struct peci_dimmtemp *priv = dev_get_drvdata(dev); + + if (attr != hwmon_temp_label) + return -EOPNOTSUPP; + + *str = (const char *)priv->dimmtemp_label[channel]; + + return 0; +} + +static int dimmtemp_read(struct device *dev, enum hwmon_sensor_types type, + u32 attr, int channel, long *val) +{ + struct peci_dimmtemp *priv = dev_get_drvdata(dev); + + switch (attr) { + case hwmon_temp_input: + return get_dimm_temp(priv, channel, val); + case hwmon_temp_max: + return get_dimm_thresholds(priv, temp_max_type, channel, val); + case hwmon_temp_crit: + return get_dimm_thresholds(priv, temp_crit_type, channel, val); + default: + break; + } + + return -EOPNOTSUPP; +} + +static umode_t dimmtemp_is_visible(const void *data, enum hwmon_sensor_types type, + u32 attr, int channel) +{ + const struct peci_dimmtemp *priv = data; + + if (test_bit(channel, priv->dimm_mask)) + return 0444; + + return 0; +} + +static const struct hwmon_ops peci_dimmtemp_ops = { + .is_visible = dimmtemp_is_visible, + .read_string = dimmtemp_read_string, + .read = dimmtemp_read, +}; + +static int check_populated_dimms(struct peci_dimmtemp *priv) +{ + int chan_rank_max = priv->gen_info->chan_rank_max; + int dimm_idx_max = priv->gen_info->dimm_idx_max; + u32 chan_rank_empty = 0; + u64 dimm_mask = 0; + int chan_rank, dimm_idx, ret; + u32 pcs; + + BUILD_BUG_ON(BITS_PER_TYPE(chan_rank_empty) < CHAN_RANK_MAX); + BUILD_BUG_ON(BITS_PER_TYPE(dimm_mask) < DIMM_NUMS_MAX); + if (chan_rank_max * dimm_idx_max > DIMM_NUMS_MAX) { + WARN_ONCE(1, "Unsupported number of DIMMs - chan_rank_max: %d, dimm_idx_max: %d", + chan_rank_max, dimm_idx_max); + return -EINVAL; + } + + for (chan_rank = 0; chan_rank < chan_rank_max; chan_rank++) { + ret = peci_pcs_read(priv->peci_dev, PECI_PCS_DDR_DIMM_TEMP, chan_rank, &pcs); + if (ret) { + /* + * Overall, we expect either success or -EINVAL in + * order to determine whether DIMM is populated or not. + * For anything else we fall back to deferring the + * detection to be performed at a later point in time. 
+ */ + if (ret == -EINVAL) { + chan_rank_empty |= BIT(chan_rank); + continue; + } + + return -EAGAIN; + } + + for (dimm_idx = 0; dimm_idx < dimm_idx_max; dimm_idx++) + if (__dimm_temp(pcs, dimm_idx)) + dimm_mask |= BIT(chan_rank * dimm_idx_max + dimm_idx); + } + + /* + * If we got all -EINVALs, it means that the CPU doesn't have any + * DIMMs. Unfortunately, it may also happen at the very start of + * host platform boot. Retrying a couple of times lets us make sure + * that the state is persistent. + */ + if (chan_rank_empty == GENMASK(chan_rank_max - 1, 0)) { + if (priv->no_dimm_retry_count < NO_DIMM_RETRY_COUNT_MAX) { + priv->no_dimm_retry_count++; + + return -EAGAIN; + } + + return -ENODEV; + } + + /* + * It's possible that memory training is not done yet. In this case we + * defer the detection to be performed at a later point in time. + */ + if (!dimm_mask) { + priv->no_dimm_retry_count = 0; + return -EAGAIN; + } + + dev_dbg(priv->dev, "Scanned populated DIMMs: %#llx\n", dimm_mask); + + bitmap_from_u64(priv->dimm_mask, dimm_mask); + + return 0; +} + +static int create_dimm_temp_label(struct peci_dimmtemp *priv, int chan) +{ + int rank = chan / priv->gen_info->dimm_idx_max; + int idx = chan % priv->gen_info->dimm_idx_max; + + priv->dimmtemp_label[chan] = devm_kasprintf(priv->dev, GFP_KERNEL, + "DIMM %c%d", 'A' + rank, + idx + 1); + if (!priv->dimmtemp_label[chan]) + return -ENOMEM; + + return 0; +} + +static const u32 peci_dimmtemp_temp_channel_config[] = { + [0 ... DIMM_NUMS_MAX - 1] = HWMON_T_LABEL | HWMON_T_INPUT | HWMON_T_MAX | HWMON_T_CRIT, + 0 +}; + +static const struct hwmon_channel_info peci_dimmtemp_temp_channel = { + .type = hwmon_temp, + .config = peci_dimmtemp_temp_channel_config, +}; + +static const struct hwmon_channel_info *peci_dimmtemp_temp_info[] = { + &peci_dimmtemp_temp_channel, + NULL +}; + +static const struct hwmon_chip_info peci_dimmtemp_chip_info = { + .ops = &peci_dimmtemp_ops, + .info = peci_dimmtemp_temp_info, +}; + +static int create_dimm_temp_info(struct peci_dimmtemp *priv) +{ + int ret, i, channels; + struct device *dev; + + /* + * We expect to either find populated DIMMs and carry on with creating + * sensors, or find out that there are no DIMMs populated. + * All other states mean that the platform never reached the state that + * allows to check DIMM state - causing us to retry later on. 
+ */ + ret = check_populated_dimms(priv); + if (ret == -ENODEV) { + dev_dbg(priv->dev, "No DIMMs found\n"); + return 0; + } else if (ret) { + schedule_delayed_work(&priv->detect_work, DIMM_MASK_CHECK_DELAY_JIFFIES); + dev_dbg(priv->dev, "Deferred populating DIMM temp info\n"); + return ret; + } + + channels = priv->gen_info->chan_rank_max * priv->gen_info->dimm_idx_max; + + priv->dimmtemp_label = devm_kzalloc(priv->dev, channels * sizeof(char *), GFP_KERNEL); + if (!priv->dimmtemp_label) + return -ENOMEM; + + for_each_set_bit(i, priv->dimm_mask, DIMM_NUMS_MAX) { + ret = create_dimm_temp_label(priv, i); + if (ret) + return ret; + mutex_init(&priv->dimm[i].thresholds.state.lock); + mutex_init(&priv->dimm[i].temp.state.lock); + } + + dev = devm_hwmon_device_register_with_info(priv->dev, priv->name, priv, + &peci_dimmtemp_chip_info, NULL); + if (IS_ERR(dev)) { + dev_err(priv->dev, "Failed to register hwmon device\n"); + return PTR_ERR(dev); + } + + dev_dbg(priv->dev, "%s: sensor '%s'\n", dev_name(dev), priv->name); + + return 0; +} + +static void create_dimm_temp_info_delayed(struct work_struct *work) +{ + struct peci_dimmtemp *priv = container_of(to_delayed_work(work), + struct peci_dimmtemp, + detect_work); + int ret; + + ret = create_dimm_temp_info(priv); + if (ret && ret != -EAGAIN) + dev_err(priv->dev, "Failed to populate DIMM temp info\n"); +} + +static void remove_delayed_work(void *_priv) +{ + struct peci_dimmtemp *priv = _priv; + + cancel_delayed_work_sync(&priv->detect_work); +} + +static int peci_dimmtemp_probe(struct auxiliary_device *adev, const struct auxiliary_device_id *id) +{ + struct device *dev = &adev->dev; + struct peci_device *peci_dev = to_peci_device(dev->parent); + struct peci_dimmtemp *priv; + int ret; + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->name = devm_kasprintf(dev, GFP_KERNEL, "peci_dimmtemp.cpu%d", + peci_dev->info.socket_id); + if (!priv->name) + return -ENOMEM; + + priv->dev = dev; + priv->peci_dev = peci_dev; + priv->gen_info = (const struct dimm_info *)id->driver_data; + + /* + * This is just a sanity check. Since we're using commands that are + * guaranteed to be supported on a given platform, we should never see + * revision lower than expected. 
+ */ + if (peci_dev->info.peci_revision < priv->gen_info->min_peci_revision) + dev_warn(priv->dev, + "Unexpected PECI revision %#x, some features may be unavailable\n", + peci_dev->info.peci_revision); + + INIT_DELAYED_WORK(&priv->detect_work, create_dimm_temp_info_delayed); + + ret = devm_add_action_or_reset(priv->dev, remove_delayed_work, priv); + if (ret) + return ret; + + ret = create_dimm_temp_info(priv); + if (ret && ret != -EAGAIN) { + dev_err(dev, "Failed to populate DIMM temp info\n"); + return ret; + } + + return 0; +} + +static int +read_thresholds_hsx(struct peci_dimmtemp *priv, int dimm_order, int chan_rank, u32 *data) +{ + u8 dev, func; + u16 reg; + int ret; + + /* + * Device 20, Function 0: IMC 0 channel 0 -> rank 0 + * Device 20, Function 1: IMC 0 channel 1 -> rank 1 + * Device 21, Function 0: IMC 0 channel 2 -> rank 2 + * Device 21, Function 1: IMC 0 channel 3 -> rank 3 + * Device 23, Function 0: IMC 1 channel 0 -> rank 4 + * Device 23, Function 1: IMC 1 channel 1 -> rank 5 + * Device 24, Function 0: IMC 1 channel 2 -> rank 6 + * Device 24, Function 1: IMC 1 channel 3 -> rank 7 + */ + dev = 20 + chan_rank / 2 + chan_rank / 4; + func = chan_rank % 2; + reg = 0x120 + dimm_order * 4; + + ret = peci_pci_local_read(priv->peci_dev, 1, dev, func, reg, data); + if (ret) + return ret; + + return 0; +} + +static int +read_thresholds_bdxd(struct peci_dimmtemp *priv, int dimm_order, int chan_rank, u32 *data) +{ + u8 dev, func; + u16 reg; + int ret; + + /* + * Device 10, Function 2: IMC 0 channel 0 -> rank 0 + * Device 10, Function 6: IMC 0 channel 1 -> rank 1 + * Device 12, Function 2: IMC 1 channel 0 -> rank 2 + * Device 12, Function 6: IMC 1 channel 1 -> rank 3 + */ + dev = 10 + chan_rank / 2 * 2; + func = (chan_rank % 2) ? 6 : 2; + reg = 0x120 + dimm_order * 4; + + ret = peci_pci_local_read(priv->peci_dev, 2, dev, func, reg, data); + if (ret) + return ret; + + return 0; +} + +static int +read_thresholds_skx(struct peci_dimmtemp *priv, int dimm_order, int chan_rank, u32 *data) +{ + u8 dev, func; + u16 reg; + int ret; + + /* + * Device 10, Function 2: IMC 0 channel 0 -> rank 0 + * Device 10, Function 6: IMC 0 channel 1 -> rank 1 + * Device 11, Function 2: IMC 0 channel 2 -> rank 2 + * Device 12, Function 2: IMC 1 channel 0 -> rank 3 + * Device 12, Function 6: IMC 1 channel 1 -> rank 4 + * Device 13, Function 2: IMC 1 channel 2 -> rank 5 + */ + dev = 10 + chan_rank / 3 * 2 + (chan_rank % 3 == 2 ? 1 : 0); + func = chan_rank % 3 == 1 ? 
6 : 2;
+ reg = 0x120 + dimm_order * 4;
+
+ ret = peci_pci_local_read(priv->peci_dev, 2, dev, func, reg, data);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static int
+read_thresholds_icx(struct peci_dimmtemp *priv, int dimm_order, int chan_rank, u32 *data)
+{
+ u32 reg_val;
+ u64 offset;
+ int ret;
+ u8 dev;
+
+ ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd4, &reg_val);
+ if (ret || !(reg_val & BIT(31)))
+ return -ENODATA; /* Use default or previous value */
+
+ ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd0, &reg_val);
+ if (ret)
+ return -ENODATA; /* Use default or previous value */
+
+ /*
+ * Device 26, Offset 224e0: IMC 0 channel 0 -> rank 0
+ * Device 26, Offset 264e0: IMC 0 channel 1 -> rank 1
+ * Device 27, Offset 224e0: IMC 1 channel 0 -> rank 2
+ * Device 27, Offset 264e0: IMC 1 channel 1 -> rank 3
+ * Device 28, Offset 224e0: IMC 2 channel 0 -> rank 4
+ * Device 28, Offset 264e0: IMC 2 channel 1 -> rank 5
+ * Device 29, Offset 224e0: IMC 3 channel 0 -> rank 6
+ * Device 29, Offset 264e0: IMC 3 channel 1 -> rank 7
+ */
+ dev = 26 + chan_rank / 2;
+ offset = 0x224e0 + dimm_order * 4 + (chan_rank % 2) * 0x4000;
+
+ ret = peci_mmio_read(priv->peci_dev, 0, GET_CPU_SEG(reg_val), GET_CPU_BUS(reg_val),
+ dev, 0, offset, data);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static const struct dimm_info dimm_hsx = {
+ .chan_rank_max = CHAN_RANK_MAX_ON_HSX,
+ .dimm_idx_max = DIMM_IDX_MAX_ON_HSX,
+ .min_peci_revision = 0x33,
+ .read_thresholds = &read_thresholds_hsx,
+};
+
+static const struct dimm_info dimm_bdx = {
+ .chan_rank_max = CHAN_RANK_MAX_ON_BDX,
+ .dimm_idx_max = DIMM_IDX_MAX_ON_BDX,
+ .min_peci_revision = 0x33,
+ .read_thresholds = &read_thresholds_hsx,
+};
+
+static const struct dimm_info dimm_bdxd = {
+ .chan_rank_max = CHAN_RANK_MAX_ON_BDXD,
+ .dimm_idx_max = DIMM_IDX_MAX_ON_BDXD,
+ .min_peci_revision = 0x33,
+ .read_thresholds = &read_thresholds_bdxd,
+};
+
+static const struct dimm_info dimm_skx = {
+ .chan_rank_max = CHAN_RANK_MAX_ON_SKX,
+ .dimm_idx_max = DIMM_IDX_MAX_ON_SKX,
+ .min_peci_revision = 0x33,
+ .read_thresholds = &read_thresholds_skx,
+};
+
+static const struct dimm_info dimm_icx = {
+ .chan_rank_max = CHAN_RANK_MAX_ON_ICX,
+ .dimm_idx_max = DIMM_IDX_MAX_ON_ICX,
+ .min_peci_revision = 0x40,
+ .read_thresholds = &read_thresholds_icx,
+};
+
+static const struct dimm_info dimm_icxd = {
+ .chan_rank_max = CHAN_RANK_MAX_ON_ICXD,
+ .dimm_idx_max = DIMM_IDX_MAX_ON_ICXD,
+ .min_peci_revision = 0x40,
+ .read_thresholds = &read_thresholds_icx,
+};
+
+static const struct auxiliary_device_id peci_dimmtemp_ids[] = {
+ {
+ .name = "peci_cpu.dimmtemp.hsx",
+ .driver_data = (kernel_ulong_t)&dimm_hsx,
+ },
+ {
+ .name = "peci_cpu.dimmtemp.bdx",
+ .driver_data = (kernel_ulong_t)&dimm_bdx,
+ },
+ {
+ .name = "peci_cpu.dimmtemp.bdxd",
+ .driver_data = (kernel_ulong_t)&dimm_bdxd,
+ },
+ {
+ .name = "peci_cpu.dimmtemp.skx",
+ .driver_data = (kernel_ulong_t)&dimm_skx,
+ },
+ {
+ .name = "peci_cpu.dimmtemp.icx",
+ .driver_data = (kernel_ulong_t)&dimm_icx,
+ },
+ {
+ .name = "peci_cpu.dimmtemp.icxd",
+ .driver_data = (kernel_ulong_t)&dimm_icxd,
+ },
+ { }
+};
+MODULE_DEVICE_TABLE(auxiliary, peci_dimmtemp_ids);
+
+static struct auxiliary_driver peci_dimmtemp_driver = {
+ .probe = peci_dimmtemp_probe,
+ .id_table = peci_dimmtemp_ids,
+};
+
+module_auxiliary_driver(peci_dimmtemp_driver);
+
+MODULE_AUTHOR("Jae Hyun Yoo ");
+MODULE_AUTHOR("Iwona Winiarska ");
+MODULE_DESCRIPTION("PECI dimmtemp driver");
+MODULE_LICENSE("GPL");
+MODULE_IMPORT_NS(PECI_CPU);
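A minimal userspace sketch (illustrative, not part of the patch) of how the sensors registered above surface through the standard hwmon sysfs ABI. The hwmon0 index and the channel upper bound are assumptions - a real reader should discover the device by matching each hwmon "name" attribute against the "peci_dimmtemp.cpuN" name set in peci_dimmtemp_probe():

#include <stdio.h>
#include <string.h>

#define HWMON_DIR "/sys/class/hwmon/hwmon0"	/* assumed index, discover at runtime */

int main(void)
{
	char path[256], label[64];
	long millideg;
	FILE *f;
	int i;

	/* Channel attributes are 1-based; unpopulated DIMM slots have no files. */
	for (i = 1; i <= 64; i++) {
		snprintf(path, sizeof(path), HWMON_DIR "/temp%d_label", i);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (!fgets(label, sizeof(label), f))
			label[0] = '\0';
		label[strcspn(label, "\n")] = '\0';
		fclose(f);

		snprintf(path, sizeof(path), HWMON_DIR "/temp%d_input", i);
		f = fopen(path, "r");
		if (!f)
			continue;
		/* hwmon reports temperatures in millidegrees Celsius */
		if (fscanf(f, "%ld", &millideg) == 1)
			printf("%s: %ld.%03ld degrees C\n", label,
			       millideg / 1000, millideg % 1000);
		fclose(f);
	}
	return 0;
}

Labels such as "DIMM A1" map back to create_dimm_temp_label() above: the letter encodes chan / dimm_idx_max (the channel rank) and the number is chan % dimm_idx_max + 1 (the DIMM index within that rank).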
From patchwork Tue Jan 25 01:11:04 2022
X-Patchwork-Submitter: Iwona Winiarska
X-Patchwork-Id: 536629
From: Iwona Winiarska
To: linux-kernel@vger.kernel.org, openbmc@lists.ozlabs.org, Greg Kroah-Hartman
Cc: devicetree@vger.kernel.org, linux-aspeed@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org, linux-doc@vger.kernel.org, Rob Herring, Joel Stanley, Andrew Jeffery, Jean Delvare, Guenter Roeck, Arnd Bergmann, Olof Johansson, Jonathan Corbet, Borislav Petkov, Pierre-Louis Bossart, Tony Luck, Andy Shevchenko, Dan Williams, Randy Dunlap, Zev Weiss, David Muller, Dave Hansen, Billy Tsai, Iwona Winiarska
Subject: [PATCH v6 13/13] docs: Add PECI documentation
Date: Tue, 25 Jan 2022 02:11:04 +0100
Message-Id: <20220125011104.2480133-14-iwona.winiarska@intel.com>
In-Reply-To: <20220125011104.2480133-1-iwona.winiarska@intel.com>
References: <20220125011104.2480133-1-iwona.winiarska@intel.com>

Add a brief overview of PECI and PECI wire interface. The documentation
also contains kernel-doc for PECI subsystem internals and PECI CPU
Driver API.
Signed-off-by: Iwona Winiarska
Reviewed-by: Pierre-Louis Bossart
---
 Documentation/index.rst      |  1 +
 Documentation/peci/index.rst | 16 +++++++++++
 Documentation/peci/peci.rst  | 51 ++++++++++++++++++++++++++++++++++++
 MAINTAINERS                  |  1 +
 4 files changed, 69 insertions(+)
 create mode 100644 Documentation/peci/index.rst
 create mode 100644 Documentation/peci/peci.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index 2b4de3926858..deaa7f669fcd 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -138,6 +138,7 @@ needed).
    scheduler/index
    mhi/index
    tty/index
+   peci/index

 Architecture-agnostic documentation
 -----------------------------------
diff --git a/Documentation/peci/index.rst b/Documentation/peci/index.rst
new file mode 100644
index 000000000000..989de10416e7
--- /dev/null
+++ b/Documentation/peci/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+====================
+Linux PECI Subsystem
+====================
+
+.. toctree::
+
+   peci
+
+.. only:: subproject and html
+
+   Indices
+   =======
+
+   * :ref:`genindex`
diff --git a/Documentation/peci/peci.rst b/Documentation/peci/peci.rst
new file mode 100644
index 000000000000..331b1ec00e22
--- /dev/null
+++ b/Documentation/peci/peci.rst
@@ -0,0 +1,51 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+========
+Overview
+========
+
+The Platform Environment Control Interface (PECI) is a communication
+interface between an Intel processor and management controllers
+(e.g. a Baseboard Management Controller, BMC).
+PECI provides services that allow the management controller to
+configure, monitor and debug a platform by accessing various registers.
+It defines a dedicated command protocol, where the management
+controller acts as the PECI originator and the processor as the
+PECI responder.
+PECI can be used in both single-processor and multi-processor systems.
+
+NOTE:
+The Intel PECI specification is not released as a dedicated document;
+instead, it is part of the External Design Specification (EDS) for a
+given Intel CPU. External Design Specifications are usually not
+publicly available.
+
+PECI Wire
+---------
+
+The PECI Wire interface uses a single wire for self-clocking and data
+transfer. It does not require any additional control lines - the
+physical layer is a self-clocked one-wire bus signal that begins each
+bit with a driven, rising edge from an idle state near zero volts. The
+duration for which the signal is driven high determines whether the bit
+value is logic '0' or logic '1'. PECI Wire also supports a variable
+data rate, established with every message.
+
+For PECI Wire, each processor package uses a unique, fixed address
+within a defined range, and that address has a fixed relationship with
+the processor socket ID - if one of the processors is removed, the
+addresses of the remaining processors are not affected.
+
+PECI subsystem internals
+------------------------
+
+.. kernel-doc:: include/linux/peci.h
+.. kernel-doc:: drivers/peci/internal.h
+.. kernel-doc:: drivers/peci/core.c
+.. kernel-doc:: drivers/peci/request.c
+
+PECI CPU Driver API
+-------------------
+.. kernel-doc:: drivers/peci/cpu.c
diff --git a/MAINTAINERS b/MAINTAINERS
index d821aeaaa2d3..06449cc42a10 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15112,6 +15112,7 @@ M: Iwona Winiarska
 L: openbmc@lists.ozlabs.org (moderated for non-subscribers)
 S: Supported
 F: Documentation/devicetree/bindings/peci/
+F: Documentation/peci/
 F: drivers/peci/
 F: include/linux/peci-cpu.h
 F: include/linux/peci.h
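The "PECI Wire" paragraph above can be made slightly more concrete with a sketch of the receive-side decision it implies. The helper name and the 50% threshold are assumptions made purely for illustration - the actual timing budget is defined by the per-CPU EDS - but the principle, that a longer high time encodes logic '1', follows the text above:

/*
 * Illustrative only: classify one PECI Wire bit from the measured time
 * the line was driven high versus the total bit time.  The 50% cut-off
 * is an assumed value for illustration; actual limits come from the EDS.
 */
static inline int peci_wire_decode_bit(unsigned int t_high_ns, unsigned int t_bit_ns)
{
	return (2 * t_high_ns > t_bit_ns) ? 1 : 0;
}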