From patchwork Wed Aug 17 05:13:19 2022
X-Patchwork-Submitter: Abhishek Sahu
X-Patchwork-Id: 598052
From: Abhishek Sahu
To: Alex Williamson, Cornelia Huck, Yishai Hadas, Jason Gunthorpe,
    Shameer Kolothum, Kevin Tian, "Rafael J. Wysocki"
CC: Max Gurtovoy, Bjorn Helgaas, Abhishek Sahu
Subject: [PATCH v6 1/5] vfio: Add the device features for the low power entry and exit
Date: Wed, 17 Aug 2022 10:43:19 +0530
Message-ID: <20220817051323.20091-2-abhsahu@nvidia.com>
In-Reply-To: <20220817051323.20091-1-abhsahu@nvidia.com>
References: <20220817051323.20091-1-abhsahu@nvidia.com>
X-Mailing-List: linux-pm@vger.kernel.org

This patch adds the following new device features for low power entry and exit in the header file. The implementation will be added in the subsequent patches.

 - VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY
 - VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP
 - VFIO_DEVICE_FEATURE_LOW_POWER_EXIT

For vfio-pci based devices, not all power states can be reached with the standard PCI PM registers alone. Platform-based power management needs to be involved to reach the lowest power state. These device features can be used to do low power entry and exit with platform-based power management.

The entry device feature has two variants, which support different behaviour for low power entry. If there is any access to the VFIO device on the host side, then the device will be moved out of the low power state without the user's guest driver involvement. Some devices (for example, NVIDIA VGA or 3D controllers) require the user's guest driver involvement for each low power entry. In the first variant, the host can return the device to low power automatically. The device will continue to attempt to reach low power until the low power exit feature is called. In the second variant, if the device exits low power due to an access, the host kernel will signal the user via the provided eventfd and will not return the device to low power without a subsequent call to one of the low power entry features. A call to the low power exit feature is optional if the user provided eventfd is signaled.

These device features only support VFIO_DEVICE_FEATURE_SET and VFIO_DEVICE_FEATURE_PROBE operations.

Signed-off-by: Abhishek Sahu
---
 include/uapi/linux/vfio.h | 56 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h index 733a1cddde30..76a173f973de 100644 --- a/include/uapi/linux/vfio.h +++ b/include/uapi/linux/vfio.h @@ -986,6 +986,62 @@ enum vfio_device_mig_state { VFIO_DEVICE_STATE_RUNNING_P2P = 5, }; +/* + * Upon VFIO_DEVICE_FEATURE_SET, allow the device to be moved into a low power + * state with the platform-based power management.
Device use of lower power + * states depends on factors managed by the runtime power management core, + * including system level support and coordinating support among dependent + * devices. Enabling device low power entry does not guarantee lower power + * usage by the device, nor is a mechanism provided through this feature to + * know the current power state of the device. If any device access happens + * (either from the host or through the vfio uAPI) when the device is in the + * low power state, then the host will move the device out of the low power + * state as necessary prior to the access. Once the access is completed, the + * device may re-enter the low power state. For single shot low power support + * with wake-up notification, see + * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP below. Access to mmap'd + * device regions is disabled on LOW_POWER_ENTRY and may only be resumed after + * calling LOW_POWER_EXIT. + */ +#define VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY 3 + +/* + * This device feature has the same behavior as + * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY with the exception that the user + * provides an eventfd for wake-up notification. When the device moves out of + * the low power state for the wake-up, the host will not allow the device to + * re-enter a low power state without a subsequent user call to one of the low + * power entry device feature IOCTLs. Access to mmap'd device regions is + * disabled on LOW_POWER_ENTRY_WITH_WAKEUP and may only be resumed after the + * low power exit. The low power exit can happen either through LOW_POWER_EXIT + * or through any other access (where the wake-up notification has been + * generated). The access to mmap'd device regions will not trigger low power + * exit. + * + * The notification through the provided eventfd will be generated only when + * the device has entered and is resumed from a low power state after + * calling this device feature IOCTL. A device that has not entered low power + * state, as managed through the runtime power management core, will not + * generate a notification through the provided eventfd on access. Calling the + * LOW_POWER_EXIT feature is optional in the case where notification has been + * signaled on the provided eventfd that a resume from low power has occurred. + */ +struct vfio_device_low_power_entry_with_wakeup { + __s32 wakeup_eventfd; + __u32 reserved; +}; + +#define VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP 4 + +/* + * Upon VFIO_DEVICE_FEATURE_SET, disallow use of device low power states as + * previously enabled via VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY or + * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP device features. + * This device feature IOCTL may itself generate a wakeup eventfd notification + * in the latter case if the device had previously entered a low power state. 
+ */ +#define VFIO_DEVICE_FEATURE_LOW_POWER_EXIT 5 + /* -------- API for Type1 VFIO IOMMU -------- */ /**
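As a usage illustration of the features added above, here is a minimal userspace sketch (not part of this series) that drives them through the existing VFIO_DEVICE_FEATURE ioctl. The helper names are illustrative, the device fd and the wake-up eventfd (e.g. from eventfd(2)) are assumed to exist, and error handling is trimmed:

#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Hedged sketch, not from this series: enter low power with a wake-up
 * eventfd, and later disallow low power use again. Helper names are
 * illustrative only.
 */
static int vfio_low_power_entry_with_wakeup(int device_fd, int wakeup_eventfd)
{
        __u8 buf[sizeof(struct vfio_device_feature) +
                 sizeof(struct vfio_device_low_power_entry_with_wakeup)]
                __attribute__((aligned(8))) = {};
        struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
        struct vfio_device_low_power_entry_with_wakeup *entry =
                (struct vfio_device_low_power_entry_with_wakeup *)feature->data;

        feature->argsz = sizeof(buf);
        feature->flags = VFIO_DEVICE_FEATURE_SET |
                         VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP;
        entry->wakeup_eventfd = wakeup_eventfd;

        /* After this call the host may runtime suspend the device. */
        return ioctl(device_fd, VFIO_DEVICE_FEATURE, feature);
}

static int vfio_low_power_exit(int device_fd)
{
        struct vfio_device_feature feature = {
                .argsz = sizeof(feature),
                .flags = VFIO_DEVICE_FEATURE_SET |
                         VFIO_DEVICE_FEATURE_LOW_POWER_EXIT,
        };

        /* Resumes the device and forbids further low power use. */
        return ioctl(device_fd, VFIO_DEVICE_FEATURE, &feature);
}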
From patchwork Wed Aug 17 05:13:20 2022
X-Patchwork-Submitter: Abhishek Sahu
X-Patchwork-Id: 598401
From: Abhishek Sahu
To: Alex Williamson, Cornelia Huck, Yishai Hadas, Jason Gunthorpe,
    Shameer Kolothum, Kevin Tian, "Rafael J. Wysocki"
CC: Max Gurtovoy, Bjorn Helgaas, Abhishek Sahu
Subject: [PATCH v6 2/5] vfio: Increment the runtime PM usage count during IOCTL call
Date: Wed, 17 Aug 2022 10:43:20 +0530
Message-ID: <20220817051323.20091-3-abhsahu@nvidia.com>
In-Reply-To: <20220817051323.20091-1-abhsahu@nvidia.com>
References: <20220817051323.20091-1-abhsahu@nvidia.com>
X-Mailing-List: linux-pm@vger.kernel.org

The vfio-pci based drivers will have runtime power management support where the user can put the device into the low power state and then the PCI device can go into the D3cold state. If the device is in the low power state and the user issues any IOCTL, then the device should be moved out of the low power state first. Once the IOCTL is serviced, it can go into the low power state again. The runtime PM framework manages this with the help of a usage count.

One option was to add the runtime PM related API's inside the vfio-pci driver, but some IOCTLs (like VFIO_DEVICE_FEATURE) can follow a different path, and more IOCTLs can be added in the future. Also, runtime PM is currently being added only for the vfio-pci based variant drivers, but other VFIO based drivers can use the same support in the future. So, this patch adds the runtime PM related API calls in the top-level IOCTL function itself.

For the VFIO drivers which do not have runtime power management support currently, the runtime PM API's won't be invoked. Only for vfio-pci based drivers will the runtime PM API's be invoked to increment and decrement the usage count. Within the vfio-pci based drivers, the variant drivers can opt out by incrementing the usage count during device open.

The pm_runtime_resume_and_get() checks the device's current status and will return early if the device is already in the ACTIVE state. Keeping this usage count incremented while servicing an IOCTL makes sure that the user won't put the device into the low power state while any other IOCTL is being serviced in parallel. Let's consider the following scenario:

1. Some other IOCTL is called.
2. The user has opened another device instance and called the IOCTL for low power entry.
3. The low power entry IOCTL moves the device into the low power state.
4. The other IOCTL finishes.

If we don't keep the usage count incremented, then the device access will happen between steps 3 and 4 while the device has already gone into the low power state.

The pm_runtime_resume_and_get() will be the first call, so its errors should not be propagated to user space directly. For example, pm_runtime_resume_and_get() can return -EINVAL even for cases where the user has passed correct arguments. So the pm_runtime_resume_and_get() errors have been masked behind -EIO.
Signed-off-by: Abhishek Sahu --- drivers/vfio/vfio_main.c | 52 +++++++++++++++++++++++++++++++++++++--- 1 file changed, 49 insertions(+), 3 deletions(-) diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c index 7cb56c382c97..535e5ef0640d 100644 --- a/drivers/vfio/vfio_main.c +++ b/drivers/vfio/vfio_main.c @@ -32,6 +32,7 @@ #include #include #include +#include #include "vfio.h" #define DRIVER_VERSION "0.3" @@ -1354,6 +1355,39 @@ static const struct file_operations vfio_group_fops = { .release = vfio_group_fops_release, }; +/* + * Wrapper around pm_runtime_resume_and_get(). + * Return error code on failure or 0 on success. + */ +static inline int vfio_device_pm_runtime_get(struct vfio_device *device) +{ + struct device *dev = device->dev; + + if (dev->driver && dev->driver->pm) { + int ret; + + ret = pm_runtime_resume_and_get(dev); + if (ret) { + dev_info_ratelimited(dev, + "vfio: runtime resume failed %d\n", ret); + return -EIO; + } + } + + return 0; +} + +/* + * Wrapper around pm_runtime_put(). + */ +static inline void vfio_device_pm_runtime_put(struct vfio_device *device) +{ + struct device *dev = device->dev; + + if (dev->driver && dev->driver->pm) + pm_runtime_put(dev); +} + /* * VFIO Device fd */ @@ -1674,15 +1708,27 @@ static long vfio_device_fops_unl_ioctl(struct file *filep, unsigned int cmd, unsigned long arg) { struct vfio_device *device = filep->private_data; + int ret; + + ret = vfio_device_pm_runtime_get(device); + if (ret) + return ret; switch (cmd) { case VFIO_DEVICE_FEATURE: - return vfio_ioctl_device_feature(device, (void __user *)arg); + ret = vfio_ioctl_device_feature(device, (void __user *)arg); + break; + default: if (unlikely(!device->ops->ioctl)) - ret = -EINVAL; return -EINVAL; - return device->ops->ioctl(device, cmd, arg); + ret = -EINVAL; + else + ret = device->ops->ioctl(device, cmd, arg); + break; } + + vfio_device_pm_runtime_put(device); + return ret; } static ssize_t vfio_device_fops_read(struct file *filep, char __user *buf,
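As context for the variant-driver opt-out mentioned in the commit message above, a hedged sketch (not part of this patch) of how a hypothetical vfio-pci variant driver might keep the runtime PM usage count elevated across device open/close; the my_variant_* names are illustrative, and linux/pm_runtime.h plus linux/vfio_pci_core.h are assumed to be included:

/*
 * Hedged sketch, not from this series: a hypothetical variant driver that
 * opts out of runtime suspend by holding a usage count while the device is
 * open. Function names are illustrative only.
 */
static int my_variant_open_device(struct vfio_device *core_vdev)
{
        struct vfio_pci_core_device *vdev =
                container_of(core_vdev, struct vfio_pci_core_device, vdev);
        int ret;

        ret = vfio_pci_core_enable(vdev);
        if (ret)
                return ret;

        /* Keep the usage count elevated so the device won't runtime suspend. */
        pm_runtime_get_noresume(&vdev->pdev->dev);

        vfio_pci_core_finish_enable(vdev);
        return 0;
}

static void my_variant_close_device(struct vfio_device *core_vdev)
{
        struct vfio_pci_core_device *vdev =
                container_of(core_vdev, struct vfio_pci_core_device, vdev);

        /* Drop the reference taken at open, then let the core tear down. */
        pm_runtime_put(&vdev->pdev->dev);
        vfio_pci_core_close_device(core_vdev);
}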
From patchwork Wed Aug 17 05:13:21 2022
X-Patchwork-Submitter: Abhishek Sahu
X-Patchwork-Id: 598051
From: Abhishek Sahu
To: Alex Williamson, Cornelia Huck, Yishai Hadas, Jason Gunthorpe,
    Shameer Kolothum, Kevin Tian, "Rafael J. Wysocki"
Wysocki" CC: Max Gurtovoy , Bjorn Helgaas , , , , , Abhishek Sahu Subject: [PATCH v6 3/5] vfio/pci: Mask INTx during runtime suspend Date: Wed, 17 Aug 2022 10:43:21 +0530 Message-ID: <20220817051323.20091-4-abhsahu@nvidia.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220817051323.20091-1-abhsahu@nvidia.com> References: <20220817051323.20091-1-abhsahu@nvidia.com> X-NVConfidentiality: public MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: 183d0be0-ad67-45a0-50f4-08da800f45e6 X-MS-TrafficTypeDiagnostic: CY4PR12MB1926:EE_ X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: TlCOZxbEd+tjnk9AhI/Fw4J+gsmNX0Eu+LLpRMPP6DCIK5vJXEtRdRQ+fwV5bmywjKPBdUjQlSzTbMy4A0HOQzyYtIjuO5DVzX+RbVOuiBBGkQcG9EFfVZwfcIb07Xj2HSjsC9mxz8S7GkJejeZtv59dh7EpdIoZTCl1u7yw2dZbvpXZSjkxBSZjJkEfmX6Aq0ydzpS3cSZF8mZQgwp3r8sigSScA/oVW6SCp5hDGCEShn2J4C3cJn32q69mj6USKH0ZlkoJRhtD1pBkcZaSvLK10Z4qfFIVUrbRdFGyjRj/BPPS7NhuqyhUm0/sGfnY23nPWUQ9n+iG/e6elcnZKqDgK9OChrzB8bS48B6JVWV2EXP/clc+CF6bGdzNcWoyqm+4JSNR0aRHNuoZVv/211MZPtoTFJtAluUpYsewAMHoY2nU6k1tbuhhlbjn+RQHoqWpwifhbo/80MbaMQ1yWV1BEBAii4aRzCtVIfPnL1jlL+dfMiUdayMWcWSd9qJHTvM0H5C3HJW8Cumw/BjKDD9w7JZ9X+gFQHrNGS/NVE5vZZP14f2EwkRCd3h8rTPapr5KsauR95KsKc33ztDWL1TfavhM+K7+j+IeGqZaTfgo9zAJgKe4xt24FvS3TpwBKwozze6lkpC2ekhELYAhtdbdiaoeliFIiiuAdXRMYz2EDtkEPV5AnwaPJFzXO3g17JlX5/a1kgPeiulGrLVX1Yf9xP8F+EZFF48VyGrT4sodBQfS8Csxc+kbSkZJMxjf3etVo2s8kcfJQ5IFhvHxkfktmcgqak+fAi/bD7MWTizUGzTWu0ixXVPqle62+G2sDijx+DThwqAsObCQDZLF6A== X-Forefront-Antispam-Report: CIP:12.22.5.236; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:InfoNoRecords; CAT:NONE; SFS:(13230016)(4636009)(396003)(39860400002)(376002)(346002)(136003)(46966006)(40470700004)(36840700001)(47076005)(426003)(81166007)(8936002)(107886003)(2616005)(1076003)(336012)(83380400001)(8676002)(70206006)(15650500001)(2906002)(70586007)(5660300002)(7416002)(40480700001)(36756003)(4326008)(478600001)(110136005)(82310400005)(86362001)(316002)(54906003)(6666004)(41300700001)(7696005)(26005)(36860700001)(40460700003)(82740400003)(356005)(186003)(36900700001); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2022 05:13:51.0683 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 183d0be0-ad67-45a0-50f4-08da800f45e6 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[12.22.5.236]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: DM6NAM11FT020.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY4PR12MB1926 Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org This patch adds INTx handling during runtime suspend/resume. All the suspend/resume related code for the user to put the device into the low power state will be added in subsequent patches. The INTx lines may be shared among devices. Whenever any INTx interrupt comes for the VFIO devices, then vfio_intx_handler() will be called for each device sharing the interrupt. Inside vfio_intx_handler(), it calls pci_check_and_mask_intx() and checks if the interrupt has been generated for the current device. Now, if the device is already in the D3cold state, then the config space can not be read. 
Attempting to read the config space in the D3cold state can make a few systems unresponsive. To prevent this, mask INTx in the runtime suspend callback and unmask it again in the runtime resume callback. If INTx has already been masked, then no handling is needed in the runtime suspend/resume callbacks. 'pm_intx_masked' tracks this, and vfio_pci_intx_mask() has been updated to return true if the INTx vfio_pci_irq_ctx.masked value is changed inside this function.

For a runtime suspend which is triggered when there is no user of the VFIO device, is_intx() will return false and these callbacks won't do anything.

MSI/MSI-X are not shared, so similar handling should not be needed for MSI/MSI-X. vfio_msihandler() triggers eventfd_signal() without doing any device-specific config access. When the user performs any config access or IOCTL after receiving the eventfd notification, the device will be moved to the D0 state first before servicing any request.

Another option was to check the 'pm_intx_masked' flag inside vfio_intx_handler() instead of masking the interrupt. This flag is set inside the runtime_suspend callback, but the device can be in a non-D3cold state (for example, if the user has disabled D3cold explicitly through sysfs, D3cold is not supported on the platform, etc.). Also, in the D3cold-supported case, the device will stay in D0 until the PCI core moves it into D3cold, so there is a possibility that the device can generate an interrupt. A check in the IRQ handler would not clear the IRQ status, the interrupt line would remain asserted, and this can cause interrupt flooding.

Signed-off-by: Abhishek Sahu
---
 drivers/vfio/pci/vfio_pci_core.c | 37 +++++++++++++++++++++++++++---- drivers/vfio/pci/vfio_pci_intrs.c | 6 ++++- include/linux/vfio_pci_core.h | 3 ++- 3 files changed, 40 insertions(+), 6 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c index c8d3b0450fb3..a97fb8cbf903 100644 --- a/drivers/vfio/pci/vfio_pci_core.c +++ b/drivers/vfio/pci/vfio_pci_core.c @@ -260,16 +260,45 @@ int vfio_pci_set_power_state(struct vfio_pci_core_device *vdev, pci_power_t stat return ret; } +#ifdef CONFIG_PM +static int vfio_pci_core_runtime_suspend(struct device *dev) +{ + struct vfio_pci_core_device *vdev = dev_get_drvdata(dev); + + /* + * If INTx is enabled, then mask INTx before going into the runtime + * suspended state and unmask the same in the runtime resume. + * If INTx has already been masked by the user, then + * vfio_pci_intx_mask() will return false and in that case, INTx + * should not be unmasked in the runtime resume. + */ + vdev->pm_intx_masked = (is_intx(vdev) && vfio_pci_intx_mask(vdev)); + + return 0; +} + +static int vfio_pci_core_runtime_resume(struct device *dev) +{ + struct vfio_pci_core_device *vdev = dev_get_drvdata(dev); + + if (vdev->pm_intx_masked) + vfio_pci_intx_unmask(vdev); + + return 0; +} +#endif /* CONFIG_PM */ + /* - * The dev_pm_ops needs to be provided to make pci-driver runtime PM working, - * so use structure without any callbacks. - * * The pci-driver core runtime PM routines always save the device state * before going into suspended state. If the device is going into low power * state with only with runtime PM ops, then no explicit handling is needed + * for the devices which have NoSoftRst-.
*/ -static const struct dev_pm_ops vfio_pci_core_pm_ops = { }; +static const struct dev_pm_ops vfio_pci_core_pm_ops = { + SET_RUNTIME_PM_OPS(vfio_pci_core_runtime_suspend, + vfio_pci_core_runtime_resume, + NULL) +}; int vfio_pci_core_enable(struct vfio_pci_core_device *vdev) { diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 6069a11fb51a..8b805d5d19e1 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -33,10 +33,12 @@ static void vfio_send_intx_eventfd(void *opaque, void *unused) eventfd_signal(vdev->ctx[0].trigger, 1); } -void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev) +/* Returns true if the INTx vfio_pci_irq_ctx.masked value is changed. */ +bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev) { struct pci_dev *pdev = vdev->pdev; unsigned long flags; + bool masked_changed = false; spin_lock_irqsave(&vdev->irqlock, flags); @@ -60,9 +62,11 @@ void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev) disable_irq_nosync(pdev->irq); vdev->ctx[0].masked = true; + masked_changed = true; } spin_unlock_irqrestore(&vdev->irqlock, flags); + return masked_changed; } /* diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index 5579ece4347b..98c0af5b5bba 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -124,6 +124,7 @@ struct vfio_pci_core_device { bool needs_reset; bool nointx; bool needs_pm_restore; + bool pm_intx_masked; struct pci_saved_state *pci_saved_state; struct pci_saved_state *pm_save; int ioeventfds_nr; @@ -147,7 +148,7 @@ struct vfio_pci_core_device { #define is_irq_none(vdev) (!(is_intx(vdev) || is_msi(vdev) || is_msix(vdev))) #define irq_is(vdev, type) (vdev->irq_type == type) -void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev); +bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev); void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev); int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev,
From patchwork Wed Aug 17 05:13:22 2022
X-Patchwork-Submitter: Abhishek Sahu
X-Patchwork-Id: 598050
From: Abhishek Sahu
To: Alex Williamson, Cornelia Huck, Yishai Hadas, Jason Gunthorpe,
    Shameer Kolothum, Kevin Tian, "Rafael J. Wysocki"
Wysocki" CC: Max Gurtovoy , Bjorn Helgaas , , , , , Abhishek Sahu Subject: [PATCH v6 4/5] vfio/pci: Implement VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY/EXIT Date: Wed, 17 Aug 2022 10:43:22 +0530 Message-ID: <20220817051323.20091-5-abhsahu@nvidia.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220817051323.20091-1-abhsahu@nvidia.com> References: <20220817051323.20091-1-abhsahu@nvidia.com> X-NVConfidentiality: public MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: bec075a7-2fd4-4657-c274-08da800f491c X-MS-TrafficTypeDiagnostic: DM6PR12MB3833:EE_ X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: m9iyC21swLDC1KYTYri/TbEYzWZzNJBQAW4zLAUnNjqiDNZxbgaR886vD4SRCQSs8YPbGaBnEyZj0dnQcokffkZk5NutV/LtxvKeYoAkDwfTIIsQYY/aHUL5V3HwOiq0DVa3iH+pcsIL2J94FPgqQO37Gv1F6rgwbJ1zaRo24vGRO/tp7wnVCZJkHJOCIBGo02pNZAD9PaXOnE0cWnn1VZDj6smDquUbhIe+mlfGuCSFUKr3z7lgd8pJ2Mi51Gx1+TefT/8N90duGNYSrv4iZoov2CIEVC/CfBv1gOeRuNrnuvVpHSI/e/AFoSUzaVmDT2b2kmVCzsa/F23x5/73NpRScGXzufN5pA+qNDH6eWwDz37TjNRAOy4DHzZn9Csk75iQlalyc7vI9mX1B2WKeONsc4a8iG9LjugGQDocLKNrdiwPhNYcTA9HcaehQfZQqwcLoLLXtZylJrsY3IynQ0U6lqF8/zkgZNZMLuybijyqX5desSOWI9bTp5NLFrJbJ/cQTuZ2Zom+gRmNTEcF+8ASNhV+1RJ/sbdHR1db3dILOuoo0yY6BAmfRCBbMsZlNzzHwOt5Aihaltey6CtCQyoobVXFJQjG4WVsWMXcLF4xc8mjAkHsdddg4WpvZmPIi/FKSI7AN0ajNkF3m4Hv95iEZAIy8FlVfph4lgto/j5aoU/2j0Nw33LJ8MoNixZlj56HzEZ7ErUskj449S0ZtE/xxYORs24ZWznAa+OCoSb2gdHbFU3qMUGQLtcW9u3Oeb0sDKBO4zEbwhdtiroFuWwD/OqH3hEoLhu07TGJSgnmWUyo6zr0hQ1Gs5kiNwXV9TuUTbC3Rj2IJa3DobUSKw== X-Forefront-Antispam-Report: CIP:12.22.5.235; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:InfoNoRecords; CAT:NONE; SFS:(13230016)(4636009)(136003)(39860400002)(376002)(396003)(346002)(40470700004)(36840700001)(46966006)(426003)(478600001)(186003)(54906003)(336012)(81166007)(40460700003)(82740400003)(47076005)(83380400001)(86362001)(1076003)(8676002)(36756003)(70206006)(4326008)(70586007)(316002)(110136005)(6666004)(5660300002)(8936002)(2906002)(7696005)(356005)(2616005)(107886003)(36860700001)(40480700001)(82310400005)(26005)(7416002)(41300700001)(30864003)(36900700001); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Aug 2022 05:13:56.4739 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: bec075a7-2fd4-4657-c274-08da800f491c X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[12.22.5.235]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT036.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB3833 Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org Currently, if the runtime power management is enabled for vfio-pci based devices in the guest OS, then the guest OS will do the register write for PCI_PM_CTRL register. This write request will be handled in vfio_pm_config_write() where it will do the actual register write of PCI_PM_CTRL register. With this, the maximum D3hot state can be achieved for low power. If we can use the runtime PM framework, then we can achieve the D3cold state (on the supported systems) which will help in saving maximum power. 1. D3cold state can't be achieved by writing PCI standard PM config registers. 
This patch implements the following newly added low power related device features: - VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY - VFIO_DEVICE_FEATURE_LOW_POWER_EXIT The VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY feature will allow the device to make use of low power platform states on the host while the VFIO_DEVICE_FEATURE_LOW_POWER_EXIT will prevent further use of those power states. 2. The vfio-pci driver uses runtime PM framework for low power entry and exit. On the platforms where D3cold state is supported, the runtime PM framework will put the device into D3cold otherwise, D3hot or some other power state will be used. There are various cases where the device will not go into the runtime suspended state. For example, - The runtime power management is disabled on the host side for the device. - The user keeps the device busy after calling LOW_POWER_ENTRY. - There are dependent devices that are still in runtime active state. For these cases, the device will be in the same power state that has been configured by the user through PCI_PM_CTRL register. 3. The hypervisors can implement virtual ACPI methods. For example, in guest linux OS if PCI device ACPI node has _PR3 and _PR0 power resources with _ON/_OFF method, then guest linux OS invokes the _OFF method during D3cold transition and then _ON during D0 transition. The hypervisor can tap these virtual ACPI calls and then call the low power device feature IOCTL. 4. The 'pm_runtime_engaged' flag tracks the entry and exit to runtime PM. This flag is protected with 'memory_lock' semaphore. 5. All the config and other region access are wrapped under pm_runtime_resume_and_get() and pm_runtime_put(). So, if any device access happens while the device is in the runtime suspended state, then the device will be resumed first before access. Once the access has been finished, then the device will again go into the runtime suspended state. 6. The memory region access through mmap will not be allowed in the low power state. Since __vfio_pci_memory_enabled() is a common function, so check for 'pm_runtime_engaged' has been added explicitly in vfio_pci_mmap_fault() to block only mmap'ed access. Signed-off-by: Abhishek Sahu --- drivers/vfio/pci/vfio_pci_core.c | 153 +++++++++++++++++++++++++++++-- include/linux/vfio_pci_core.h | 1 + 2 files changed, 146 insertions(+), 8 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c index a97fb8cbf903..d7d3c4392f70 100644 --- a/drivers/vfio/pci/vfio_pci_core.c +++ b/drivers/vfio/pci/vfio_pci_core.c @@ -260,11 +260,100 @@ int vfio_pci_set_power_state(struct vfio_pci_core_device *vdev, pci_power_t stat return ret; } +static int vfio_pci_runtime_pm_entry(struct vfio_pci_core_device *vdev) +{ + /* + * The vdev power related flags are protected with 'memory_lock' + * semaphore. + */ + vfio_pci_zap_and_down_write_memory_lock(vdev); + if (vdev->pm_runtime_engaged) { + up_write(&vdev->memory_lock); + return -EINVAL; + } + + vdev->pm_runtime_engaged = true; + pm_runtime_put_noidle(&vdev->pdev->dev); + up_write(&vdev->memory_lock); + + return 0; +} + +static int vfio_pci_core_pm_entry(struct vfio_device *device, u32 flags, + void __user *arg, size_t argsz) +{ + struct vfio_pci_core_device *vdev = + container_of(device, struct vfio_pci_core_device, vdev); + int ret; + + ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_SET, 0); + if (ret != 1) + return ret; + + /* + * Inside vfio_pci_runtime_pm_entry(), only the runtime PM usage count + * will be decremented. 
The pm_runtime_put() will be invoked again + * while returning from the ioctl and then the device can go into + * runtime suspended state. + */ + return vfio_pci_runtime_pm_entry(vdev); +} + +static void __vfio_pci_runtime_pm_exit(struct vfio_pci_core_device *vdev) +{ + if (vdev->pm_runtime_engaged) { + vdev->pm_runtime_engaged = false; + pm_runtime_get_noresume(&vdev->pdev->dev); + } +} + +static void vfio_pci_runtime_pm_exit(struct vfio_pci_core_device *vdev) +{ + /* + * The vdev power related flags are protected with 'memory_lock' + * semaphore. + */ + down_write(&vdev->memory_lock); + __vfio_pci_runtime_pm_exit(vdev); + up_write(&vdev->memory_lock); +} + +static int vfio_pci_core_pm_exit(struct vfio_device *device, u32 flags, + void __user *arg, size_t argsz) +{ + struct vfio_pci_core_device *vdev = + container_of(device, struct vfio_pci_core_device, vdev); + int ret; + + ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_SET, 0); + if (ret != 1) + return ret; + + /* + * The device is always in the active state here due to pm wrappers + * around ioctls. + */ + vfio_pci_runtime_pm_exit(vdev); + return 0; +} + #ifdef CONFIG_PM static int vfio_pci_core_runtime_suspend(struct device *dev) { struct vfio_pci_core_device *vdev = dev_get_drvdata(dev); + down_write(&vdev->memory_lock); + /* + * The user can move the device into D3hot state before invoking + * power management IOCTL. Move the device into D0 state here and then + * the pci-driver core runtime PM suspend function will move the device + * into the low power state. Also, for the devices which have + * NoSoftRst-, it will help in restoring the original state + * (saved locally in 'vdev->pm_save'). + */ + vfio_pci_set_power_state(vdev, PCI_D0); + up_write(&vdev->memory_lock); + /* * If INTx is enabled, then mask INTx before going into the runtime * suspended state and unmask the same in the runtime resume. @@ -400,6 +489,18 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev) /* * This function can be invoked while the power state is non-D0. + * This non-D0 power state can be with or without runtime PM. + * vfio_pci_runtime_pm_exit() will internally increment the usage + * count corresponding to pm_runtime_put() called during low power + * feature entry and then pm_runtime_resume() will wake up the device, + * if the device has already gone into the suspended state. Otherwise, + * the vfio_pci_set_power_state() will change the device power state + * to D0. + */ + vfio_pci_runtime_pm_exit(vdev); + pm_runtime_resume(&pdev->dev); + + /* * This function calls __pci_reset_function_locked() which internally * can use pci_pm_reset() for the function reset. pci_pm_reset() will * fail if the power state is non-D0. 
Also, for the devices which @@ -1233,6 +1334,10 @@ int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags, switch (flags & VFIO_DEVICE_FEATURE_MASK) { case VFIO_DEVICE_FEATURE_PCI_VF_TOKEN: return vfio_pci_core_feature_token(device, flags, arg, argsz); + case VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY: + return vfio_pci_core_pm_entry(device, flags, arg, argsz); + case VFIO_DEVICE_FEATURE_LOW_POWER_EXIT: + return vfio_pci_core_pm_exit(device, flags, arg, argsz); default: return -ENOTTY; } @@ -1243,31 +1348,47 @@ static ssize_t vfio_pci_rw(struct vfio_pci_core_device *vdev, char __user *buf, size_t count, loff_t *ppos, bool iswrite) { unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos); + int ret; if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions) return -EINVAL; + ret = pm_runtime_resume_and_get(&vdev->pdev->dev); + if (ret) { + pci_info_ratelimited(vdev->pdev, "runtime resume failed %d\n", + ret); + return -EIO; + } + switch (index) { case VFIO_PCI_CONFIG_REGION_INDEX: - return vfio_pci_config_rw(vdev, buf, count, ppos, iswrite); + ret = vfio_pci_config_rw(vdev, buf, count, ppos, iswrite); + break; case VFIO_PCI_ROM_REGION_INDEX: if (iswrite) - return -EINVAL; - return vfio_pci_bar_rw(vdev, buf, count, ppos, false); + ret = -EINVAL; + else + ret = vfio_pci_bar_rw(vdev, buf, count, ppos, false); + break; case VFIO_PCI_BAR0_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX: - return vfio_pci_bar_rw(vdev, buf, count, ppos, iswrite); + ret = vfio_pci_bar_rw(vdev, buf, count, ppos, iswrite); + break; case VFIO_PCI_VGA_REGION_INDEX: - return vfio_pci_vga_rw(vdev, buf, count, ppos, iswrite); + ret = vfio_pci_vga_rw(vdev, buf, count, ppos, iswrite); + break; + default: index -= VFIO_PCI_NUM_REGIONS; - return vdev->region[index].ops->rw(vdev, buf, + ret = vdev->region[index].ops->rw(vdev, buf, count, ppos, iswrite); + break; } - return -EINVAL; + pm_runtime_put(&vdev->pdev->dev); + return ret; } ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf, @@ -1462,7 +1583,11 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf) mutex_lock(&vdev->vma_lock); down_read(&vdev->memory_lock); - if (!__vfio_pci_memory_enabled(vdev)) { + /* + * Memory region cannot be accessed if the low power feature is engaged + * or memory access is disabled. + */ + if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev)) { ret = VM_FAULT_SIGBUS; goto up_out; } @@ -2177,6 +2302,15 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set, goto err_unlock; } + /* + * Some of the devices in the dev_set can be in the runtime suspended + * state. Increment the usage count for all the devices in the dev_set + * before reset and decrement the same after reset. 
+ */ + ret = vfio_pci_dev_set_pm_runtime_get(dev_set); + if (ret) + goto err_unlock; + list_for_each_entry(cur_vma, &dev_set->device_list, vdev.dev_set_list) { /* * Test whether all the affected devices are contained by the @@ -2232,6 +2366,9 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set, else mutex_unlock(&cur->vma_lock); } + + list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list) + pm_runtime_put(&cur->pdev->dev); err_unlock: mutex_unlock(&dev_set->lock); return ret; diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index 98c0af5b5bba..d31cc9cc9c70 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -125,6 +125,7 @@ struct vfio_pci_core_device { bool nointx; bool needs_pm_restore; bool pm_intx_masked; + bool pm_runtime_engaged; struct pci_saved_state *pci_saved_state; struct pci_saved_state *pm_save; int ioeventfds_nr;
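Point 3 of the commit message above describes a hypervisor tapping the guest's virtual ACPI _ON/_OFF methods. A hedged sketch (not from this series) of how a VMM might map those method invocations onto the low power feature IOCTLs; the vmm_* hook names are illustrative, and vfio_low_power_entry_with_wakeup()/vfio_low_power_exit() stand for helpers like the ones sketched after patch 1/5:

#include <stdio.h>

/* Guest invoked _OFF on the virtual power resource: allow host D3cold use. */
static void vmm_virtual_pr3_off(int device_fd, int wakeup_eventfd)
{
        if (vfio_low_power_entry_with_wakeup(device_fd, wakeup_eventfd))
                fprintf(stderr, "low power entry failed\n");
}

/* Guest invoked _ON (back to D0): stop using host low power states. */
static void vmm_virtual_pr0_on(int device_fd)
{
        if (vfio_low_power_exit(device_fd))
                fprintf(stderr, "low power exit failed\n");
}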
From patchwork Wed Aug 17 05:13:23 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Abhishek Sahu
X-Patchwork-Id: 598400
From: Abhishek Sahu
To: Alex Williamson, Cornelia Huck, Yishai Hadas, Jason Gunthorpe,
 Shameer Kolothum, Kevin Tian, "Rafael J. Wysocki"
CC: Max Gurtovoy, Bjorn Helgaas, , , , , Abhishek Sahu
Subject: [PATCH v6 5/5] vfio/pci: Implement
 VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP
Date: Wed, 17 Aug 2022 10:43:23 +0530
Message-ID: <20220817051323.20091-6-abhsahu@nvidia.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220817051323.20091-1-abhsahu@nvidia.com>
References: <20220817051323.20091-1-abhsahu@nvidia.com>
X-NVConfidentiality: public
MIME-Version: 1.0
Precedence: bulk
List-ID: 
X-Mailing-List: linux-pm@vger.kernel.org

This patch implements the VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP
device feature. With VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY, any host-side
access to the VFIO device moves the device out of the low power state
without involving the user's guest driver, and once the host access has
finished the device can be moved back into the low power state.
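
Before the implementation details below, a rough userspace sketch (not part of
the patch) of how this wakeup variant might be requested. The struct and
feature names come from the uapi patch earlier in this series; the helper
name, buffer handling, and 'device_fd' are illustrative assumptions:

    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /*
     * Sketch: enter low power with a wakeup eventfd.  The
     * vfio_device_low_power_entry_with_wakeup payload follows the generic
     * vfio_device_feature header in the ioctl argument.
     */
    static int vfio_low_power_entry_with_wakeup(int device_fd, int wakeup_eventfd)
    {
    	char buf[sizeof(struct vfio_device_feature) +
    		 sizeof(struct vfio_device_low_power_entry_with_wakeup)]
    		__attribute__((aligned(8))) = { 0 };
    	struct vfio_device_feature *feat = (void *)buf;
    	struct vfio_device_low_power_entry_with_wakeup *entry =
    		(void *)feat->data;

    	feat->argsz = sizeof(buf);
    	feat->flags = VFIO_DEVICE_FEATURE_SET |
    		      VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP;
    	entry->wakeup_eventfd = wakeup_eventfd;

    	return ioctl(device_fd, VFIO_DEVICE_FEATURE, feat);
    }

    /*
     * Usage:
     *   int efd = eventfd(0, EFD_CLOEXEC);
     *   vfio_low_power_entry_with_wakeup(device_fd, efd);
     *   ...poll(efd) to learn that the device has left low power...
     */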
When the low power entry happens through
VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP, the device will not be moved
back into the low power state, and a notification will be sent to the user by
triggering the wakeup eventfd. vfio_pci_core_pm_entry() is called for both
variants of low power entry, so add an extra argument for the wakeup eventfd
context and store it in 'struct vfio_pci_core_device'.

For an entry without a wakeup eventfd, all the exit-related handling is done
by the LOW_POWER_EXIT device feature alone. When LOW_POWER_EXIT is called,
the vfio core layer's vfio_device_pm_runtime_get() increments the usage count
and resumes the device. In the driver runtime_resume callback,
'pm_wake_eventfd_ctx' is NULL, so vfio_pci_core_pm_exit() calls
vfio_pci_runtime_pm_exit() and all the exit-related handling is done there.

For an entry with a wakeup eventfd, the eventfd is triggered in the driver
resume callback and all the exit-related handling is done at that point, so
vfio_pci_runtime_pm_exit() returns early when it is later called by
vfio_pci_core_pm_exit(). But if runtime suspend has not happened on the host
side, all the exit-related handling is still done in vfio_pci_core_pm_exit().

Signed-off-by: Abhishek Sahu
---
 drivers/vfio/pci/vfio_pci_core.c | 62 ++++++++++++++++++++++++++++++--
 include/linux/vfio_pci_core.h    |  1 +
 2 files changed, 60 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index d7d3c4392f70..00d24243b89e 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -260,7 +260,8 @@ int vfio_pci_set_power_state(struct vfio_pci_core_device *vdev, pci_power_t stat
 	return ret;
 }
 
-static int vfio_pci_runtime_pm_entry(struct vfio_pci_core_device *vdev)
+static int vfio_pci_runtime_pm_entry(struct vfio_pci_core_device *vdev,
+				     struct eventfd_ctx *efdctx)
 {
 	/*
 	 * The vdev power related flags are protected with 'memory_lock'
@@ -273,6 +274,7 @@ static int vfio_pci_runtime_pm_entry(struct vfio_pci_core_device *vdev)
 	}
 
 	vdev->pm_runtime_engaged = true;
+	vdev->pm_wake_eventfd_ctx = efdctx;
 	pm_runtime_put_noidle(&vdev->pdev->dev);
 	up_write(&vdev->memory_lock);
 
@@ -296,7 +298,39 @@ static int vfio_pci_core_pm_entry(struct vfio_device *device, u32 flags,
 	 * while returning from the ioctl and then the device can go into
 	 * runtime suspended state.
 	 */
-	return vfio_pci_runtime_pm_entry(vdev);
+	return vfio_pci_runtime_pm_entry(vdev, NULL);
+}
+
+static int
+vfio_pci_core_pm_entry_with_wakeup(struct vfio_device *device, u32 flags,
+				   void __user *arg, size_t argsz)
+{
+	struct vfio_pci_core_device *vdev =
+		container_of(device, struct vfio_pci_core_device, vdev);
+	struct vfio_device_low_power_entry_with_wakeup entry;
+	struct eventfd_ctx *efdctx;
+	int ret;
+
+	ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_SET,
+				 sizeof(entry));
+	if (ret != 1)
+		return ret;
+
+	if (copy_from_user(&entry, arg, sizeof(entry)))
+		return -EFAULT;
+
+	if (entry.wakeup_eventfd < 0)
+		return -EINVAL;
+
+	efdctx = eventfd_ctx_fdget(entry.wakeup_eventfd);
+	if (IS_ERR(efdctx))
+		return PTR_ERR(efdctx);
+
+	ret = vfio_pci_runtime_pm_entry(vdev, efdctx);
+	if (ret)
+		eventfd_ctx_put(efdctx);
+
+	return ret;
 }
 
 static void __vfio_pci_runtime_pm_exit(struct vfio_pci_core_device *vdev)
@@ -304,6 +338,11 @@ static void __vfio_pci_runtime_pm_exit(struct vfio_pci_core_device *vdev)
 	if (vdev->pm_runtime_engaged) {
 		vdev->pm_runtime_engaged = false;
 		pm_runtime_get_noresume(&vdev->pdev->dev);
+
+		if (vdev->pm_wake_eventfd_ctx) {
+			eventfd_ctx_put(vdev->pm_wake_eventfd_ctx);
+			vdev->pm_wake_eventfd_ctx = NULL;
+		}
 	}
 }
 
@@ -331,7 +370,10 @@ static int vfio_pci_core_pm_exit(struct vfio_device *device, u32 flags,
 
 	/*
 	 * The device is always in the active state here due to pm wrappers
-	 * around ioctls.
+	 * around ioctls. If the device had entered a low power state and
+	 * pm_wake_eventfd_ctx is valid, vfio_pci_core_runtime_resume() has
+	 * already signaled the eventfd and exited low power mode itself.
+	 * pm_runtime_engaged protects the redundant call here.
 	 */
 	vfio_pci_runtime_pm_exit(vdev);
 	return 0;
@@ -370,6 +412,17 @@ static int vfio_pci_core_runtime_resume(struct device *dev)
 {
 	struct vfio_pci_core_device *vdev = dev_get_drvdata(dev);
 
+	/*
+	 * Resume with a pm_wake_eventfd_ctx signals the eventfd and exit
+	 * low power mode.
+	 */
+	down_write(&vdev->memory_lock);
+	if (vdev->pm_wake_eventfd_ctx) {
+		eventfd_signal(vdev->pm_wake_eventfd_ctx, 1);
+		__vfio_pci_runtime_pm_exit(vdev);
+	}
+	up_write(&vdev->memory_lock);
+
 	if (vdev->pm_intx_masked)
 		vfio_pci_intx_unmask(vdev);
 
@@ -1336,6 +1389,9 @@ int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags,
 		return vfio_pci_core_feature_token(device, flags, arg, argsz);
 	case VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY:
 		return vfio_pci_core_pm_entry(device, flags, arg, argsz);
+	case VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP:
+		return vfio_pci_core_pm_entry_with_wakeup(device, flags,
+							  arg, argsz);
 	case VFIO_DEVICE_FEATURE_LOW_POWER_EXIT:
 		return vfio_pci_core_pm_exit(device, flags, arg, argsz);
 	default:
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index d31cc9cc9c70..8bdf20cd94a9 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -131,6 +131,7 @@ struct vfio_pci_core_device {
 	int			ioeventfds_nr;
 	struct eventfd_ctx	*err_trigger;
 	struct eventfd_ctx	*req_trigger;
+	struct eventfd_ctx	*pm_wake_eventfd_ctx;
 	struct list_head	dummy_resources_list;
 	struct mutex		ioeventfds_lock;
 	struct list_head	ioeventfds_list;
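
To round out the user-side picture, a minimal sketch (not part of the patch)
of how the wakeup notification signaled by vfio_pci_core_runtime_resume()
might be consumed, reusing the illustrative vfio_low_power_set() helper from
the earlier sketch:

    #include <poll.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <linux/vfio.h>

    /*
     * Sketch: wait for the wakeup eventfd, then release the feature.
     * vfio_low_power_set() is the illustrative helper shown earlier.
     */
    static void vfio_wait_low_power_wakeup(int device_fd, int wakeup_eventfd)
    {
    	struct pollfd pfd = { .fd = wakeup_eventfd, .events = POLLIN };
    	uint64_t count;

    	if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
    		/* Drain the eventfd counter. */
    		if (read(wakeup_eventfd, &count, sizeof(count)) != sizeof(count))
    			return;

    		/*
    		 * LOW_POWER_EXIT is still issued here; per the commit
    		 * message, vfio_pci_runtime_pm_exit() returns early when the
    		 * resume callback already did the exit handling, so the call
    		 * is safe either way.
    		 */
    		vfio_low_power_set(device_fd, VFIO_DEVICE_FEATURE_LOW_POWER_EXIT);
    	}
    }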