From patchwork Tue Jan 9 19:37:39 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 761572
From: Elliot Berman
Date: Tue, 9 Jan 2024 11:37:39 -0800
Subject: [PATCH v16 01/34] docs: gunyah: Introduce Gunyah Hypervisor
Message-ID: <20240109-gunyah-v16-1-634904bf4ce9@quicinc.com>
References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 lowpriorityscore=0 adultscore=0 spamscore=0 priorityscore=1501 malwarescore=0 mlxlogscore=948 impostorscore=0 clxscore=1015 bulkscore=0 phishscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Gunyah is an open-source Type-1 hypervisor developed by Qualcomm. It does not depend on any lower-privileged OS/kernel code for its core functionality. This increases its security and can support a smaller trusted computing based when compared to Type-2 hypervisors. Add documentation describing the Gunyah hypervisor and the main components of the Gunyah hypervisor which are of interest to Linux virtualization development. Signed-off-by: Elliot Berman --- Documentation/virt/gunyah/index.rst | 134 ++++++++++++++++++++++++++++ Documentation/virt/gunyah/message-queue.rst | 68 ++++++++++++++ Documentation/virt/index.rst | 1 + 3 files changed, 203 insertions(+) diff --git a/Documentation/virt/gunyah/index.rst b/Documentation/virt/gunyah/index.rst new file mode 100644 index 000000000000..da8e5e4b9cac --- /dev/null +++ b/Documentation/virt/gunyah/index.rst @@ -0,0 +1,134 @@ +.. SPDX-License-Identifier: GPL-2.0 + +================= +Gunyah Hypervisor +================= + +.. toctree:: + :maxdepth: 1 + + message-queue + +Gunyah is a Type-1 hypervisor which is independent of any OS kernel, and runs in +a higher CPU privilege level. It does not depend on any lower-privileged +operating system for its core functionality. This increases its security and can +support a much smaller trusted computing base than a Type-2 hypervisor. + +Gunyah is an open source hypervisor. The source repo is available at +https://github.com/quic/gunyah-hypervisor. + +Gunyah provides these following features. + +- Scheduling: + + A scheduler for virtual CPUs (vCPUs) on physical CPUs enables time-sharing + of the CPUs. Gunyah supports two models of scheduling which can co-exist on + a running system: + + 1. Hypervisor vCPU scheduling in which Gunyah hypervisor schedules vCPUS on + its own. The default is a real-time priority with round-robin scheduler. + 2. "Proxy" scheduling in which an owner-VM can donate the remainder of its + own vCPU's time slice to an owned-VM's vCPU via a hypercall. + +- Memory Management: + + APIs handling memory, abstracted as objects, limiting direct use of physical + addresses. Memory ownership and usage tracking of all memory under its control. + Memory partitioning between VMs is a fundamental security feature. + +- Interrupt Virtualization: + + Interrupt ownership is tracked and interrupt delivery is directly to the + assigned VM. Gunyah makes use of hardware interrupt virtualization where + possible. + +- Inter-VM Communication: + + There are several different mechanisms provided for communicating between VMs. + + 1. Message queues + 2. Doorbells + 3. Virtio MMIO transport + 4. Shared memory + +- Virtual platform: + + Architectural devices such as interrupt controllers and CPU timers are + directly provided by the hypervisor as well as core virtual platform devices + and system APIs such as ARM PSCI. + +- Device Virtualization: + + Para-virtualization of devices is supported using inter-VM communication and + virtio transport support. 
Select stage 2 faults by virtual machines that use + proxy-scheduled vCPUs can be handled directly by Linux to provide Type-2 + hypervisor style on-demand paging and/or device emulation. + +Architectures supported +======================= +AArch64 with a GICv3 or GICv4.1 + +Resources and Capabilities +========================== + +Services/resources provided by the Gunyah hypervisor are accessible to a +virtual machine through capabilities. A capability is an access control +token granting the holder a set of permissions to operate on a specific +hypervisor object (conceptually similar to a file-descriptor). +For example, inter-VM communication using Gunyah doorbells and message queues +is performed using hypercalls taking Capability ID arguments for the required +IPC objects. These resources are described in Linux as a struct gunyah_resource. + +Unlike UNIX file descriptors, there is no path-based or similar lookup of +an object to create a new Capability, meaning simpler security analysis. +Creation of a new Capability requires the holding of a set of privileged +Capabilities which are typically never given out by the Resource Manager (RM). + +Gunyah itself provides no APIs for Capability ID discovery. Enumeration of +Capability IDs is provided by RM as a higher level service to VMs. + +Resource Manager +================ + +The Gunyah Resource Manager (RM) is a privileged application VM supporting the +Gunyah Hypervisor. It provides policy enforcement aspects of the virtualization +system. The resource manager can be treated as an extension of the Hypervisor +but is separated to its own partition to ensure that the hypervisor layer itself +remains small and secure and to maintain a separation of policy and mechanism in +the platform. The resource manager runs at arm64 NS-EL1, similar to other +virtual machines. + +Communication with the resource manager from other virtual machines happens with +message-queue.rst. Details about the specific messages can be found in +drivers/virt/gunyah/rsc_mgr.c + +:: + + +-------+ +--------+ +--------+ + | RM | | VM_A | | VM_B | + +-.-.-.-+ +---.----+ +---.----+ + | | | | + +-.-.-----------.------------.----+ + | | \==========/ | | + | \========================/ | + | Gunyah | + +---------------------------------+ + +The source for the resource manager is available at +https://github.com/quic/gunyah-resource-manager. + +The resource manager provides the following features: + +- VM lifecycle management: allocating a VM, starting VMs, destruction of VMs +- VM access control policy, including memory sharing and lending +- Interrupt routing configuration +- Forwarding of system-level events (e.g. VM shutdown) to owner VM +- Resource (capability) discovery + +A VM requires boot configuration to establish communication with the resource +manager. This is provided to VMs via a 'hypervisor' device tree node which is +overlayed to the VMs DT by the RM. This node lets guests know they are running +as a Gunyah guest VM, how to communicate with resource manager, and basic +description and capabilities of this VM. See +Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml for a +description of this node. diff --git a/Documentation/virt/gunyah/message-queue.rst b/Documentation/virt/gunyah/message-queue.rst new file mode 100644 index 000000000000..cd94710e381a --- /dev/null +++ b/Documentation/virt/gunyah/message-queue.rst @@ -0,0 +1,68 @@ +.. 
SPDX-License-Identifier: GPL-2.0 + +Message Queues +============== +Message queue is a simple low-capacity IPC channel between two virtual machines. +It is intended for sending small control and configuration messages. Each +message queue is unidirectional and buffered in the hypervisor. A full-duplex +IPC channel requires a pair of queues. + +The size of the queue and the maximum size of the message that can be passed is +fixed at creation of the message queue. Resource manager is presently the only +use case for message queues, and creates messages queues between itself and VMs +with a fixed maximum message size of 240 bytes. Longer messages require a +further protocol on top of the message queue messages themselves. For instance, +communication with the resource manager adds a header field for sending longer +messages which are split into smaller fragments. + +The diagram below shows how message queue works. A typical configuration +involves 2 message queues. Message queue 1 allows VM_A to send messages to VM_B. +Message queue 2 allows VM_B to send messages to VM_A. + +1. VM_A sends a message of up to 240 bytes in length. It makes a hypercall + with the message to request the hypervisor to add the message to + message queue 1's queue. The hypervisor copies memory into the internal + message queue buffer; the memory doesn't need to be shared between + VM_A and VM_B. + +2. Gunyah raises the corresponding interrupt for VM_B (Rx vIRQ) when any of + these happens: + + a. gunyah_msgq_send() has PUSH flag. This is a typical case when the message + queue is being used to implement an RPC-like interface. + b. Explicility with gunyah_msgq_push hypercall from VM_A. + c. Message queue has reached a threshold depth. Typically, this threshold + depth is the size of the queue (in other words: when queue is full, Rx + vIRQ is raised). + +3. VM_B calls gunyah_msgq_recv() and Gunyah copies message to requested buffer. + +4. Gunyah raises the corresponding interrupt for VM_A (Tx vIRQ) when the message + queue falls below a watermark depth. Typically, this is when the queue is + drained. Note the watermark depth and the threshold depth for the Rx vIRQ are + independent values. Coincidentally, this signal is conceptually similar to + Clear-to-Send. + +For VM_B to send a message to VM_A, the process is identical, except that +hypercalls reference message queue 2's capability ID. The IRQ will be different +for the second message queue. + +:: + + +-------------------+ +-----------------+ +-------------------+ + | VM_A | |Gunyah hypervisor| | VM_B | + | | | | | | + | | | | | | + | | Tx | | | | + | |-------->| | Rx vIRQ | | + |gunyah_msgq_send() | Tx vIRQ |Message queue 1 |-------->|gunyah_msgq_recv() | + | |<------- | | | | + | | | | | | + | | | | | | + | | | | Tx | | + | | Rx vIRQ | |<--------| | + |gunyah_msgq_recv() |<--------|Message queue 2 | Tx vIRQ |gunyah_msgq_send() | + | | | |-------->| | + | | | | | | + | | | | | | + +-------------------+ +-----------------+ +---------------+ diff --git a/Documentation/virt/index.rst b/Documentation/virt/index.rst index 7fb55ae08598..15869ee059b3 100644 --- a/Documentation/virt/index.rst +++ b/Documentation/virt/index.rst @@ -16,6 +16,7 @@ Virtualization Support coco/sev-guest coco/tdx-guest hyperv/index + gunyah/index .. 
only:: html and subproject
From patchwork Tue Jan 9 19:37:40 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 761098
From: Elliot Berman
Date: Tue, 9 Jan 2024 11:37:40 -0800
Subject: [PATCH v16 02/34] dt-bindings: Add binding for gunyah hypervisor
Message-ID: <20240109-gunyah-v16-2-634904bf4ce9@quicinc.com>
References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
edcD4wUb0AIlamWiAts9LXox64A9mtS0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 impostorscore=0 bulkscore=0 adultscore=0 suspectscore=0 phishscore=0 clxscore=1015 lowpriorityscore=0 malwarescore=0 spamscore=0 mlxlogscore=999 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 The Gunyah Resource Manager applies a devicetree overlay describing the virtual platform configuration of the guest VM, such as the message queue capability IDs for communicating with the Resource Manager. This information is not otherwise discoverable by a VM: the Gunyah hypervisor core does not provide a direct interface to discover capability IDs nor a way to communicate with RM without having already known the corresponding message queue capability ID. Add the DT bindings that Gunyah adheres for the hypervisor node and message queues. Reviewed-by: Rob Herring Signed-off-by: Elliot Berman --- .../bindings/firmware/gunyah-hypervisor.yaml | 82 ++++++++++++++++++++++ 1 file changed, 82 insertions(+) diff --git a/Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml b/Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml new file mode 100644 index 000000000000..cdeb4885a807 --- /dev/null +++ b/Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml @@ -0,0 +1,82 @@ +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/firmware/gunyah-hypervisor.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Gunyah Hypervisor + +maintainers: + - Prakruthi Deepak Heragu + - Elliot Berman + +description: |+ + Gunyah virtual machines use this information to determine the capability IDs + of the message queues used to communicate with the Gunyah Resource Manager. + See also: https://github.com/quic/gunyah-resource-manager/blob/develop/src/vm_creation/dto_construct.c + +properties: + compatible: + const: gunyah-hypervisor + + "#address-cells": + description: Number of cells needed to represent 64-bit capability IDs. + const: 2 + + "#size-cells": + description: must be 0, because capability IDs are not memory address + ranges and do not have a size. + const: 0 + +patternProperties: + "^gunyah-resource-mgr(@.*)?": + type: object + description: + Resource Manager node which is required to communicate to Resource + Manager VM using Gunyah Message Queues. 
+ + properties: + compatible: + const: gunyah-resource-manager + + reg: + items: + - description: Gunyah capability ID of the TX message queue + - description: Gunyah capability ID of the RX message queue + + interrupts: + items: + - description: Interrupt for the TX message queue + - description: Interrupt for the RX message queue + + additionalProperties: false + + required: + - compatible + - reg + - interrupts + +additionalProperties: false + +required: + - compatible + - "#address-cells" + - "#size-cells" + +examples: + - | + #include + + hypervisor { + #address-cells = <2>; + #size-cells = <0>; + compatible = "gunyah-hypervisor"; + + gunyah-resource-mgr@0 { + compatible = "gunyah-resource-manager"; + interrupts = , /* TX allowed IRQ */ + ; /* RX requested IRQ */ + reg = <0x00000000 0x00000000>, /* TX capability ID */ + <0x00000000 0x00000001>; /* RX capability ID */ + }; + }; From patchwork Tue Jan 9 19:37:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761574 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D8BAA3D3BB; Tue, 9 Jan 2024 19:38:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="jTlTqzla" Received: from pps.filterd (m0279871.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409Gxr7f030302; Tue, 9 Jan 2024 19:37:52 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=79hmU2lp1B5mL+vTI4VCC81K4NZ9OnMRLXsigEg7NjA =; b=jTlTqzlaIoNWJ3xWkBqZpGr2TQgOr6wi6b494/cSFWtw7QSSyTqCrtx96LT +ombeaEOt3hRv6ciEFkaP5u4mcpL8hKDry+NnqIHHtl6NAG8ZQQGfxj8iKIf9WaL pZcqabCID5S1f8gc8Voude/n7IkfvqwTS64JD+EzAxvZauvtcbGbKhEVYuwOG9BA mWBMm4QhN9XaGZrc/mXNK2w+LC8WSChTVFbukp7kpmSSmM0r0vwtmMh6F0oTb35/ XRX0T1w9PUS5Nr3MI2erJ1rCVscVnuLQbqRUuTBpQyM+A07n70RDXG1pAt3FxGxr IlhBZb2LTRUeput0KGlPLU7jfqg== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh98m8hqd-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:52 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jbp4d011905 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:51 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:50 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:41 -0800 Subject: [PATCH v16 03/34] gunyah: Common types and error codes for Gunyah hypercalls Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: 
<20240109-gunyah-v16-3-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: c6Fe_jSNy_BQ31t0hfFNRHbgo2Psl9eu X-Proofpoint-GUID: c6Fe_jSNy_BQ31t0hfFNRHbgo2Psl9eu X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 impostorscore=0 spamscore=0 mlxscore=0 priorityscore=1501 phishscore=0 malwarescore=0 mlxlogscore=368 suspectscore=0 clxscore=1015 bulkscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Add architecture-independent standard error codes, types, and macros for Gunyah hypercalls. Reviewed-by: Dmitry Baryshkov Reviewed-by: Srinivas Kandagatla Reviewed-by: Alex Elder Signed-off-by: Elliot Berman --- include/linux/gunyah.h | 106 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 106 insertions(+) diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h new file mode 100644 index 000000000000..1eab631a49b6 --- /dev/null +++ b/include/linux/gunyah.h @@ -0,0 +1,106 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#ifndef _LINUX_GUNYAH_H +#define _LINUX_GUNYAH_H + +#include +#include +#include + +/* Matches resource manager's resource types for VM_GET_HYP_RESOURCES RPC */ +enum gunyah_resource_type { + /* clang-format off */ + GUNYAH_RESOURCE_TYPE_BELL_TX = 0, + GUNYAH_RESOURCE_TYPE_BELL_RX = 1, + GUNYAH_RESOURCE_TYPE_MSGQ_TX = 2, + GUNYAH_RESOURCE_TYPE_MSGQ_RX = 3, + GUNYAH_RESOURCE_TYPE_VCPU = 4, + GUNYAH_RESOURCE_TYPE_MEM_EXTENT = 9, + GUNYAH_RESOURCE_TYPE_ADDR_SPACE = 10, + /* clang-format on */ +}; + +struct gunyah_resource { + enum gunyah_resource_type type; + u64 capid; + unsigned int irq; +}; + +/******************************************************************************/ +/* Common arch-independent definitions for Gunyah hypercalls */ +#define GUNYAH_CAPID_INVAL U64_MAX +#define GUNYAH_VMID_ROOT_VM 0xff + +enum gunyah_error { + /* clang-format off */ + GUNYAH_ERROR_OK = 0, + GUNYAH_ERROR_UNIMPLEMENTED = -1, + GUNYAH_ERROR_RETRY = -2, + + GUNYAH_ERROR_ARG_INVAL = 1, + GUNYAH_ERROR_ARG_SIZE = 2, + GUNYAH_ERROR_ARG_ALIGN = 3, + + GUNYAH_ERROR_NOMEM = 10, + + GUNYAH_ERROR_ADDR_OVFL = 20, + GUNYAH_ERROR_ADDR_UNFL = 21, + GUNYAH_ERROR_ADDR_INVAL = 22, + + GUNYAH_ERROR_DENIED = 30, + GUNYAH_ERROR_BUSY = 31, + GUNYAH_ERROR_IDLE = 32, + + GUNYAH_ERROR_IRQ_BOUND = 40, + GUNYAH_ERROR_IRQ_UNBOUND = 41, + + GUNYAH_ERROR_CSPACE_CAP_NULL = 50, + GUNYAH_ERROR_CSPACE_CAP_REVOKED = 51, + GUNYAH_ERROR_CSPACE_WRONG_OBJ_TYPE = 52, + GUNYAH_ERROR_CSPACE_INSUF_RIGHTS = 53, + GUNYAH_ERROR_CSPACE_FULL = 54, + + GUNYAH_ERROR_MSGQUEUE_EMPTY = 60, + GUNYAH_ERROR_MSGQUEUE_FULL = 61, + /* clang-format on */ +}; + +/** + * gunyah_error_remap() - Remap Gunyah hypervisor errors into a Linux error code + * @gunyah_error: Gunyah hypercall return value + */ +static inline int gunyah_error_remap(enum gunyah_error gunyah_error) +{ + switch (gunyah_error) { + case GUNYAH_ERROR_OK: + return 0; + case GUNYAH_ERROR_NOMEM: + return -ENOMEM; + case GUNYAH_ERROR_DENIED: + case GUNYAH_ERROR_CSPACE_CAP_NULL: + case GUNYAH_ERROR_CSPACE_CAP_REVOKED: + case GUNYAH_ERROR_CSPACE_WRONG_OBJ_TYPE: + case GUNYAH_ERROR_CSPACE_INSUF_RIGHTS: + return -EACCES; + case GUNYAH_ERROR_CSPACE_FULL: + case GUNYAH_ERROR_BUSY: + case GUNYAH_ERROR_IDLE: + return -EBUSY; + case GUNYAH_ERROR_IRQ_BOUND: + case GUNYAH_ERROR_IRQ_UNBOUND: + case GUNYAH_ERROR_MSGQUEUE_FULL: + case GUNYAH_ERROR_MSGQUEUE_EMPTY: + return -EIO; + case GUNYAH_ERROR_UNIMPLEMENTED: + return -EOPNOTSUPP; + case GUNYAH_ERROR_RETRY: + return -EAGAIN; + default: + return -EINVAL; + } +} + +#endif From patchwork Tue Jan 9 19:37:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761575 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 304FA3D39A; Tue, 9 Jan 2024 19:38:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="RuWbIk+W" Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409FLOTm019742; Tue, 9 Jan 2024 19:37:53 GMT DKIM-Signature: v=1; 
From: Elliot Berman
Date: Tue, 9 Jan 2024 11:37:42 -0800
Subject: [PATCH v16 04/34] virt: gunyah: Add hypercalls to identify Gunyah
Message-ID: <20240109-gunyah-v16-4-634904bf4ce9@quicinc.com>
References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>

Add hypercalls to identify when Linux is running in a virtual machine under
Gunyah. There are two calls to help identify Gunyah:

1. gh_hypercall_get_uid() returns a UID when running under a Gunyah
   hypervisor.
2. gh_hypercall_hyp_identify() returns build information and a set of feature
   flags that are supported by Gunyah.
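For illustration only (not part of this patch), a minimal sketch of how a
guest-side caller could consume these hypercalls, using the names the patch
itself adds (arch_is_gunyah_guest() wraps the UID call);
example_detect_gunyah() is a made-up name:

    #include <linux/bitfield.h>
    #include <linux/errno.h>
    #include <linux/gunyah.h>
    #include <linux/printk.h>

    static int example_detect_gunyah(void)
    {
            struct gunyah_hypercall_hyp_identify_resp resp;

            /* UID hypercall: are we running under Gunyah at all? */
            if (!arch_is_gunyah_guest())
                    return -ENODEV;

            /* Identify hypercall: API version, variant and feature flags */
            gunyah_hypercall_hyp_identify(&resp);
            pr_info("Gunyah variant %llx, API v%u\n",
                    FIELD_GET(GUNYAH_API_INFO_VARIANT_MASK, resp.api_info),
                    gunyah_api_version(&resp));

            if (gunyah_api_version(&resp) != GUNYAH_API_V1)
                    return -ENODEV;

            return 0;
    }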
Reviewed-by: Srinivas Kandagatla Reviewed-by: Alex Elder Signed-off-by: Elliot Berman --- arch/arm64/Kbuild | 1 + arch/arm64/gunyah/Makefile | 3 ++ arch/arm64/gunyah/gunyah_hypercall.c | 62 ++++++++++++++++++++++++++++++++++++ drivers/virt/Kconfig | 2 ++ drivers/virt/gunyah/Kconfig | 12 +++++++ include/linux/gunyah.h | 38 ++++++++++++++++++++++ 6 files changed, 118 insertions(+) diff --git a/arch/arm64/Kbuild b/arch/arm64/Kbuild index 5bfbf7d79c99..e4847ba0e3c9 100644 --- a/arch/arm64/Kbuild +++ b/arch/arm64/Kbuild @@ -3,6 +3,7 @@ obj-y += kernel/ mm/ net/ obj-$(CONFIG_KVM) += kvm/ obj-$(CONFIG_XEN) += xen/ obj-$(subst m,y,$(CONFIG_HYPERV)) += hyperv/ +obj-$(CONFIG_GUNYAH) += gunyah/ obj-$(CONFIG_CRYPTO) += crypto/ # for cleaning diff --git a/arch/arm64/gunyah/Makefile b/arch/arm64/gunyah/Makefile new file mode 100644 index 000000000000..84f1e38cafb1 --- /dev/null +++ b/arch/arm64/gunyah/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0 + +obj-$(CONFIG_GUNYAH) += gunyah_hypercall.o diff --git a/arch/arm64/gunyah/gunyah_hypercall.c b/arch/arm64/gunyah/gunyah_hypercall.c new file mode 100644 index 000000000000..d44663334f38 --- /dev/null +++ b/arch/arm64/gunyah/gunyah_hypercall.c @@ -0,0 +1,62 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include +#include +#include +#include + +/* {c1d58fcd-a453-5fdb-9265-ce36673d5f14} */ +static const uuid_t GUNYAH_UUID = UUID_INIT(0xc1d58fcd, 0xa453, 0x5fdb, 0x92, + 0x65, 0xce, 0x36, 0x67, 0x3d, 0x5f, + 0x14); + +bool arch_is_gunyah_guest(void) +{ + struct arm_smccc_res res; + uuid_t uuid; + u32 *up; + + arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID, &res); + + up = (u32 *)&uuid.b[0]; + up[0] = lower_32_bits(res.a0); + up[1] = lower_32_bits(res.a1); + up[2] = lower_32_bits(res.a2); + up[3] = lower_32_bits(res.a3); + + return uuid_equal(&uuid, &GUNYAH_UUID); +} +EXPORT_SYMBOL_GPL(arch_is_gunyah_guest); + +#define GUNYAH_HYPERCALL(fn) \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64, \ + ARM_SMCCC_OWNER_VENDOR_HYP, fn) + +/* clang-format off */ +#define GUNYAH_HYPERCALL_HYP_IDENTIFY GUNYAH_HYPERCALL(0x8000) +/* clang-format on */ + +/** + * gunyah_hypercall_hyp_identify() - Returns build information and feature flags + * supported by Gunyah. + * @hyp_identity: filled by the hypercall with the API info and feature flags. 
+ */ +void gunyah_hypercall_hyp_identify( + struct gunyah_hypercall_hyp_identify_resp *hyp_identity) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GUNYAH_HYPERCALL_HYP_IDENTIFY, &res); + + hyp_identity->api_info = res.a0; + hyp_identity->flags[0] = res.a1; + hyp_identity->flags[1] = res.a2; + hyp_identity->flags[2] = res.a3; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_hyp_identify); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Gunyah Hypervisor Hypercalls"); diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig index 40129b6f0eca..172a6a12073c 100644 --- a/drivers/virt/Kconfig +++ b/drivers/virt/Kconfig @@ -50,4 +50,6 @@ source "drivers/virt/acrn/Kconfig" source "drivers/virt/coco/Kconfig" +source "drivers/virt/gunyah/Kconfig" + endif diff --git a/drivers/virt/gunyah/Kconfig b/drivers/virt/gunyah/Kconfig new file mode 100644 index 000000000000..6f4c85db80b5 --- /dev/null +++ b/drivers/virt/gunyah/Kconfig @@ -0,0 +1,12 @@ +# SPDX-License-Identifier: GPL-2.0-only + +config GUNYAH + tristate "Gunyah Virtualization drivers" + depends on ARM64 + help + The Gunyah drivers are the helper interfaces that run in a guest VM + such as basic inter-VM IPC and signaling mechanisms, and higher level + services such as memory/device sharing, IRQ sharing, and so on. + + Say Y/M here to enable the drivers needed to interact in a Gunyah + virtual environment. diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 1eab631a49b6..33bcbd22d39f 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -6,9 +6,11 @@ #ifndef _LINUX_GUNYAH_H #define _LINUX_GUNYAH_H +#include #include #include #include +#include /* Matches resource manager's resource types for VM_GET_HYP_RESOURCES RPC */ enum gunyah_resource_type { @@ -103,4 +105,40 @@ static inline int gunyah_error_remap(enum gunyah_error gunyah_error) } } +enum gunyah_api_feature { + /* clang-format off */ + GUNYAH_FEATURE_DOORBELL = 1, + GUNYAH_FEATURE_MSGQUEUE = 2, + GUNYAH_FEATURE_VCPU = 5, + GUNYAH_FEATURE_MEMEXTENT = 6, + /* clang-format on */ +}; + +bool arch_is_gunyah_guest(void); + +#define GUNYAH_API_V1 1 + +/* Other bits reserved for future use and will be zero */ +/* clang-format off */ +#define GUNYAH_API_INFO_API_VERSION_MASK GENMASK_ULL(13, 0) +#define GUNYAH_API_INFO_BIG_ENDIAN BIT_ULL(14) +#define GUNYAH_API_INFO_IS_64BIT BIT_ULL(15) +#define GUNYAH_API_INFO_VARIANT_MASK GENMASK_ULL(63, 56) +/* clang-format on */ + +struct gunyah_hypercall_hyp_identify_resp { + u64 api_info; + u64 flags[3]; +}; + +static inline u16 +gunyah_api_version(const struct gunyah_hypercall_hyp_identify_resp *gunyah_api) +{ + return FIELD_GET(GUNYAH_API_INFO_API_VERSION_MASK, + gunyah_api->api_info); +} + +void gunyah_hypercall_hyp_identify( + struct gunyah_hypercall_hyp_identify_resp *hyp_identity); + #endif From patchwork Tue Jan 9 19:37:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761097 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A4E7F3D3A0; Tue, 9 Jan 2024 19:38:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) 
From: Elliot Berman
Date: Tue, 9 Jan 2024 11:37:43 -0800
Subject: [PATCH v16 05/34] virt: gunyah: Add hypervisor driver
Message-ID: <20240109-gunyah-v16-5-634904bf4ce9@quicinc.com>
References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>

Add driver to detect when running under Gunyah. It performs basic
identification hypercall and populates the platform bus for resource manager
to probe.
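For illustration only (not part of this patch), a skeleton of the kind of
child driver that devm_of_platform_populate() enables: once the
gunyah-hypervisor node is populated, its gunyah-resource-mgr child from the
dt-bindings patch becomes a platform device that a driver can bind to by
compatible string. The example_* names are made up; the real resource
manager driver arrives later in this series.

    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    static int example_rm_probe(struct platform_device *pdev)
    {
            /* A real driver would read "reg" (capability IDs) and "interrupts" */
            dev_info(&pdev->dev, "populated by gunyah hypervisor driver\n");
            return 0;
    }

    static const struct of_device_id example_rm_of_match[] = {
            { .compatible = "gunyah-resource-manager" },
            {}
    };
    MODULE_DEVICE_TABLE(of, example_rm_of_match);

    static struct platform_driver example_rm_driver = {
            .probe = example_rm_probe,
            .driver = {
                    .name = "example-gunyah-rm",
                    .of_match_table = example_rm_of_match,
            },
    };
    module_platform_driver(example_rm_driver);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Example Gunyah child driver skeleton");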
Signed-off-by: Elliot Berman --- drivers/virt/Makefile | 1 + drivers/virt/gunyah/Makefile | 3 +++ drivers/virt/gunyah/gunyah.c | 52 ++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 56 insertions(+) diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile index f29901bd7820..ef6a3835d078 100644 --- a/drivers/virt/Makefile +++ b/drivers/virt/Makefile @@ -10,3 +10,4 @@ obj-y += vboxguest/ obj-$(CONFIG_NITRO_ENCLAVES) += nitro_enclaves/ obj-$(CONFIG_ACRN_HSM) += acrn/ obj-y += coco/ +obj-y += gunyah/ diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile new file mode 100644 index 000000000000..34f32110faf9 --- /dev/null +++ b/drivers/virt/gunyah/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0 + +obj-$(CONFIG_GUNYAH) += gunyah.o diff --git a/drivers/virt/gunyah/gunyah.c b/drivers/virt/gunyah/gunyah.c new file mode 100644 index 000000000000..ef8a85f27590 --- /dev/null +++ b/drivers/virt/gunyah/gunyah.c @@ -0,0 +1,52 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include +#include +#include +#include + +static int gunyah_probe(struct platform_device *pdev) +{ + struct gunyah_hypercall_hyp_identify_resp gunyah_api; + + if (!arch_is_gunyah_guest()) + return -ENODEV; + + gunyah_hypercall_hyp_identify(&gunyah_api); + + pr_info("Running under Gunyah hypervisor %llx/v%u\n", + FIELD_GET(GUNYAH_API_INFO_VARIANT_MASK, gunyah_api.api_info), + gunyah_api_version(&gunyah_api)); + + /* Might move this out to individual drivers if there's ever an API version bump */ + if (gunyah_api_version(&gunyah_api) != GUNYAH_API_V1) { + pr_info("Unsupported Gunyah version: %u\n", + gunyah_api_version(&gunyah_api)); + return -ENODEV; + } + + return devm_of_platform_populate(&pdev->dev); +} + +static const struct of_device_id gunyah_of_match[] = { + { .compatible = "gunyah-hypervisor" }, + {} +}; +MODULE_DEVICE_TABLE(of, gunyah_of_match); + +/* clang-format off */ +static struct platform_driver gunyah_driver = { + .probe = gunyah_probe, + .driver = { + .name = "gunyah", + .of_match_table = gunyah_of_match, + } +}; +/* clang-format on */ +module_platform_driver(gunyah_driver); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Gunyah Driver"); From patchwork Tue Jan 9 19:37:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761095 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1E07D3D986; Tue, 9 Jan 2024 19:38:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="E6tW3/B+" Received: from pps.filterd (m0279863.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409IoBRb003328; Tue, 9 Jan 2024 19:37:54 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=QdXgUydOaam49lWM9hHVQ3unnDzFxn1NJUUWfFsZFDo =; 
From: Elliot Berman
Date: Tue, 9 Jan 2024 11:37:44 -0800
Subject: [PATCH v16 06/34] virt: gunyah: msgq: Add hypercalls to send and receive messages
Message-ID: <20240109-gunyah-v16-6-634904bf4ce9@quicinc.com>
References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>

Add hypercalls to send and receive messages on a Gunyah message queue.
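For illustration only (not part of this patch), a minimal sketch of pushing
one message on a TX message queue capability and converting the Gunyah
return code into an errno; example_msgq_push() is a made-up name, and the
capability ID would come from the devicetree node described in the
dt-bindings patch:

    #include <linux/errno.h>
    #include <linux/gunyah.h>

    static int example_msgq_push(u64 tx_capid, void *buf, size_t len)
    {
            enum gunyah_error gunyah_error;
            bool ready;

            /* PUSH asks Gunyah to raise the receiver's Rx vIRQ immediately */
            gunyah_error = gunyah_hypercall_msgq_send(tx_capid, len, buf,
                                    GUNYAH_HYPERCALL_MSGQ_TX_FLAGS_PUSH,
                                    &ready);
            if (gunyah_error == GUNYAH_ERROR_MSGQUEUE_FULL)
                    return -EAGAIN; /* wait for the Tx vIRQ before retrying */

            return gunyah_error_remap(gunyah_error);
    }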
Reviewed-by: Alex Elder Reviewed-by: Srinivas Kandagatla Signed-off-by: Elliot Berman --- arch/arm64/gunyah/gunyah_hypercall.c | 55 ++++++++++++++++++++++++++++++++++++ include/linux/gunyah.h | 8 ++++++ 2 files changed, 63 insertions(+) diff --git a/arch/arm64/gunyah/gunyah_hypercall.c b/arch/arm64/gunyah/gunyah_hypercall.c index d44663334f38..1302e128be6e 100644 --- a/arch/arm64/gunyah/gunyah_hypercall.c +++ b/arch/arm64/gunyah/gunyah_hypercall.c @@ -37,6 +37,8 @@ EXPORT_SYMBOL_GPL(arch_is_gunyah_guest); /* clang-format off */ #define GUNYAH_HYPERCALL_HYP_IDENTIFY GUNYAH_HYPERCALL(0x8000) +#define GUNYAH_HYPERCALL_MSGQ_SEND GUNYAH_HYPERCALL(0x801B) +#define GUNYAH_HYPERCALL_MSGQ_RECV GUNYAH_HYPERCALL(0x801C) /* clang-format on */ /** @@ -58,5 +60,58 @@ void gunyah_hypercall_hyp_identify( } EXPORT_SYMBOL_GPL(gunyah_hypercall_hyp_identify); +/** + * gunyah_hypercall_msgq_send() - Send a buffer on a message queue + * @capid: capability ID of the message queue to add message + * @size: Size of @buff + * @buff: Address of buffer to send + * @tx_flags: See GUNYAH_HYPERCALL_MSGQ_TX_FLAGS_* + * @ready: If the send was successful, ready is filled with true if more + * messages can be sent on the queue. If false, then the tx IRQ will + * be raised in future when send can succeed. + */ +enum gunyah_error gunyah_hypercall_msgq_send(u64 capid, size_t size, void *buff, + u64 tx_flags, bool *ready) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GUNYAH_HYPERCALL_MSGQ_SEND, capid, size, + (uintptr_t)buff, tx_flags, 0, &res); + + if (res.a0 == GUNYAH_ERROR_OK) + *ready = !!res.a1; + + return res.a0; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_msgq_send); + +/** + * gunyah_hypercall_msgq_recv() - Send a buffer on a message queue + * @capid: capability ID of the message queue to add message + * @buff: Address of buffer to copy received data into + * @size: Size of @buff + * @recv_size: If the receive was successful, recv_size is filled with the + * size of data received. Will be <= size. + * @ready: If the receive was successful, ready is filled with true if more + * messages are ready to be received on the queue. If false, then the + * rx IRQ will be raised in future when recv can succeed. 
+ */ +enum gunyah_error gunyah_hypercall_msgq_recv(u64 capid, void *buff, size_t size, + size_t *recv_size, bool *ready) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GUNYAH_HYPERCALL_MSGQ_RECV, capid, (uintptr_t)buff, + size, 0, &res); + + if (res.a0 == GUNYAH_ERROR_OK) { + *recv_size = res.a1; + *ready = !!res.a2; + } + + return res.a0; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_msgq_recv); + MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("Gunyah Hypervisor Hypercalls"); diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 33bcbd22d39f..acd70f982425 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -141,4 +141,12 @@ gunyah_api_version(const struct gunyah_hypercall_hyp_identify_resp *gunyah_api) void gunyah_hypercall_hyp_identify( struct gunyah_hypercall_hyp_identify_resp *hyp_identity); +/* Immediately raise RX vIRQ on receiver VM */ +#define GUNYAH_HYPERCALL_MSGQ_TX_FLAGS_PUSH BIT(0) + +enum gunyah_error gunyah_hypercall_msgq_send(u64 capid, size_t size, void *buff, + u64 tx_flags, bool *ready); +enum gunyah_error gunyah_hypercall_msgq_recv(u64 capid, void *buff, size_t size, + size_t *recv_size, bool *ready); + #endif From patchwork Tue Jan 9 19:37:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761096 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7CD623D56E; Tue, 9 Jan 2024 19:38:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="Tp3GARDz" Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409IbI9t001477; Tue, 9 Jan 2024 19:37:55 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=PVavHXaSOmGFizSDr2TbfnB4vcPUF8zAyUPvn7d4XeM =; b=Tp3GARDzwIqil+y9XsjAyyrnArDlj4heQeEoNvOIJiv/Bd0nT9jcGgSNibX o0YVEJWNm1d/lk1MY/DF+/ZNA+Tusi8hf70rgWNnsAHac/vZanuQmMqIzhEERtJY gLlC74d6ghmc39H+K4XMi9rGdbDCNkkNq6aPwLZZJeCJa7763zZGGDM3EAOq/73s 5KTk1fbyX5ymFpbSo519S36MQyAzArduWCGMjopb4dh2HySBiXFBQ2lcsa+AcP8f 7oMMMYAWh2jX+lzuMjAnVoiQHTYNuvNEh/sfIAJ0PWdfPdU8lleujHPOAO5woTdt Z/s9xzTJW0yzecJFwooOL1/UCMw== Received: from nasanppmta03.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh85t0pmr-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:55 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409JbsRr011365 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:54 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:53 -0800 From: Elliot Berman Date: Tue, 
9 Jan 2024 11:37:45 -0800 Subject: [PATCH v16 07/34] gunyah: rsc_mgr: Add resource manager RPC core Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-7-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: cU8gJwoxE30MYItzgi1xt3CNb7AHA64h X-Proofpoint-GUID: cU8gJwoxE30MYItzgi1xt3CNb7AHA64h X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 bulkscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 clxscore=1015 priorityscore=1501 adultscore=0 impostorscore=0 phishscore=0 spamscore=0 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 The resource manager is a special virtual machine which is always running on a Gunyah system. It provides APIs for creating and destroying VMs, secure memory management, sharing/lending of memory between VMs, and setup of inter-VM communication. Calls to the resource manager are made via message queues. This patch implements the basic probing and RPC mechanism to make those API calls. Request/response calls can be made with gh_rm_call. Drivers can also register to notifications pushed by RM via gh_rm_register_notifier Specific API calls that resource manager supports will be implemented in subsequent patches. Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 4 +- drivers/virt/gunyah/rsc_mgr.c | 724 ++++++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/rsc_mgr.h | 28 ++ 3 files changed, 755 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index 34f32110faf9..c2308389f551 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -1,3 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -obj-$(CONFIG_GUNYAH) += gunyah.o +gunyah_rsc_mgr-y += rsc_mgr.o + +obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o diff --git a/drivers/virt/gunyah/rsc_mgr.c b/drivers/virt/gunyah/rsc_mgr.c new file mode 100644 index 000000000000..a3578a0c10b4 --- /dev/null +++ b/drivers/virt/gunyah/rsc_mgr.c @@ -0,0 +1,724 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include + +#include "rsc_mgr.h" + +/* clang-format off */ +#define RM_RPC_API_VERSION_MASK GENMASK(3, 0) +#define RM_RPC_HEADER_WORDS_MASK GENMASK(7, 4) +#define RM_RPC_API_VERSION FIELD_PREP(RM_RPC_API_VERSION_MASK, 1) +#define RM_RPC_HEADER_WORDS FIELD_PREP(RM_RPC_HEADER_WORDS_MASK, \ + (sizeof(struct gunyah_rm_rpc_hdr) / sizeof(u32))) +#define RM_RPC_API (RM_RPC_API_VERSION | RM_RPC_HEADER_WORDS) + +#define RM_RPC_TYPE_CONTINUATION 0x0 +#define RM_RPC_TYPE_REQUEST 0x1 +#define RM_RPC_TYPE_REPLY 0x2 +#define RM_RPC_TYPE_NOTIF 0x3 +#define RM_RPC_TYPE_MASK GENMASK(1, 0) + +#define GUNYAH_RM_MAX_NUM_FRAGMENTS 62 +#define RM_RPC_FRAGMENTS_MASK GENMASK(7, 2) +/* clang-format on */ + +struct gunyah_rm_rpc_hdr { + u8 api; + u8 type; + __le16 seq; + __le32 msg_id; +} __packed; + +struct gunyah_rm_rpc_reply_hdr { + struct gunyah_rm_rpc_hdr hdr; + __le32 err_code; /* GUNYAH_RM_ERROR_* */ +} __packed; + +#define GUNYAH_RM_MSGQ_MSG_SIZE 240 +#define GUNYAH_RM_PAYLOAD_SIZE \ + (GUNYAH_RM_MSGQ_MSG_SIZE - sizeof(struct gunyah_rm_rpc_hdr)) + +/* RM Error codes */ +enum gunyah_rm_error { + /* clang-format off */ + GUNYAH_RM_ERROR_OK = 0x0, + GUNYAH_RM_ERROR_UNIMPLEMENTED = 0xFFFFFFFF, + GUNYAH_RM_ERROR_NOMEM = 0x1, + GUNYAH_RM_ERROR_NORESOURCE = 0x2, + GUNYAH_RM_ERROR_DENIED = 0x3, + GUNYAH_RM_ERROR_INVALID = 0x4, + GUNYAH_RM_ERROR_BUSY = 0x5, + GUNYAH_RM_ERROR_ARGUMENT_INVALID = 0x6, + GUNYAH_RM_ERROR_HANDLE_INVALID = 0x7, + GUNYAH_RM_ERROR_VALIDATE_FAILED = 0x8, + GUNYAH_RM_ERROR_MAP_FAILED = 0x9, + GUNYAH_RM_ERROR_MEM_INVALID = 0xA, + GUNYAH_RM_ERROR_MEM_INUSE = 0xB, + GUNYAH_RM_ERROR_MEM_RELEASED = 0xC, + GUNYAH_RM_ERROR_VMID_INVALID = 0xD, + GUNYAH_RM_ERROR_LOOKUP_FAILED = 0xE, + GUNYAH_RM_ERROR_IRQ_INVALID = 0xF, + GUNYAH_RM_ERROR_IRQ_INUSE = 0x10, + GUNYAH_RM_ERROR_IRQ_RELEASED = 0x11, + /* clang-format on */ +}; + +/** + * struct gunyah_rm_message - Represents a complete message from resource manager + * @payload: Combined payload of all the fragments (msg headers stripped off). + * @size: Size of the payload received so far. + * @msg_id: Message ID from the header. + * @type: RM_RPC_TYPE_REPLY or RM_RPC_TYPE_NOTIF. + * @num_fragments: total number of fragments expected to be received. + * @fragments_received: fragments received so far. + * @reply: Fields used for request/reply sequences + */ +struct gunyah_rm_message { + void *payload; + size_t size; + u32 msg_id; + u8 type; + + u8 num_fragments; + u8 fragments_received; + + /** + * @ret: Linux return code, there was an error processing message + * @seq: Sequence ID for the main message. + * @rm_error: For request/reply sequences with standard replies + * @seq_done: Signals caller that the RM reply has been received + */ + struct { + int ret; + u16 seq; + enum gunyah_rm_error rm_error; + struct completion seq_done; + } reply; +}; + +/** + * struct gunyah_rm - private data for communicating w/Gunyah resource manager + * @dev: pointer to RM platform device + * @tx_ghrsc: message queue resource to TX to RM + * @rx_ghrsc: message queue resource to RX from RM + * @active_rx_message: ongoing gunyah_rm_message for which we're receiving fragments + * @call_xarray: xarray to allocate & lookup sequence IDs for Request/Response flows + * @next_seq: next ID to allocate (for xa_alloc_cyclic) + * @recv_msg: cached allocation for Rx messages + * @send_msg: cached allocation for Tx messages. Must hold @send_lock to manipulate. 
+ * @send_lock: synchronization to allow only one request to be sent at a time + * @send_ready: completed when we know Tx message queue can take more messages + * @nh: notifier chain for clients interested in RM notification messages + */ +struct gunyah_rm { + struct device *dev; + struct gunyah_resource tx_ghrsc; + struct gunyah_resource rx_ghrsc; + struct gunyah_rm_message *active_rx_message; + + struct xarray call_xarray; + u32 next_seq; + + unsigned char recv_msg[GUNYAH_RM_MSGQ_MSG_SIZE]; + unsigned char send_msg[GUNYAH_RM_MSGQ_MSG_SIZE]; + struct mutex send_lock; + struct completion send_ready; + struct blocking_notifier_head nh; +}; + +/** + * gunyah_rm_error_remap() - Remap Gunyah resource manager errors into a Linux error code + * @rm_error: "Standard" return value from Gunyah resource manager + */ +static inline int gunyah_rm_error_remap(enum gunyah_rm_error rm_error) +{ + switch (rm_error) { + case GUNYAH_RM_ERROR_OK: + return 0; + case GUNYAH_RM_ERROR_UNIMPLEMENTED: + return -EOPNOTSUPP; + case GUNYAH_RM_ERROR_NOMEM: + return -ENOMEM; + case GUNYAH_RM_ERROR_NORESOURCE: + return -ENODEV; + case GUNYAH_RM_ERROR_DENIED: + return -EPERM; + case GUNYAH_RM_ERROR_BUSY: + return -EBUSY; + case GUNYAH_RM_ERROR_INVALID: + case GUNYAH_RM_ERROR_ARGUMENT_INVALID: + case GUNYAH_RM_ERROR_HANDLE_INVALID: + case GUNYAH_RM_ERROR_VALIDATE_FAILED: + case GUNYAH_RM_ERROR_MAP_FAILED: + case GUNYAH_RM_ERROR_MEM_INVALID: + case GUNYAH_RM_ERROR_MEM_INUSE: + case GUNYAH_RM_ERROR_MEM_RELEASED: + case GUNYAH_RM_ERROR_VMID_INVALID: + case GUNYAH_RM_ERROR_LOOKUP_FAILED: + case GUNYAH_RM_ERROR_IRQ_INVALID: + case GUNYAH_RM_ERROR_IRQ_INUSE: + case GUNYAH_RM_ERROR_IRQ_RELEASED: + return -EINVAL; + default: + return -EBADMSG; + } +} + +static int gunyah_rm_init_message_payload(struct gunyah_rm_message *message, + const void *msg, size_t hdr_size, + size_t msg_size) +{ + const struct gunyah_rm_rpc_hdr *hdr = msg; + size_t max_buf_size, payload_size; + + if (msg_size < hdr_size) + return -EINVAL; + + payload_size = msg_size - hdr_size; + + message->num_fragments = FIELD_GET(RM_RPC_FRAGMENTS_MASK, hdr->type); + message->fragments_received = 0; + + /* There's not going to be any payload, no need to allocate buffer. 
*/ + if (!payload_size && !message->num_fragments) + return 0; + + if (message->num_fragments > GUNYAH_RM_MAX_NUM_FRAGMENTS) + return -EINVAL; + + max_buf_size = payload_size + + (message->num_fragments * GUNYAH_RM_PAYLOAD_SIZE); + + message->payload = kzalloc(max_buf_size, GFP_KERNEL); + if (!message->payload) + return -ENOMEM; + + memcpy(message->payload, msg + hdr_size, payload_size); + message->size = payload_size; + return 0; +} + +static void gunyah_rm_abort_message(struct gunyah_rm *rm) +{ + switch (rm->active_rx_message->type) { + case RM_RPC_TYPE_REPLY: + rm->active_rx_message->reply.ret = -EIO; + complete(&rm->active_rx_message->reply.seq_done); + break; + case RM_RPC_TYPE_NOTIF: + fallthrough; + default: + kfree(rm->active_rx_message->payload); + kfree(rm->active_rx_message); + } + + rm->active_rx_message = NULL; +} + +static inline void gunyah_rm_try_complete_message(struct gunyah_rm *rm) +{ + struct gunyah_rm_message *message = rm->active_rx_message; + + if (!message || message->fragments_received != message->num_fragments) + return; + + switch (message->type) { + case RM_RPC_TYPE_REPLY: + complete(&message->reply.seq_done); + break; + case RM_RPC_TYPE_NOTIF: + blocking_notifier_call_chain(&rm->nh, message->msg_id, + message->payload); + + kfree(message->payload); + kfree(message); + break; + default: + dev_err_ratelimited(rm->dev, + "Invalid message type (%u) received\n", + message->type); + gunyah_rm_abort_message(rm); + break; + } + + rm->active_rx_message = NULL; +} + +static void gunyah_rm_process_notif(struct gunyah_rm *rm, const void *msg, + size_t msg_size) +{ + const struct gunyah_rm_rpc_hdr *hdr = msg; + struct gunyah_rm_message *message; + int ret; + + if (rm->active_rx_message) { + dev_err(rm->dev, + "Unexpected new notification, still processing an active message"); + gunyah_rm_abort_message(rm); + } + + message = kzalloc(sizeof(*message), GFP_KERNEL); + if (!message) + return; + + message->type = RM_RPC_TYPE_NOTIF; + message->msg_id = le32_to_cpu(hdr->msg_id); + + ret = gunyah_rm_init_message_payload(message, msg, sizeof(*hdr), + msg_size); + if (ret) { + dev_err(rm->dev, + "Failed to initialize message for notification: %d\n", + ret); + kfree(message); + return; + } + + rm->active_rx_message = message; + + gunyah_rm_try_complete_message(rm); +} + +static void gunyah_rm_process_reply(struct gunyah_rm *rm, const void *msg, + size_t msg_size) +{ + const struct gunyah_rm_rpc_reply_hdr *reply_hdr = msg; + struct gunyah_rm_message *message; + u16 seq_id; + + seq_id = le16_to_cpu(reply_hdr->hdr.seq); + message = xa_load(&rm->call_xarray, seq_id); + + if (!message || message->msg_id != le32_to_cpu(reply_hdr->hdr.msg_id)) + return; + + if (rm->active_rx_message) { + dev_err(rm->dev, + "Unexpected new reply, still processing an active message"); + gunyah_rm_abort_message(rm); + } + + if (gunyah_rm_init_message_payload(message, msg, sizeof(*reply_hdr), + msg_size)) { + dev_err(rm->dev, + "Failed to alloc message buffer for sequence %d\n", + seq_id); + /* Send message complete and error the client. 
*/ + message->reply.ret = -ENOMEM; + complete(&message->reply.seq_done); + return; + } + + message->reply.rm_error = le32_to_cpu(reply_hdr->err_code); + rm->active_rx_message = message; + + gunyah_rm_try_complete_message(rm); +} + +static void gunyah_rm_process_cont(struct gunyah_rm *rm, + struct gunyah_rm_message *message, + const void *msg, size_t msg_size) +{ + const struct gunyah_rm_rpc_hdr *hdr = msg; + size_t payload_size = msg_size - sizeof(*hdr); + + if (!rm->active_rx_message) + return; + + /* + * hdr->fragments and hdr->msg_id preserves the value from first reply + * or notif message. To detect mishandling, check it's still intact. + */ + if (message->msg_id != le32_to_cpu(hdr->msg_id) || + message->num_fragments != + FIELD_GET(RM_RPC_FRAGMENTS_MASK, hdr->type)) { + gunyah_rm_abort_message(rm); + return; + } + + memcpy(message->payload + message->size, msg + sizeof(*hdr), + payload_size); + message->size += payload_size; + message->fragments_received++; + + gunyah_rm_try_complete_message(rm); +} + +static irqreturn_t gunyah_rm_rx(int irq, void *data) +{ + enum gunyah_error gunyah_error; + struct gunyah_rm_rpc_hdr *hdr; + struct gunyah_rm *rm = data; + void *msg = &rm->recv_msg[0]; + size_t len; + bool ready; + + do { + gunyah_error = gunyah_hypercall_msgq_recv(rm->rx_ghrsc.capid, + msg, + sizeof(rm->recv_msg), + &len, &ready); + if (gunyah_error != GUNYAH_ERROR_OK) { + if (gunyah_error != GUNYAH_ERROR_MSGQUEUE_EMPTY) + dev_warn(rm->dev, + "Failed to receive data: %d\n", + gunyah_error); + return IRQ_HANDLED; + } + + if (len < sizeof(*hdr)) { + dev_err_ratelimited( + rm->dev, + "Too small message received. size=%ld\n", len); + continue; + } + + hdr = msg; + if (hdr->api != RM_RPC_API) { + dev_err(rm->dev, "Unknown RM RPC API version: %x\n", + hdr->api); + return IRQ_HANDLED; + } + + switch (FIELD_GET(RM_RPC_TYPE_MASK, hdr->type)) { + case RM_RPC_TYPE_NOTIF: + gunyah_rm_process_notif(rm, msg, len); + break; + case RM_RPC_TYPE_REPLY: + gunyah_rm_process_reply(rm, msg, len); + break; + case RM_RPC_TYPE_CONTINUATION: + gunyah_rm_process_cont(rm, rm->active_rx_message, msg, + len); + break; + default: + dev_err(rm->dev, + "Invalid message type (%lu) received\n", + FIELD_GET(RM_RPC_TYPE_MASK, hdr->type)); + return IRQ_HANDLED; + } + } while (ready); + + return IRQ_HANDLED; +} + +static irqreturn_t gunyah_rm_tx(int irq, void *data) +{ + struct gunyah_rm *rm = data; + + complete_all(&rm->send_ready); + + return IRQ_HANDLED; +} + +static int gunyah_rm_msgq_send(struct gunyah_rm *rm, size_t size, bool push) + __must_hold(&rm->send_lock) +{ + const u64 tx_flags = push ? GUNYAH_HYPERCALL_MSGQ_TX_FLAGS_PUSH : 0; + enum gunyah_error gunyah_error; + void *data = &rm->send_msg[0]; + bool ready; + +again: + wait_for_completion(&rm->send_ready); + /* reinit completion before hypercall. As soon as hypercall returns, we could get the + * ready interrupt. 
This might be before we have time to reinit the completion + */ + reinit_completion(&rm->send_ready); + gunyah_error = gunyah_hypercall_msgq_send(rm->tx_ghrsc.capid, size, + data, tx_flags, &ready); + + /* Should never happen because Linux properly tracks the ready-state of the msgq */ + if (WARN_ON(gunyah_error == GUNYAH_ERROR_MSGQUEUE_FULL)) + goto again; + + if (ready) + complete_all(&rm->send_ready); + + return gunyah_error_remap(gunyah_error); +} + +static int gunyah_rm_send_request(struct gunyah_rm *rm, u32 message_id, + const void *req_buf, size_t req_buf_size, + struct gunyah_rm_message *message) +{ + size_t buf_size_remaining = req_buf_size; + const void *req_buf_curr = req_buf; + struct gunyah_rm_rpc_hdr *hdr = + (struct gunyah_rm_rpc_hdr *)&rm->send_msg[0]; + struct gunyah_rm_rpc_hdr hdr_template; + void *payload = hdr + 1; + u32 cont_fragments = 0; + size_t payload_size; + bool push; + int ret; + + if (req_buf_size > + GUNYAH_RM_MAX_NUM_FRAGMENTS * GUNYAH_RM_PAYLOAD_SIZE) { + dev_warn( + rm->dev, + "Limit (%lu bytes) exceeded for the maximum message size: %lu\n", + GUNYAH_RM_MAX_NUM_FRAGMENTS * GUNYAH_RM_PAYLOAD_SIZE, + req_buf_size); + dump_stack(); + return -E2BIG; + } + + if (req_buf_size) + cont_fragments = (req_buf_size - 1) / GUNYAH_RM_PAYLOAD_SIZE; + + hdr_template.api = RM_RPC_API; + hdr_template.type = FIELD_PREP(RM_RPC_TYPE_MASK, RM_RPC_TYPE_REQUEST) | + FIELD_PREP(RM_RPC_FRAGMENTS_MASK, cont_fragments); + hdr_template.seq = cpu_to_le16(message->reply.seq); + hdr_template.msg_id = cpu_to_le32(message_id); + + ret = mutex_lock_interruptible(&rm->send_lock); + if (ret) + return ret; + + do { + *hdr = hdr_template; + + /* Copy payload */ + payload_size = min(buf_size_remaining, GUNYAH_RM_PAYLOAD_SIZE); + memcpy(payload, req_buf_curr, payload_size); + req_buf_curr += payload_size; + buf_size_remaining -= payload_size; + + /* Only the last message should have push flag set */ + push = !buf_size_remaining; + ret = gunyah_rm_msgq_send(rm, sizeof(*hdr) + payload_size, + push); + if (ret) + break; + + hdr_template.type = + FIELD_PREP(RM_RPC_TYPE_MASK, RM_RPC_TYPE_CONTINUATION) | + FIELD_PREP(RM_RPC_FRAGMENTS_MASK, cont_fragments); + } while (buf_size_remaining); + + mutex_unlock(&rm->send_lock); + return ret; +} + +/** + * gunyah_rm_call: Achieve request-response type communication with RPC + * @rm: Pointer to Gunyah resource manager internal data + * @message_id: The RM RPC message-id + * @req_buf: Request buffer that contains the payload + * @req_buf_size: Total size of the payload + * @resp_buf: Pointer to a response buffer + * @resp_buf_size: Size of the response buffer + * + * Make a request to the Resource Manager and wait for reply back. For a successful + * response, the function returns the payload. The size of the payload is set in + * resp_buf_size. The resp_buf must be freed by the caller when 0 is returned + * and resp_buf_size != 0. + * + * req_buf should be not NULL for req_buf_size >0. If req_buf_size == 0, + * req_buf *can* be NULL and no additional payload is sent. + * + * Context: Process context. Will sleep waiting for reply. + * Return: 0 on success. <0 if error. + */ +int gunyah_rm_call(struct gunyah_rm *rm, u32 message_id, const void *req_buf, + size_t req_buf_size, void **resp_buf, size_t *resp_buf_size) +{ + struct gunyah_rm_message message = { 0 }; + u32 seq_id; + int ret; + + /* message_id 0 is reserved. 
req_buf_size implies req_buf is not NULL */ + if (!rm || !message_id || (!req_buf && req_buf_size)) + return -EINVAL; + + message.type = RM_RPC_TYPE_REPLY; + message.msg_id = message_id; + + message.reply.seq_done = + COMPLETION_INITIALIZER_ONSTACK(message.reply.seq_done); + + /* Allocate a new seq number for this message */ + ret = xa_alloc_cyclic(&rm->call_xarray, &seq_id, &message, xa_limit_16b, + &rm->next_seq, GFP_KERNEL); + if (ret < 0) + return ret; + message.reply.seq = lower_16_bits(seq_id); + + /* Send the request to the Resource Manager */ + ret = gunyah_rm_send_request(rm, message_id, req_buf, req_buf_size, + &message); + if (ret < 0) { + dev_warn(rm->dev, "Failed to send request. Error: %d\n", ret); + goto out; + } + + /* + * Wait for response. Uninterruptible because rollback based on what RM did to VM + * requires us to know how RM handled the call. + */ + wait_for_completion(&message.reply.seq_done); + + /* Check for internal (kernel) error waiting for the response */ + if (message.reply.ret) { + ret = message.reply.ret; + if (ret != -ENOMEM) + kfree(message.payload); + goto out; + } + + /* Got a response, did resource manager give us an error? */ + if (message.reply.rm_error != GUNYAH_RM_ERROR_OK) { + dev_warn(rm->dev, "RM rejected message %08x. Error: %d\n", + message_id, message.reply.rm_error); + ret = gunyah_rm_error_remap(message.reply.rm_error); + kfree(message.payload); + goto out; + } + + /* Everything looks good, return the payload */ + if (resp_buf_size) + *resp_buf_size = message.size; + + if (message.size && resp_buf) { + *resp_buf = message.payload; + } else { + /* kfree in case RM sent us multiple fragments but never any data in + * those fragments. We would've allocated memory for it, but message.size == 0 + */ + kfree(message.payload); + } + +out: + xa_erase(&rm->call_xarray, message.reply.seq); + return ret; +} + +int gunyah_rm_notifier_register(struct gunyah_rm *rm, struct notifier_block *nb) +{ + return blocking_notifier_chain_register(&rm->nh, nb); +} +EXPORT_SYMBOL_GPL(gunyah_rm_notifier_register); + +int gunyah_rm_notifier_unregister(struct gunyah_rm *rm, + struct notifier_block *nb) +{ + return blocking_notifier_chain_unregister(&rm->nh, nb); +} +EXPORT_SYMBOL_GPL(gunyah_rm_notifier_unregister); + +static int gunyah_platform_probe_capability(struct platform_device *pdev, + int idx, + struct gunyah_resource *ghrsc) +{ + int ret; + + ghrsc->irq = platform_get_irq(pdev, idx); + if (ghrsc->irq < 0) { + dev_err(&pdev->dev, "Failed to get %s irq: %d\n", + idx ? "rx" : "tx", ghrsc->irq); + return ghrsc->irq; + } + + ret = of_property_read_u64_index(pdev->dev.of_node, "reg", idx, + &ghrsc->capid); + if (ret) { + dev_err(&pdev->dev, "Failed to get %s capid: %d\n", + idx ? 
"rx" : "tx", ret); + return ret; + } + + return 0; +} + +static int gunyah_rm_probe_tx_msgq(struct gunyah_rm *rm, + struct platform_device *pdev) +{ + int ret; + + rm->tx_ghrsc.type = GUNYAH_RESOURCE_TYPE_MSGQ_TX; + ret = gunyah_platform_probe_capability(pdev, 0, &rm->tx_ghrsc); + if (ret) + return ret; + + enable_irq_wake(rm->tx_ghrsc.irq); + + return devm_request_threaded_irq(rm->dev, rm->tx_ghrsc.irq, NULL, + gunyah_rm_tx, IRQF_ONESHOT, + "gunyah_rm_tx", rm); +} + +static int gunyah_rm_probe_rx_msgq(struct gunyah_rm *rm, + struct platform_device *pdev) +{ + int ret; + + rm->rx_ghrsc.type = GUNYAH_RESOURCE_TYPE_MSGQ_RX; + ret = gunyah_platform_probe_capability(pdev, 1, &rm->rx_ghrsc); + if (ret) + return ret; + + enable_irq_wake(rm->rx_ghrsc.irq); + + return devm_request_threaded_irq(rm->dev, rm->rx_ghrsc.irq, NULL, + gunyah_rm_rx, IRQF_ONESHOT, + "gunyah_rm_rx", rm); +} + +static int gunyah_rm_probe(struct platform_device *pdev) +{ + struct gunyah_rm *rm; + int ret; + + rm = devm_kzalloc(&pdev->dev, sizeof(*rm), GFP_KERNEL); + if (!rm) + return -ENOMEM; + + platform_set_drvdata(pdev, rm); + rm->dev = &pdev->dev; + + mutex_init(&rm->send_lock); + init_completion(&rm->send_ready); + BLOCKING_INIT_NOTIFIER_HEAD(&rm->nh); + xa_init_flags(&rm->call_xarray, XA_FLAGS_ALLOC); + + ret = gunyah_rm_probe_tx_msgq(rm, pdev); + if (ret) + return ret; + /* assume RM is ready to receive messages from us */ + complete_all(&rm->send_ready); + + ret = gunyah_rm_probe_rx_msgq(rm, pdev); + if (ret) + return ret; + + return 0; +} + +static const struct of_device_id gunyah_rm_of_match[] = { + { .compatible = "gunyah-resource-manager" }, + {} +}; +MODULE_DEVICE_TABLE(of, gunyah_rm_of_match); + +static struct platform_driver gunyah_rm_driver = { + .probe = gunyah_rm_probe, + .driver = { + .name = "gunyah_rsc_mgr", + .of_match_table = gunyah_rm_of_match, + }, +}; +module_platform_driver(gunyah_rm_driver); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Gunyah Resource Manager Driver"); diff --git a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h new file mode 100644 index 000000000000..21318ef25040 --- /dev/null +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ +#ifndef __GUNYAH_RSC_MGR_PRIV_H +#define __GUNYAH_RSC_MGR_PRIV_H + +#include +#include +#include + +#define GUNYAH_VMID_INVAL U16_MAX + +struct gunyah_rm; + +int gunyah_rm_notifier_register(struct gunyah_rm *rm, + struct notifier_block *nb); +int gunyah_rm_notifier_unregister(struct gunyah_rm *rm, + struct notifier_block *nb); +struct device *gunyah_rm_get(struct gunyah_rm *rm); +void gunyah_rm_put(struct gunyah_rm *rm); + + +int gunyah_rm_call(struct gunyah_rm *rsc_mgr, u32 message_id, + const void *req_buf, size_t req_buf_size, void **resp_buf, + size_t *resp_buf_size); + +#endif From patchwork Tue Jan 9 19:37:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761571 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C66DF3D99E; Tue, 9 Jan 2024 19:38:16 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="E+VAyyYo" Received: from pps.filterd (m0279862.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409FdruR011442; Tue, 9 Jan 2024 19:37:55 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=qTohYqnlkOS9djmdgfBSDZ+nbkISb+SqC3fLuzzxepU =; b=E+VAyyYorxHmshatDCMLH8ECLlsp+HATu5eUsuW9WvRcqDyq1NDkpxtWszx RHJlAVhKQ+uwSwk4chnb8S/lO8t/55fWatsjXJC0lu3d14aKHGROyivUJhyZtiiZ A3UFNMX67OSG1hnqLTGBzG/fmpsDzZ3pwMcbcqgsLPEXEQks60CJku80OS9uEm5B QoID6o+GrEInIhyg8KBQgcgBSO/ZQQhNtrrq6uO1PY2Xc+c5TUQ/9oFkQTGDec2z hxyspICVouuIPol88vuUI9A0XpgfctWs7Yp7MuxRlyIfQjkUelwLwdpx84YsK9W7 gMYoxDfkV22GY8eH9YeY3Fupunw== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh3me184r-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:55 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409JbsPw011911 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:54 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:53 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:46 -0800 Subject: [PATCH v16 08/34] gunyah: vm_mgr: Introduce basic VM Manager Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-8-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , 
Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: lzNTDnqHMufbCm4ApLHJgRhpzW9et7Xt X-Proofpoint-GUID: lzNTDnqHMufbCm4ApLHJgRhpzW9et7Xt X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 spamscore=0 priorityscore=1501 mlxlogscore=999 lowpriorityscore=0 mlxscore=0 suspectscore=0 impostorscore=0 clxscore=1015 adultscore=0 malwarescore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Gunyah VM manager is a kernel module which exposes an interface to Gunyah userspace to load, run, and interact with other Gunyah virtual machines. The interface is a character device at /dev/gunyah. Add a basic VM manager driver. Upcoming patches will add more ioctls to this driver. Reviewed-by: Srinivas Kandagatla Reviewed-by: Alex Elder Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- Documentation/userspace-api/ioctl/ioctl-number.rst | 1 + drivers/virt/gunyah/Makefile | 2 +- drivers/virt/gunyah/rsc_mgr.c | 51 ++++++++++++ drivers/virt/gunyah/vm_mgr.c | 94 ++++++++++++++++++++++ drivers/virt/gunyah/vm_mgr.h | 28 +++++++ include/uapi/linux/gunyah.h | 23 ++++++ 6 files changed, 198 insertions(+), 1 deletion(-) diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst index d8b6cb1a3636..91e01925a41b 100644 --- a/Documentation/userspace-api/ioctl/ioctl-number.rst +++ b/Documentation/userspace-api/ioctl/ioctl-number.rst @@ -137,6 +137,7 @@ Code Seq# Include File Comments 'F' DD video/sstfb.h conflict! 'G' 00-3F drivers/misc/sgi-gru/grulib.h conflict! 'G' 00-0F xen/gntalloc.h, xen/gntdev.h conflict! +'G' 00-0F linux/gunyah.h conflict! 'H' 00-7F linux/hiddev.h conflict! 'H' 00-0F linux/hidraw.h conflict! 'H' 01 linux/mei.h conflict!
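
As an illustration of the userspace flow this patch enables, a minimal sketch follows (illustrative only, not part of the diff). It assumes the uapi header added below is installed as <linux/gunyah.h> and that the resource manager driver has already created /dev/gunyah:

/*
 * Sketch only: create a Gunyah VM file descriptor via /dev/gunyah.
 * Assumes <linux/gunyah.h> from this series is available.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include <linux/gunyah.h>

int main(void)
{
	int gunyah_fd, vm_fd;

	gunyah_fd = open("/dev/gunyah", O_RDWR | O_CLOEXEC);
	if (gunyah_fd < 0) {
		perror("open /dev/gunyah");
		return 1;
	}

	/* The argument is reserved and must be zero; a new VM fd is returned. */
	vm_fd = ioctl(gunyah_fd, GUNYAH_CREATE_VM, 0);
	if (vm_fd < 0) {
		perror("GUNYAH_CREATE_VM");
		close(gunyah_fd);
		return 1;
	}

	/* Subsequent patches add ioctls on vm_fd to configure and run the VM. */
	close(vm_fd);
	close(gunyah_fd);
	return 0;
}
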
diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index c2308389f551..ceccbbe68b38 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -gunyah_rsc_mgr-y += rsc_mgr.o +gunyah_rsc_mgr-y += rsc_mgr.o vm_mgr.o obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o diff --git a/drivers/virt/gunyah/rsc_mgr.c b/drivers/virt/gunyah/rsc_mgr.c index a3578a0c10b4..45f9514cfe0e 100644 --- a/drivers/virt/gunyah/rsc_mgr.c +++ b/drivers/virt/gunyah/rsc_mgr.c @@ -10,8 +10,10 @@ #include #include #include +#include #include "rsc_mgr.h" +#include "vm_mgr.h" /* clang-format off */ #define RM_RPC_API_VERSION_MASK GENMASK(3, 0) @@ -118,6 +120,7 @@ struct gunyah_rm_message { * @send_lock: synchronization to allow only one request to be sent at a time * @send_ready: completed when we know Tx message queue can take more messages * @nh: notifier chain for clients interested in RM notification messages + * @miscdev: /dev/gunyah */ struct gunyah_rm { struct device *dev; @@ -133,6 +136,8 @@ struct gunyah_rm { struct mutex send_lock; struct completion send_ready; struct blocking_notifier_head nh; + + struct miscdevice miscdev; }; /** @@ -617,6 +622,36 @@ int gunyah_rm_notifier_unregister(struct gunyah_rm *rm, } EXPORT_SYMBOL_GPL(gunyah_rm_notifier_unregister); +struct device *gunyah_rm_get(struct gunyah_rm *rm) +{ + return get_device(rm->miscdev.this_device); +} +EXPORT_SYMBOL_GPL(gunyah_rm_get); + +void gunyah_rm_put(struct gunyah_rm *rm) +{ + put_device(rm->miscdev.this_device); +} +EXPORT_SYMBOL_GPL(gunyah_rm_put); + +static long gunyah_dev_ioctl(struct file *filp, unsigned int cmd, + unsigned long arg) +{ + struct miscdevice *miscdev = filp->private_data; + struct gunyah_rm *rm = container_of(miscdev, struct gunyah_rm, miscdev); + + return gunyah_dev_vm_mgr_ioctl(rm, cmd, arg); +} + +static const struct file_operations gunyah_dev_fops = { + /* clang-format off */ + .owner = THIS_MODULE, + .unlocked_ioctl = gunyah_dev_ioctl, + .compat_ioctl = compat_ptr_ioctl, + .llseek = noop_llseek, + /* clang-format on */ +}; + static int gunyah_platform_probe_capability(struct platform_device *pdev, int idx, struct gunyah_resource *ghrsc) @@ -702,9 +737,24 @@ static int gunyah_rm_probe(struct platform_device *pdev) if (ret) return ret; + rm->miscdev.name = "gunyah"; + rm->miscdev.minor = MISC_DYNAMIC_MINOR; + rm->miscdev.fops = &gunyah_dev_fops; + + ret = misc_register(&rm->miscdev); + if (ret) + return ret; + return 0; } +static void gunyah_rm_remove(struct platform_device *pdev) +{ + struct gunyah_rm *rm = platform_get_drvdata(pdev); + + misc_deregister(&rm->miscdev); +} + static const struct of_device_id gunyah_rm_of_match[] = { { .compatible = "gunyah-resource-manager" }, {} @@ -713,6 +763,7 @@ MODULE_DEVICE_TABLE(of, gunyah_rm_of_match); static struct platform_driver gunyah_rm_driver = { .probe = gunyah_rm_probe, + .remove_new = gunyah_rm_remove, .driver = { .name = "gunyah_rsc_mgr", .of_match_table = gunyah_rm_of_match, diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c new file mode 100644 index 000000000000..e9dff733e35e --- /dev/null +++ b/drivers/virt/gunyah/vm_mgr.c @@ -0,0 +1,94 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#define pr_fmt(fmt) "gunyah_vm_mgr: " fmt + +#include +#include +#include +#include + +#include + +#include "rsc_mgr.h" +#include "vm_mgr.h" + +static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) +{ + struct gunyah_vm *ghvm; + + ghvm = kzalloc(sizeof(*ghvm), GFP_KERNEL); + if (!ghvm) + return ERR_PTR(-ENOMEM); + + ghvm->parent = gunyah_rm_get(rm); + ghvm->rm = rm; + + return ghvm; +} + +static int gunyah_vm_release(struct inode *inode, struct file *filp) +{ + struct gunyah_vm *ghvm = filp->private_data; + + gunyah_rm_put(ghvm->rm); + kfree(ghvm); + return 0; +} + +static const struct file_operations gunyah_vm_fops = { + .owner = THIS_MODULE, + .release = gunyah_vm_release, + .llseek = noop_llseek, +}; + +static long gunyah_dev_ioctl_create_vm(struct gunyah_rm *rm, unsigned long arg) +{ + struct gunyah_vm *ghvm; + struct file *file; + int fd, err; + + /* arg reserved for future use. */ + if (arg) + return -EINVAL; + + ghvm = gunyah_vm_alloc(rm); + if (IS_ERR(ghvm)) + return PTR_ERR(ghvm); + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) { + err = fd; + goto err_destroy_vm; + } + + file = anon_inode_getfile("gunyah-vm", &gunyah_vm_fops, ghvm, O_RDWR); + if (IS_ERR(file)) { + err = PTR_ERR(file); + goto err_put_fd; + } + + fd_install(fd, file); + + return fd; + +err_put_fd: + put_unused_fd(fd); +err_destroy_vm: + gunyah_rm_put(ghvm->rm); + kfree(ghvm); + return err; +} + +long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, + unsigned long arg) +{ + switch (cmd) { + case GUNYAH_CREATE_VM: + return gunyah_dev_ioctl_create_vm(rm, arg); + default: + return -ENOTTY; + } +} diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h new file mode 100644 index 000000000000..50790d402676 --- /dev/null +++ b/drivers/virt/gunyah/vm_mgr.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _GUNYAH_VM_MGR_PRIV_H +#define _GUNYAH_VM_MGR_PRIV_H + +#include + +#include + +#include "rsc_mgr.h" + +long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, + unsigned long arg); + +/** + * struct gunyah_vm - Main representation of a Gunyah Virtual machine + * @rm: Pointer to the resource manager struct to make RM calls + * @parent: For logging + */ +struct gunyah_vm { + struct gunyah_rm *rm; + struct device *parent; +}; + +#endif diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h new file mode 100644 index 000000000000..ac338ec4b85d --- /dev/null +++ b/include/uapi/linux/gunyah.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */ +/* + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#ifndef _UAPI_LINUX_GUNYAH_H +#define _UAPI_LINUX_GUNYAH_H + +/* + * Userspace interface for /dev/gunyah - gunyah based virtual machine + */ + +#include +#include + +#define GUNYAH_IOCTL_TYPE 'G' + +/* + * ioctls for /dev/gunyah fds: + */ +#define GUNYAH_CREATE_VM _IO(GUNYAH_IOCTL_TYPE, 0x0) /* Returns a Gunyah VM fd */ + +#endif From patchwork Tue Jan 9 19:37:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761569 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C316F3EA74; Tue, 9 Jan 2024 19:38:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="KlGrxGc5" Received: from pps.filterd (m0279867.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409EjSYZ004292; Tue, 9 Jan 2024 19:37:56 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=jdOcH+5uREAma+X+LuNjKDUhpy77XJ6kXrkSC1c7CX8 =; b=KlGrxGc5OBrYUf9bDGI4fqF+5xX3L9z+7BdCf3wizObiiGWcQ51rnfQr6Sp 0PRmrqZX1RhiDjht6PYl5TT/p3M6JVBAaMg9gsL/uQym5AuS3KWh9JO6q13aHVH3 JOw0AuhV12GqaiezNXvotX0cCR3HLW5GHcl4QnvU7u83QjgOyiVMQSgak8xOMn6G 9q0PBaNGpoNzQsY84RM6Vo6BZ0Ox6Hnbr4yZjnp1UQnuhomekFk6vv5f2ygpCKwk p1Rfhs1OJ0wviwLX+eyThMHMtDftjjYntBOfO1f9T9iUCTHWeT3H+zWMNqDk4M/s SpeT4Gvhn/YpDt5HGHfITcvRfCQ== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vgxxbhtdq-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:56 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409JbtFk024536 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:55 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:54 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:47 -0800 Subject: [PATCH v16 09/34] gunyah: rsc_mgr: Add VM lifecycle RPC Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-9-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: 
nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: l8dGEkiNtnOwwtVWaPmjPcS74FvkOAye X-Proofpoint-GUID: l8dGEkiNtnOwwtVWaPmjPcS74FvkOAye X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=999 lowpriorityscore=0 priorityscore=1501 bulkscore=0 phishscore=0 spamscore=0 clxscore=1015 mlxscore=0 adultscore=0 impostorscore=0 malwarescore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Add Gunyah Resource Manager RPC interfaces to launch an unauthenticated virtual machine. Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 2 +- drivers/virt/gunyah/rsc_mgr.h | 78 +++++++++++++ drivers/virt/gunyah/rsc_mgr_rpc.c | 238 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 317 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index ceccbbe68b38..47f1fae5419b 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -gunyah_rsc_mgr-y += rsc_mgr.o vm_mgr.o +gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o diff --git a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h index 21318ef25040..205b9ea735e5 100644 --- a/drivers/virt/gunyah/rsc_mgr.h +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -20,6 +20,84 @@ int gunyah_rm_notifier_unregister(struct gunyah_rm *rm, struct device *gunyah_rm_get(struct gunyah_rm *rm); void gunyah_rm_put(struct gunyah_rm *rm); +struct gunyah_rm_vm_exited_payload { + __le16 vmid; + __le16 exit_type; + __le32 exit_reason_size; + u8 exit_reason[]; +} __packed; + +enum gunyah_rm_notification_id { + /* clang-format off */ + GUNYAH_RM_NOTIFICATION_VM_EXITED = 0x56100001, + GUNYAH_RM_NOTIFICATION_VM_STATUS = 0x56100008, + /* clang-format on */ +}; + +enum gunyah_rm_vm_status { + /* clang-format off */ + GUNYAH_RM_VM_STATUS_NO_STATE = 0, + GUNYAH_RM_VM_STATUS_INIT = 1, + GUNYAH_RM_VM_STATUS_READY = 2, + GUNYAH_RM_VM_STATUS_RUNNING = 3, + GUNYAH_RM_VM_STATUS_PAUSED = 4, + GUNYAH_RM_VM_STATUS_LOAD = 5, + GUNYAH_RM_VM_STATUS_AUTH = 6, + GUNYAH_RM_VM_STATUS_INIT_FAILED = 8, + GUNYAH_RM_VM_STATUS_EXITED = 9, + GUNYAH_RM_VM_STATUS_RESETTING = 10, + GUNYAH_RM_VM_STATUS_RESET = 11, + /* clang-format on */ +}; + +struct gunyah_rm_vm_status_payload { + __le16 vmid; + u16 reserved; + u8 vm_status; + u8 os_status; + __le16 app_status; +} __packed; + +int gunyah_rm_alloc_vmid(struct gunyah_rm *rm, u16 vmid); +int gunyah_rm_dealloc_vmid(struct gunyah_rm *rm, u16 vmid); +int gunyah_rm_vm_reset(struct gunyah_rm *rm, u16 vmid); +int gunyah_rm_vm_start(struct gunyah_rm *rm, u16 vmid); +int gunyah_rm_vm_stop(struct gunyah_rm *rm, u16 vmid); + +enum gunyah_rm_vm_auth_mechanism { + /* clang-format off */ + GUNYAH_RM_VM_AUTH_NONE = 0, + GUNYAH_RM_VM_AUTH_QCOM_PIL_ELF = 1, + GUNYAH_RM_VM_AUTH_QCOM_ANDROID_PVM = 2, + /* clang-format on */ +}; + +int gunyah_rm_vm_configure(struct gunyah_rm *rm, u16 vmid, + enum gunyah_rm_vm_auth_mechanism auth_mechanism, + u32 mem_handle, u64 image_offset, u64 image_size, + u64 dtb_offset, u64 dtb_size); +int gunyah_rm_vm_init(struct gunyah_rm *rm, u16 
vmid); + +struct gunyah_rm_hyp_resource { + u8 type; + u8 reserved; + __le16 partner_vmid; + __le32 resource_handle; + __le32 resource_label; + __le64 cap_id; + __le32 virq_handle; + __le32 virq; + __le64 base; + __le64 size; +} __packed; + +struct gunyah_rm_hyp_resources { + __le32 n_entries; + struct gunyah_rm_hyp_resource entries[]; +} __packed; + +int gunyah_rm_get_hyp_resources(struct gunyah_rm *rm, u16 vmid, + struct gunyah_rm_hyp_resources **resources); int gunyah_rm_call(struct gunyah_rm *rsc_mgr, u32 message_id, const void *req_buf, size_t req_buf_size, void **resp_buf, diff --git a/drivers/virt/gunyah/rsc_mgr_rpc.c b/drivers/virt/gunyah/rsc_mgr_rpc.c new file mode 100644 index 000000000000..141ce0145e91 --- /dev/null +++ b/drivers/virt/gunyah/rsc_mgr_rpc.c @@ -0,0 +1,238 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include "rsc_mgr.h" + +/* Message IDs: VM Management */ +/* clang-format off */ +#define GUNYAH_RM_RPC_VM_ALLOC_VMID 0x56000001 +#define GUNYAH_RM_RPC_VM_DEALLOC_VMID 0x56000002 +#define GUNYAH_RM_RPC_VM_START 0x56000004 +#define GUNYAH_RM_RPC_VM_STOP 0x56000005 +#define GUNYAH_RM_RPC_VM_RESET 0x56000006 +#define GUNYAH_RM_RPC_VM_CONFIG_IMAGE 0x56000009 +#define GUNYAH_RM_RPC_VM_INIT 0x5600000B +#define GUNYAH_RM_RPC_VM_GET_HYP_RESOURCES 0x56000020 +/* clang-format on */ + +struct gunyah_rm_vm_common_vmid_req { + __le16 vmid; + __le16 _padding; +} __packed; + +/* Call: VM_ALLOC */ +struct gunyah_rm_vm_alloc_vmid_resp { + __le16 vmid; + __le16 _padding; +} __packed; + +/* Call: VM_STOP */ +#define GUNYAH_RM_VM_STOP_FLAG_FORCE_STOP BIT(0) + +#define GUNYAH_RM_VM_STOP_REASON_FORCE_STOP 3 + +struct gunyah_rm_vm_stop_req { + __le16 vmid; + u8 flags; + u8 _padding; + __le32 stop_reason; +} __packed; + +/* Call: VM_CONFIG_IMAGE */ +struct gunyah_rm_vm_config_image_req { + __le16 vmid; + __le16 auth_mech; + __le32 mem_handle; + __le64 image_offset; + __le64 image_size; + __le64 dtb_offset; + __le64 dtb_size; +} __packed; + +/* + * Several RM calls take only a VMID as a parameter and give only standard + * response back. Deduplicate boilerplate code by using this common call. + */ +static int gunyah_rm_common_vmid_call(struct gunyah_rm *rm, u32 message_id, + u16 vmid) +{ + struct gunyah_rm_vm_common_vmid_req req_payload = { + .vmid = cpu_to_le16(vmid), + }; + + return gunyah_rm_call(rm, message_id, &req_payload, sizeof(req_payload), + NULL, NULL); +} + +/** + * gunyah_rm_alloc_vmid() - Allocate a new VM in Gunyah. Returns the VM identifier. + * @rm: Handle to a Gunyah resource manager + * @vmid: Use 0 to dynamically allocate a VM. A reserved VMID can be supplied + * to request allocation of a platform-defined VM. 
+ * + * Return: the allocated VMID or negative value on error + */ +int gunyah_rm_alloc_vmid(struct gunyah_rm *rm, u16 vmid) +{ + struct gunyah_rm_vm_common_vmid_req req_payload = { + .vmid = cpu_to_le16(vmid), + }; + struct gunyah_rm_vm_alloc_vmid_resp *resp_payload; + size_t resp_size; + void *resp; + int ret; + + ret = gunyah_rm_call(rm, GUNYAH_RM_RPC_VM_ALLOC_VMID, &req_payload, + sizeof(req_payload), &resp, &resp_size); + if (ret) + return ret; + + if (!vmid) { + resp_payload = resp; + ret = le16_to_cpu(resp_payload->vmid); + kfree(resp); + } + + return ret; +} + +/** + * gunyah_rm_dealloc_vmid() - Dispose of a VMID + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier allocated with gunyah_rm_alloc_vmid + */ +int gunyah_rm_dealloc_vmid(struct gunyah_rm *rm, u16 vmid) +{ + return gunyah_rm_common_vmid_call(rm, GUNYAH_RM_RPC_VM_DEALLOC_VMID, + vmid); +} + +/** + * gunyah_rm_vm_reset() - Reset a VM's resources + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier allocated with gunyah_rm_alloc_vmid + * + * As part of tearing down the VM, request RM to clean up all the VM resources + * associated with the VM. Only after this, Linux can clean up all the + * references it maintains to resources. + */ +int gunyah_rm_vm_reset(struct gunyah_rm *rm, u16 vmid) +{ + return gunyah_rm_common_vmid_call(rm, GUNYAH_RM_RPC_VM_RESET, vmid); +} + +/** + * gunyah_rm_vm_start() - Move a VM into "ready to run" state + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier allocated with gunyah_rm_alloc_vmid + * + * On VMs which use proxy scheduling, vcpu_run is needed to actually run the VM. + * On VMs which use Gunyah's scheduling, the vCPUs start executing in accordance with Gunyah + * scheduling policies. + */ +int gunyah_rm_vm_start(struct gunyah_rm *rm, u16 vmid) +{ + return gunyah_rm_common_vmid_call(rm, GUNYAH_RM_RPC_VM_START, vmid); +} + +/** + * gunyah_rm_vm_stop() - Send a request to Resource Manager VM to forcibly stop a VM. + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier allocated with gunyah_rm_alloc_vmid + */ +int gunyah_rm_vm_stop(struct gunyah_rm *rm, u16 vmid) +{ + struct gunyah_rm_vm_stop_req req_payload = { + .vmid = cpu_to_le16(vmid), + .flags = GUNYAH_RM_VM_STOP_FLAG_FORCE_STOP, + .stop_reason = cpu_to_le32(GUNYAH_RM_VM_STOP_REASON_FORCE_STOP), + }; + + return gunyah_rm_call(rm, GUNYAH_RM_RPC_VM_STOP, &req_payload, + sizeof(req_payload), NULL, NULL); +} + +/** + * gunyah_rm_vm_configure() - Prepare a VM to start and provide the common + * configuration needed by RM to configure a VM + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier allocated with gunyah_rm_alloc_vmid + * @auth_mechanism: Authentication mechanism used by resource manager to verify + * the virtual machine + * @mem_handle: Handle to a previously shared memparcel that contains all parts + * of the VM image subject to authentication. + * @image_offset: Start address of VM image, relative to the start of memparcel + * @image_size: Size of the VM image + * @dtb_offset: Start address of the devicetree binary with VM configuration, + * relative to start of memparcel. + * @dtb_size: Maximum size of devicetree binary. 
+ */ +int gunyah_rm_vm_configure(struct gunyah_rm *rm, u16 vmid, + enum gunyah_rm_vm_auth_mechanism auth_mechanism, + u32 mem_handle, u64 image_offset, u64 image_size, + u64 dtb_offset, u64 dtb_size) +{ + struct gunyah_rm_vm_config_image_req req_payload = { + .vmid = cpu_to_le16(vmid), + .auth_mech = cpu_to_le16(auth_mechanism), + .mem_handle = cpu_to_le32(mem_handle), + .image_offset = cpu_to_le64(image_offset), + .image_size = cpu_to_le64(image_size), + .dtb_offset = cpu_to_le64(dtb_offset), + .dtb_size = cpu_to_le64(dtb_size), + }; + + return gunyah_rm_call(rm, GUNYAH_RM_RPC_VM_CONFIG_IMAGE, &req_payload, + sizeof(req_payload), NULL, NULL); +} + +/** + * gunyah_rm_vm_init() - Move the VM to initialized state. + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier + * + * RM will allocate needed resources for the VM. + */ +int gunyah_rm_vm_init(struct gunyah_rm *rm, u16 vmid) +{ + return gunyah_rm_common_vmid_call(rm, GUNYAH_RM_RPC_VM_INIT, vmid); +} + +/** + * gunyah_rm_get_hyp_resources() - Retrieve hypervisor resources (capabilities) associated with a VM + * @rm: Handle to a Gunyah resource manager + * @vmid: VMID of the other VM to get the resources of + * @resources: Set by gunyah_rm_get_hyp_resources and contains the returned hypervisor resources. + * Caller must free the resources pointer if successful. + */ +int gunyah_rm_get_hyp_resources(struct gunyah_rm *rm, u16 vmid, + struct gunyah_rm_hyp_resources **resources) +{ + struct gunyah_rm_vm_common_vmid_req req_payload = { + .vmid = cpu_to_le16(vmid), + }; + struct gunyah_rm_hyp_resources *resp; + size_t resp_size; + int ret; + + ret = gunyah_rm_call(rm, GUNYAH_RM_RPC_VM_GET_HYP_RESOURCES, + &req_payload, sizeof(req_payload), (void **)&resp, + &resp_size); + if (ret) + return ret; + + if (!resp_size) + return -EBADMSG; + + if (resp_size < struct_size(resp, entries, 0) || + resp_size != + struct_size(resp, entries, le32_to_cpu(resp->n_entries))) { + kfree(resp); + return -EBADMSG; + } + + *resources = resp; + return 0; +} From patchwork Tue Jan 9 19:37:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761094 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 096D43D961; Tue, 9 Jan 2024 19:38:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="TDAArpDa" Received: from pps.filterd (m0279868.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409GaJ18010710; Tue, 9 Jan 2024 19:37:57 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=xWdZbdG7dSafWq3cabzRvyRx9NAZngj+f9YcIbeZBfY =; b=TDAArpDahUKJYT7Ji4NE1rBHgi9T9MX6/TKw57bQQmvVzjuO7uQw0AHQLOS Ru7UbgJaSCXKlurAPEVVoULnJ8dyMtzYsYRJfxwGW1YVhFS+laYSAyr1cEWK1wH7 FnWH/fBjD7+iEyaaUwUeonBaw23vg0oR0ZEkjdz57ZBxnC4Imxor1MArNGW8BoE3 i4Lx7BTIvLf/7bOvorKdKiJggh3qm0CJUl7xYCguAHA0oHGKSyUmTMnluXLj1nKd 
k/obzOk5EFohZVl1J4FW+JLwuFaLDd/sgPojysn7Q1gIHCKUSMZ+AbAn+ZzSZC2j 39vNCH9hRwOh7UuVwR+iRBmpwdg== Received: from nasanppmta03.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9ta0dw8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:57 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409JbuQD011419 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:56 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:55 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:48 -0800 Subject: [PATCH v16 10/34] gunyah: vm_mgr: Add VM start/stop Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-10-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: xkWEhD3cCcd4_26fH5x9GQrV4OQNq86C X-Proofpoint-ORIG-GUID: xkWEhD3cCcd4_26fH5x9GQrV4OQNq86C X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 impostorscore=0 malwarescore=0 mlxscore=0 adultscore=0 mlxlogscore=999 bulkscore=0 priorityscore=1501 lowpriorityscore=0 clxscore=1015 phishscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Add ioctl to trigger the start of a Gunyah virtual machine. Subsequent commits will provide memory to the virtual machine and add ability to interact with the resources (capabilities) of the virtual machine. Although start of the virtual machine can be done implicitly on the first vCPU run for proxy-schedule virtual machines, there is a non-trivial number of calls to Gunyah: a more precise error can be given to userspace which calls VM_START without looking at kernel logs because userspace can detect that the VM start failed instead of "couldn't run the vCPU". 
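
In other words, an explicit GUNYAH_VM_START lets userspace find out synchronously whether the VM could be brought up. A minimal sketch of the intended call is below (illustrative only, not part of this patch); it assumes a VM fd obtained from GUNYAH_CREATE_VM and the uapi header from this series. Until the memory and image configuration added by later patches is in place, the start is expected to fail, and this ioctl is where that failure is reported:

/*
 * Sketch only: request that a VM created with GUNYAH_CREATE_VM be started.
 * GUNYAH_VM_START carries no payload; on failure ioctl() returns -1 and
 * errno describes why the VM could not reach the running state.
 */
#include <stdio.h>
#include <sys/ioctl.h>

#include <linux/gunyah.h>

static int start_vm(int vm_fd)
{
	if (ioctl(vm_fd, GUNYAH_VM_START, 0) < 0) {
		perror("GUNYAH_VM_START");
		return -1;
	}

	return 0;
}
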
Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 198 +++++++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/vm_mgr.h | 19 +++++ include/uapi/linux/gunyah.h | 5 ++ 3 files changed, 222 insertions(+) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index e9dff733e35e..f6e6b5669aae 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -15,6 +15,68 @@ #include "rsc_mgr.h" #include "vm_mgr.h" +static int gunyah_vm_rm_notification_status(struct gunyah_vm *ghvm, void *data) +{ + struct gunyah_rm_vm_status_payload *payload = data; + + if (le16_to_cpu(payload->vmid) != ghvm->vmid) + return NOTIFY_OK; + + /* All other state transitions are synchronous to a corresponding RM call */ + if (payload->vm_status == GUNYAH_RM_VM_STATUS_RESET) { + down_write(&ghvm->status_lock); + ghvm->vm_status = payload->vm_status; + up_write(&ghvm->status_lock); + wake_up(&ghvm->vm_status_wait); + } + + return NOTIFY_DONE; +} + +static int gunyah_vm_rm_notification_exited(struct gunyah_vm *ghvm, void *data) +{ + struct gunyah_rm_vm_exited_payload *payload = data; + + if (le16_to_cpu(payload->vmid) != ghvm->vmid) + return NOTIFY_OK; + + down_write(&ghvm->status_lock); + ghvm->vm_status = GUNYAH_RM_VM_STATUS_EXITED; + up_write(&ghvm->status_lock); + wake_up(&ghvm->vm_status_wait); + + return NOTIFY_DONE; +} + +static int gunyah_vm_rm_notification(struct notifier_block *nb, + unsigned long action, void *data) +{ + struct gunyah_vm *ghvm = container_of(nb, struct gunyah_vm, nb); + + switch (action) { + case GUNYAH_RM_NOTIFICATION_VM_STATUS: + return gunyah_vm_rm_notification_status(ghvm, data); + case GUNYAH_RM_NOTIFICATION_VM_EXITED: + return gunyah_vm_rm_notification_exited(ghvm, data); + default: + return NOTIFY_OK; + } +} + +static void gunyah_vm_stop(struct gunyah_vm *ghvm) +{ + int ret; + + if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING) { + ret = gunyah_rm_vm_stop(ghvm->rm, ghvm->vmid); + if (ret) + dev_warn(ghvm->parent, "Failed to stop VM: %d\n", ret); + } + + wait_event(ghvm->vm_status_wait, + ghvm->vm_status != GUNYAH_RM_VM_STATUS_RUNNING); +} + static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) { struct gunyah_vm *ghvm; @@ -24,14 +86,148 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) return ERR_PTR(-ENOMEM); ghvm->parent = gunyah_rm_get(rm); + ghvm->vmid = GUNYAH_VMID_INVAL; ghvm->rm = rm; + init_rwsem(&ghvm->status_lock); + init_waitqueue_head(&ghvm->vm_status_wait); + ghvm->vm_status = GUNYAH_RM_VM_STATUS_NO_STATE; + return ghvm; } +static int gunyah_vm_start(struct gunyah_vm *ghvm) +{ + int ret; + + down_write(&ghvm->status_lock); + if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE) { + up_write(&ghvm->status_lock); + return 0; + } + + ghvm->nb.notifier_call = gunyah_vm_rm_notification; + ret = gunyah_rm_notifier_register(ghvm->rm, &ghvm->nb); + if (ret) + goto err; + + ret = gunyah_rm_alloc_vmid(ghvm->rm, 0); + if (ret < 0) { + gunyah_rm_notifier_unregister(ghvm->rm, &ghvm->nb); + goto err; + } + ghvm->vmid = ret; + ghvm->vm_status = GUNYAH_RM_VM_STATUS_LOAD; + + ret = gunyah_rm_vm_configure(ghvm->rm, ghvm->vmid, ghvm->auth, 0, 0, 0, + 0, 0); + if (ret) { + dev_warn(ghvm->parent, "Failed to configure VM: %d\n", ret); + goto err; + } + + ret = gunyah_rm_vm_init(ghvm->rm, ghvm->vmid); + if (ret) { + ghvm->vm_status = GUNYAH_RM_VM_STATUS_INIT_FAILED; + dev_warn(ghvm->parent, "Failed to initialize 
VM: %d\n", ret); + goto err; + } + ghvm->vm_status = GUNYAH_RM_VM_STATUS_READY; + + ret = gunyah_rm_vm_start(ghvm->rm, ghvm->vmid); + if (ret) { + dev_warn(ghvm->parent, "Failed to start VM: %d\n", ret); + goto err; + } + + ghvm->vm_status = GUNYAH_RM_VM_STATUS_RUNNING; + up_write(&ghvm->status_lock); + return ret; +err: + /* gunyah_vm_free will handle releasing resources and reclaiming memory */ + up_write(&ghvm->status_lock); + return ret; +} + +static int gunyah_vm_ensure_started(struct gunyah_vm *ghvm) +{ + int ret; + + ret = down_read_interruptible(&ghvm->status_lock); + if (ret) + return ret; + + /* Unlikely because VM is typically started */ + if (unlikely(ghvm->vm_status == GUNYAH_RM_VM_STATUS_NO_STATE)) { + up_read(&ghvm->status_lock); + ret = gunyah_vm_start(ghvm); + if (ret) + return ret; + /** gunyah_vm_start() is guaranteed to bring status out of + * GUNYAH_RM_VM_STATUS_LOAD, thus infinitely recursive call is not + * possible + */ + return gunyah_vm_ensure_started(ghvm); + } + + /* Unlikely because VM is typically running */ + if (unlikely(ghvm->vm_status != GUNYAH_RM_VM_STATUS_RUNNING)) + ret = -ENODEV; + + up_read(&ghvm->status_lock); + return ret; +} + +static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, + unsigned long arg) +{ + struct gunyah_vm *ghvm = filp->private_data; + long r; + + switch (cmd) { + case GUNYAH_VM_START: { + r = gunyah_vm_ensure_started(ghvm); + break; + } + default: + r = -ENOTTY; + break; + } + + return r; +} + static int gunyah_vm_release(struct inode *inode, struct file *filp) { struct gunyah_vm *ghvm = filp->private_data; + int ret; + + /** + * We might race with a VM exit notification, but that's ok: + * gh_rm_vm_stop() will just return right away. + */ + if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING) + gunyah_vm_stop(ghvm); + + if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE && + ghvm->vm_status != GUNYAH_RM_VM_STATUS_LOAD && + ghvm->vm_status != GUNYAH_RM_VM_STATUS_RESET) { + ret = gunyah_rm_vm_reset(ghvm->rm, ghvm->vmid); + if (ret) + dev_err(ghvm->parent, "Failed to reset the vm: %d\n", + ret); + wait_event(ghvm->vm_status_wait, + ghvm->vm_status == GUNYAH_RM_VM_STATUS_RESET); + } + + if (ghvm->vm_status > GUNYAH_RM_VM_STATUS_NO_STATE) { + gunyah_rm_notifier_unregister(ghvm->rm, &ghvm->nb); + + ret = gunyah_rm_dealloc_vmid(ghvm->rm, ghvm->vmid); + if (ret) + dev_warn(ghvm->parent, + "Failed to deallocate vmid: %d\n", ret); + } gunyah_rm_put(ghvm->rm); kfree(ghvm); @@ -40,6 +236,8 @@ static int gunyah_vm_release(struct inode *inode, struct file *filp) static const struct file_operations gunyah_vm_fops = { .owner = THIS_MODULE, + .unlocked_ioctl = gunyah_vm_ioctl, + .compat_ioctl = compat_ptr_ioctl, .release = gunyah_vm_release, .llseek = noop_llseek, }; diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 50790d402676..e6cc9aead0b6 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -7,6 +7,8 @@ #define _GUNYAH_VM_MGR_PRIV_H #include +#include +#include #include @@ -17,12 +19,29 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, /** * struct gunyah_vm - Main representation of a Gunyah Virtual machine + * @vmid: Gunyah's VMID for this virtual machine * @rm: Pointer to the resource manager struct to make RM calls * @parent: For logging + * @nb: Notifier block for RM notifications + * @vm_status: Current state of the VM, as last reported by RM + * @vm_status_wait: Wait queue for status @vm_status changes + * @status_lock: Serializing state transitions 
+ * @auth: Authentication mechanism to be used by resource manager when + * launching the VM + * + * Members are grouped by hot path. */ struct gunyah_vm { + u16 vmid; struct gunyah_rm *rm; + + struct notifier_block nb; + enum gunyah_rm_vm_status vm_status; + wait_queue_head_t vm_status_wait; + struct rw_semaphore status_lock; + struct device *parent; + enum gunyah_rm_vm_auth_mechanism auth; }; #endif diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index ac338ec4b85d..31e7f79a6c39 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -20,4 +20,9 @@ */ #define GUNYAH_CREATE_VM _IO(GUNYAH_IOCTL_TYPE, 0x0) /* Returns a Gunyah VM fd */ +/* + * ioctls for gunyah-vm fds (returned by GUNYAH_CREATE_VM) + */ +#define GUNYAH_VM_START _IO(GUNYAH_IOCTL_TYPE, 0x3) + #endif From patchwork Tue Jan 9 19:37:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761573 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4C3223D96B; Tue, 9 Jan 2024 19:38:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="Sjt98MuC" Received: from pps.filterd (m0279873.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409D9R98015182; Tue, 9 Jan 2024 19:37:58 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=zaO4CN5sb0ZBvCi/8rrxssP7owlddgLH9ngiDzhb0F8 =; b=Sjt98MuCOMs/tT++NREfn+nkZgrQ/4wdoIOHtvC0hwulbxMqGo48ahHNIsj iyf5nTWs/L0jnK6BDZgDgWBAczooHX4qrt5ptIK9R77YM4pGuCTqLKNAgrPCAjtZ Gk1qOzHyOWTMulZ4Own1ROhD4hy/YSqSDNd9nGrfvQGjD/TZhbzx1IYgxjgplAOc HS6vw6j6FoAYmJDvTFyxLSShn+si+FBFAGkJCDwOSECuTd9BxAHCMiAXHvt0x5hJ YlOAU5FnEbZU6it6NWNiKotEwmTVGNIe8BMYQL7DkFX8RYh3C4PMuVF/VLnxn8Nq 13SCwoBwRUX8TjdEBMmdcVAOmgQ== Received: from nasanppmta03.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh3g699p4-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:57 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409JbuHp011429 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:56 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:55 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:49 -0800 Subject: [PATCH v16 11/34] virt: gunyah: Translate gh_rm_hyp_resource into gunyah_resource Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-11-634904bf4ce9@quicinc.com> References: 
<20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: uIJ4ZLH2FZ4hRQD_l7p3KGjz_2w7OoFa X-Proofpoint-ORIG-GUID: uIJ4ZLH2FZ4hRQD_l7p3KGjz_2w7OoFa X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 mlxscore=0 bulkscore=0 spamscore=0 malwarescore=0 phishscore=0 clxscore=1015 impostorscore=0 suspectscore=0 lowpriorityscore=0 mlxlogscore=867 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 When booting a Gunyah virtual machine, the host VM may gain capabilities to interact with resources for the guest virtual machine. Examples of such resources are vCPUs or message queues. To use those resources, we need to translate the RM response into a gunyah_resource structure which are useful to Linux drivers. Presently, Linux drivers need only to know the type of resource, the capability ID, and an interrupt. On ARM64 systems, the interrupt reported by Gunyah is the GIC interrupt ID number and always a SPI or extended SPI. Signed-off-by: Elliot Berman --- arch/arm64/include/asm/gunyah.h | 36 +++++++++ drivers/virt/gunyah/rsc_mgr.c | 175 +++++++++++++++++++++++++++++++++++++++- drivers/virt/gunyah/rsc_mgr.h | 5 ++ include/linux/gunyah.h | 3 + 4 files changed, 218 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/gunyah.h b/arch/arm64/include/asm/gunyah.h new file mode 100644 index 000000000000..0cd3debe22b6 --- /dev/null +++ b/arch/arm64/include/asm/gunyah.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ +#ifndef _ASM_GUNYAH_H +#define _ASM_GUNYAH_H + +#include +#include + +static inline int arch_gunyah_fill_irq_fwspec_params(u32 virq, + struct irq_fwspec *fwspec) +{ + /* Assume that Gunyah gave us an SPI or ESPI; defensively check it */ + if (WARN(virq < 32, "Unexpected virq: %d\n", virq)) { + return -EINVAL; + } else if (virq <= 1019) { + fwspec->param_count = 3; + fwspec->param[0] = 0; /* GIC_SPI */ + fwspec->param[1] = virq - 32; /* virq 32 -> SPI 0 */ + fwspec->param[2] = IRQ_TYPE_EDGE_RISING; + } else if (WARN(virq < 4096, "Unexpected virq: %d\n", virq)) { + return -EINVAL; + } else if (virq < 5120) { + fwspec->param_count = 3; + fwspec->param[0] = 2; /* GIC_ESPI */ + fwspec->param[1] = virq - 4096; /* virq 4096 -> ESPI 0 */ + fwspec->param[2] = IRQ_TYPE_EDGE_RISING; + } else { + WARN(1, "Unexpected virq: %d\n", virq); + return -EINVAL; + } + return 0; +} + +#endif diff --git a/drivers/virt/gunyah/rsc_mgr.c b/drivers/virt/gunyah/rsc_mgr.c index 45f9514cfe0e..efc0970ae4cc 100644 --- a/drivers/virt/gunyah/rsc_mgr.c +++ b/drivers/virt/gunyah/rsc_mgr.c @@ -9,9 +9,12 @@ #include #include #include +#include #include #include +#include + #include "rsc_mgr.h" #include "vm_mgr.h" @@ -121,6 +124,7 @@ struct gunyah_rm_message { * @send_ready: completed when we know Tx message queue can take more messages * @nh: notifier chain for clients interested in RM notification messages * @miscdev: /dev/gunyah + * @irq_domain: Domain to translate Gunyah hwirqs to Linux irqs */ struct gunyah_rm { struct device *dev; @@ -138,6 +142,7 @@ struct gunyah_rm { struct blocking_notifier_head nh; struct miscdevice miscdev; + struct irq_domain *irq_domain; }; /** @@ -178,6 +183,143 @@ static inline int gunyah_rm_error_remap(enum gunyah_rm_error rm_error) } } +struct gunyah_irq_chip_data { + u32 gunyah_virq; +}; + +static struct irq_chip gunyah_rm_irq_chip = { + /* clang-format off */ + .name = "Gunyah", + .irq_enable = irq_chip_enable_parent, + .irq_disable = irq_chip_disable_parent, + .irq_ack = irq_chip_ack_parent, + .irq_mask = irq_chip_mask_parent, + .irq_mask_ack = irq_chip_mask_ack_parent, + .irq_unmask = irq_chip_unmask_parent, + .irq_eoi = irq_chip_eoi_parent, + .irq_set_affinity = irq_chip_set_affinity_parent, + .irq_set_type = irq_chip_set_type_parent, + .irq_set_wake = irq_chip_set_wake_parent, + .irq_set_vcpu_affinity = irq_chip_set_vcpu_affinity_parent, + .irq_retrigger = irq_chip_retrigger_hierarchy, + .irq_get_irqchip_state = irq_chip_get_parent_state, + .irq_set_irqchip_state = irq_chip_set_parent_state, + .flags = IRQCHIP_SET_TYPE_MASKED | + IRQCHIP_SKIP_SET_WAKE | + IRQCHIP_MASK_ON_SUSPEND, + /* clang-format on */ +}; + +static int gunyah_rm_irq_domain_alloc(struct irq_domain *d, unsigned int virq, + unsigned int nr_irqs, void *arg) +{ + struct gunyah_irq_chip_data *chip_data, *spec = arg; + struct irq_fwspec parent_fwspec = {}; + struct gunyah_rm *rm = d->host_data; + u32 gunyah_virq = spec->gunyah_virq; + int ret; + + if (nr_irqs != 1) + return -EINVAL; + + chip_data = kzalloc(sizeof(*chip_data), GFP_KERNEL); + if (!chip_data) + return -ENOMEM; + + chip_data->gunyah_virq = gunyah_virq; + + ret = irq_domain_set_hwirq_and_chip(d, virq, chip_data->gunyah_virq, + &gunyah_rm_irq_chip, chip_data); + if (ret) + goto err_free_irq_data; + + parent_fwspec.fwnode = d->parent->fwnode; + ret = arch_gunyah_fill_irq_fwspec_params(chip_data->gunyah_virq, + &parent_fwspec); + if (ret) { + dev_err(rm->dev, "virq translation failed %u: %d\n", + chip_data->gunyah_virq, ret); + goto err_free_irq_data; + } + + 
ret = irq_domain_alloc_irqs_parent(d, virq, nr_irqs, &parent_fwspec); + if (ret) + goto err_free_irq_data; + + return ret; +err_free_irq_data: + kfree(chip_data); + return ret; +} + +static void gunyah_rm_irq_domain_free_single(struct irq_domain *d, + unsigned int virq) +{ + struct irq_data *irq_data; + + irq_data = irq_domain_get_irq_data(d, virq); + if (!irq_data) + return; + + kfree(irq_data->chip_data); + irq_data->chip_data = NULL; +} + +static void gunyah_rm_irq_domain_free(struct irq_domain *d, unsigned int virq, + unsigned int nr_irqs) +{ + unsigned int i; + + for (i = 0; i < nr_irqs; i++) + gunyah_rm_irq_domain_free_single(d, virq); +} + +static const struct irq_domain_ops gunyah_rm_irq_domain_ops = { + .alloc = gunyah_rm_irq_domain_alloc, + .free = gunyah_rm_irq_domain_free, +}; + +struct gunyah_resource * +gunyah_rm_alloc_resource(struct gunyah_rm *rm, + struct gunyah_rm_hyp_resource *hyp_resource) +{ + struct gunyah_resource *ghrsc; + int ret; + + ghrsc = kzalloc(sizeof(*ghrsc), GFP_KERNEL); + if (!ghrsc) + return NULL; + + ghrsc->type = hyp_resource->type; + ghrsc->capid = le64_to_cpu(hyp_resource->cap_id); + ghrsc->irq = IRQ_NOTCONNECTED; + ghrsc->rm_label = le32_to_cpu(hyp_resource->resource_label); + if (hyp_resource->virq) { + struct gunyah_irq_chip_data irq_data = { + .gunyah_virq = le32_to_cpu(hyp_resource->virq), + }; + + ret = irq_domain_alloc_irqs(rm->irq_domain, 1, NUMA_NO_NODE, + &irq_data); + if (ret < 0) { + dev_err(rm->dev, + "Failed to allocate interrupt for resource %d label: %d: %d\n", + ghrsc->type, ghrsc->rm_label, ret); + kfree(ghrsc); + return NULL; + } + ghrsc->irq = ret; + } + + return ghrsc; +} + +void gunyah_rm_free_resource(struct gunyah_resource *ghrsc) +{ + irq_dispose_mapping(ghrsc->irq); + kfree(ghrsc); +} + static int gunyah_rm_init_message_payload(struct gunyah_rm_message *message, const void *msg, size_t hdr_size, size_t msg_size) @@ -712,6 +854,8 @@ static int gunyah_rm_probe_rx_msgq(struct gunyah_rm *rm, static int gunyah_rm_probe(struct platform_device *pdev) { + struct irq_domain *parent_irq_domain; + struct device_node *parent_irq_node; struct gunyah_rm *rm; int ret; @@ -737,15 +881,43 @@ static int gunyah_rm_probe(struct platform_device *pdev) if (ret) return ret; + parent_irq_node = of_irq_find_parent(pdev->dev.of_node); + if (!parent_irq_node) { + dev_err(&pdev->dev, + "Failed to find interrupt parent of resource manager\n"); + return -ENODEV; + } + + parent_irq_domain = irq_find_host(parent_irq_node); + if (!parent_irq_domain) { + dev_err(&pdev->dev, + "Failed to find interrupt parent domain of resource manager\n"); + return -ENODEV; + } + + rm->irq_domain = irq_domain_add_hierarchy(parent_irq_domain, 0, 0, + pdev->dev.of_node, + &gunyah_rm_irq_domain_ops, + NULL); + if (!rm->irq_domain) { + dev_err(&pdev->dev, "Failed to add irq domain\n"); + return -ENODEV; + } + rm->irq_domain->host_data = rm; + + rm->miscdev.parent = &pdev->dev; rm->miscdev.name = "gunyah"; rm->miscdev.minor = MISC_DYNAMIC_MINOR; rm->miscdev.fops = &gunyah_dev_fops; ret = misc_register(&rm->miscdev); if (ret) - return ret; + goto err_irq_domain; return 0; +err_irq_domain: + irq_domain_remove(rm->irq_domain); + return ret; } static void gunyah_rm_remove(struct platform_device *pdev) @@ -753,6 +925,7 @@ static void gunyah_rm_remove(struct platform_device *pdev) struct gunyah_rm *rm = platform_get_drvdata(pdev); misc_deregister(&rm->miscdev); + irq_domain_remove(rm->irq_domain); } static const struct of_device_id gunyah_rm_of_match[] = { diff --git 
a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h index 205b9ea735e5..52711de77bb7 100644 --- a/drivers/virt/gunyah/rsc_mgr.h +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -99,6 +99,11 @@ struct gunyah_rm_hyp_resources { int gunyah_rm_get_hyp_resources(struct gunyah_rm *rm, u16 vmid, struct gunyah_rm_hyp_resources **resources); +struct gunyah_resource * +gunyah_rm_alloc_resource(struct gunyah_rm *rm, + struct gunyah_rm_hyp_resource *hyp_resource); +void gunyah_rm_free_resource(struct gunyah_resource *ghrsc); + int gunyah_rm_call(struct gunyah_rm *rsc_mgr, u32 message_id, const void *req_buf, size_t req_buf_size, void **resp_buf, size_t *resp_buf_size); diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index acd70f982425..ede8abb1b276 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -29,6 +29,9 @@ struct gunyah_resource { enum gunyah_resource_type type; u64 capid; unsigned int irq; + + struct list_head list; + u32 rm_label; }; /******************************************************************************/ From patchwork Tue Jan 9 19:37:50 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761092 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 662A83EA60; Tue, 9 Jan 2024 19:38:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="bnceqlAr" Received: from pps.filterd (m0279871.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409IeRwr005755; Tue, 9 Jan 2024 19:37:58 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=4qiqaM4jJd8zGHPZYjmgu0l1JblGJ6sPIsRdN2U0BMc =; b=bnceqlAru+3cNJwcaW3SUuwQZe81XenMVLM7ojxK3lszLcFa1xlkT9yQg6f wg/5uHBN6NJspj68vcNUgNeKW//jAVUQOh9ca6OTkR3M6ESp2mixPzDovwar7YXE 3I5jV+1sRTXyFL8Hv0Nx4DV+yVmo6IIeg3eTEub2W5RZQnJ0pv6rEbCmP4Pj8+1I DoiPWetPG+dUQUnTmi4CgMuepnHvjOTjvhkcAdo26tZUg1B9DoRikGB2Mig9C8Cb ln4fpxDbwd9cg/8Em4p19BmwLo7tOJz346lmRnWzdFIZGNEiXmDTS+2VyXhxTfBq zCyOppkUcOSdqKVdIloy47IzoDw== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh98m8hqt-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:58 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jbv3C030413 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:57 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:56 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:50 -0800 Subject: [PATCH v16 12/34] virt: gunyah: Add resource tickets Precedence: bulk 
X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-12-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: -_URcr59O8AG3f6gWqPnDYkWBNleK7zF X-Proofpoint-GUID: -_URcr59O8AG3f6gWqPnDYkWBNleK7zF X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 impostorscore=0 spamscore=0 mlxscore=0 priorityscore=1501 phishscore=0 malwarescore=0 mlxlogscore=999 suspectscore=0 clxscore=1015 bulkscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Some VM functions need to acquire Gunyah resources. For instance, Gunyah vCPUs are exposed to the host as a resource. The Gunyah vCPU function will register a resource ticket and be able to interact with the hypervisor once the resource ticket is filled. Resource tickets are the mechanism for functions to acquire ownership of Gunyah resources. Gunyah functions can be created before the VM's resources are created and made available to Linux. A resource ticket identifies a type of resource and a label of a resource which the ticket holder is interested in. Resources are created by Gunyah as configured in the VM's devicetree configuration. Gunyah doesn't process the label and that makes it possible for userspace to create multiple resources with the same label. Resource ticket owners need to be prepared for populate to be called multiple times if userspace created multiple resources with the same label. 
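To make the resource-ticket flow concrete, here is a minimal, non-authoritative sketch of a ticket owner claiming one resource by label with the API added in this patch. The enumerator GUNYAH_RESOURCE_TYPE_VCPU and the example_vcpu wrapper are assumptions for illustration only; the ticket structure, its populate()/unpopulate() callbacks and gunyah_vm_add_resource_ticket() are the interfaces introduced by this series.

#include <linux/container_of.h>
#include <linux/gunyah.h>
#include <linux/module.h>

/* Hedged sketch: claim one resource by label via a resource ticket.
 * GUNYAH_RESOURCE_TYPE_VCPU and struct example_vcpu are assumed names.
 */
struct example_vcpu {
        struct gunyah_resource *rsc;
        struct gunyah_vm_resource_ticket ticket;
};

static bool example_vcpu_populate(struct gunyah_vm_resource_ticket *ticket,
                                  struct gunyah_resource *ghrsc)
{
        struct example_vcpu *vcpu = container_of(ticket, struct example_vcpu, ticket);

        /* populate() may be called more than once if userspace created
         * several resources with the same label; accept only the first.
         */
        if (vcpu->rsc)
                return false;

        vcpu->rsc = ghrsc;
        return true;
}

static void example_vcpu_unpopulate(struct gunyah_vm_resource_ticket *ticket,
                                    struct gunyah_resource *ghrsc)
{
        struct example_vcpu *vcpu = container_of(ticket, struct example_vcpu, ticket);

        /* Stop using the resource before returning; it may be freed next. */
        vcpu->rsc = NULL;
}

static int example_vcpu_claim(struct gunyah_vm *ghvm,
                              struct example_vcpu *vcpu, u32 label)
{
        vcpu->ticket.resource_type = GUNYAH_RESOURCE_TYPE_VCPU; /* assumed name */
        vcpu->ticket.label = label;
        vcpu->ticket.owner = THIS_MODULE;
        vcpu->ticket.populate = example_vcpu_populate;
        vcpu->ticket.unpopulate = example_vcpu_unpopulate;

        return gunyah_vm_add_resource_ticket(ghvm, &vcpu->ticket);
}

When the owner no longer needs the ticket, it calls gunyah_vm_remove_resource_ticket(), which invokes unpopulate() for every resource still attached to the ticket before returning them to the VM's resource list.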
Reviewed-by: Alex Elder Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 128 ++++++++++++++++++++++++++++++++++++++++++- drivers/virt/gunyah/vm_mgr.h | 7 +++ include/linux/gunyah.h | 39 +++++++++++++ 3 files changed, 173 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index f6e6b5669aae..65badcf6357b 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -15,6 +15,106 @@ #include "rsc_mgr.h" #include "vm_mgr.h" +int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, + struct gunyah_vm_resource_ticket *ticket) +{ + struct gunyah_vm_resource_ticket *iter; + struct gunyah_resource *ghrsc, *rsc_iter; + int ret = 0; + + mutex_lock(&ghvm->resources_lock); + list_for_each_entry(iter, &ghvm->resource_tickets, vm_list) { + if (iter->resource_type == ticket->resource_type && + iter->label == ticket->label) { + ret = -EEXIST; + goto out; + } + } + + if (!try_module_get(ticket->owner)) { + ret = -ENODEV; + goto out; + } + + list_add(&ticket->vm_list, &ghvm->resource_tickets); + INIT_LIST_HEAD(&ticket->resources); + + list_for_each_entry_safe(ghrsc, rsc_iter, &ghvm->resources, list) { + if (ghrsc->type == ticket->resource_type && + ghrsc->rm_label == ticket->label) { + if (ticket->populate(ticket, ghrsc)) + list_move(&ghrsc->list, &ticket->resources); + } + } +out: + mutex_unlock(&ghvm->resources_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_vm_add_resource_ticket); + +void gunyah_vm_remove_resource_ticket(struct gunyah_vm *ghvm, + struct gunyah_vm_resource_ticket *ticket) +{ + struct gunyah_resource *ghrsc, *iter; + + mutex_lock(&ghvm->resources_lock); + list_for_each_entry_safe(ghrsc, iter, &ticket->resources, list) { + ticket->unpopulate(ticket, ghrsc); + list_move(&ghrsc->list, &ghvm->resources); + } + + module_put(ticket->owner); + list_del(&ticket->vm_list); + mutex_unlock(&ghvm->resources_lock); +} +EXPORT_SYMBOL_GPL(gunyah_vm_remove_resource_ticket); + +static void gunyah_vm_add_resource(struct gunyah_vm *ghvm, + struct gunyah_resource *ghrsc) +{ + struct gunyah_vm_resource_ticket *ticket; + + mutex_lock(&ghvm->resources_lock); + list_for_each_entry(ticket, &ghvm->resource_tickets, vm_list) { + if (ghrsc->type == ticket->resource_type && + ghrsc->rm_label == ticket->label) { + if (ticket->populate(ticket, ghrsc)) + list_add(&ghrsc->list, &ticket->resources); + else + list_add(&ghrsc->list, &ghvm->resources); + /* unconditonal -- we prevent multiple identical + * resource tickets so there will not be some other + * ticket elsewhere in the list if populate() failed. 
+ */ + goto found; + } + } + list_add(&ghrsc->list, &ghvm->resources); +found: + mutex_unlock(&ghvm->resources_lock); +} + +static void gunyah_vm_clean_resources(struct gunyah_vm *ghvm) +{ + struct gunyah_vm_resource_ticket *ticket, *titer; + struct gunyah_resource *ghrsc, *riter; + + mutex_lock(&ghvm->resources_lock); + if (!list_empty(&ghvm->resource_tickets)) { + dev_warn(ghvm->parent, "Dangling resource tickets:\n"); + list_for_each_entry_safe(ticket, titer, &ghvm->resource_tickets, + vm_list) { + dev_warn(ghvm->parent, " %pS\n", ticket->populate); + gunyah_vm_remove_resource_ticket(ghvm, ticket); + } + } + + list_for_each_entry_safe(ghrsc, riter, &ghvm->resources, list) { + gunyah_rm_free_resource(ghrsc); + } + mutex_unlock(&ghvm->resources_lock); +} + static int gunyah_vm_rm_notification_status(struct gunyah_vm *ghvm, void *data) { struct gunyah_rm_vm_status_payload *payload = data; @@ -92,13 +192,18 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) init_rwsem(&ghvm->status_lock); init_waitqueue_head(&ghvm->vm_status_wait); ghvm->vm_status = GUNYAH_RM_VM_STATUS_NO_STATE; + mutex_init(&ghvm->resources_lock); + INIT_LIST_HEAD(&ghvm->resources); + INIT_LIST_HEAD(&ghvm->resource_tickets); return ghvm; } static int gunyah_vm_start(struct gunyah_vm *ghvm) { - int ret; + struct gunyah_rm_hyp_resources *resources; + struct gunyah_resource *ghrsc; + int ret, i, n; down_write(&ghvm->status_lock); if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE) { @@ -134,6 +239,25 @@ static int gunyah_vm_start(struct gunyah_vm *ghvm) } ghvm->vm_status = GUNYAH_RM_VM_STATUS_READY; + ret = gunyah_rm_get_hyp_resources(ghvm->rm, ghvm->vmid, &resources); + if (ret) { + dev_warn(ghvm->parent, + "Failed to get hypervisor resources for VM: %d\n", + ret); + goto err; + } + + for (i = 0, n = le32_to_cpu(resources->n_entries); i < n; i++) { + ghrsc = gunyah_rm_alloc_resource(ghvm->rm, + &resources->entries[i]); + if (!ghrsc) { + ret = -ENOMEM; + goto err; + } + + gunyah_vm_add_resource(ghvm, ghrsc); + } + ret = gunyah_rm_vm_start(ghvm->rm, ghvm->vmid); if (ret) { dev_warn(ghvm->parent, "Failed to start VM: %d\n", ret); @@ -209,6 +333,8 @@ static int gunyah_vm_release(struct inode *inode, struct file *filp) if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING) gunyah_vm_stop(ghvm); + gunyah_vm_clean_resources(ghvm); + if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE && ghvm->vm_status != GUNYAH_RM_VM_STATUS_LOAD && ghvm->vm_status != GUNYAH_RM_VM_STATUS_RESET) { diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index e6cc9aead0b6..0d291f722885 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -26,6 +26,9 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * @vm_status: Current state of the VM, as last reported by RM * @vm_status_wait: Wait queue for status @vm_status changes * @status_lock: Serializing state transitions + * @resource_lock: Serializing addition of resources and resource tickets + * @resources: List of &struct gunyah_resource that are associated with this VM + * @resource_tickets: List of &struct gunyah_vm_resource_ticket * @auth: Authentication mechanism to be used by resource manager when * launching the VM * @@ -39,9 +42,13 @@ struct gunyah_vm { enum gunyah_rm_vm_status vm_status; wait_queue_head_t vm_status_wait; struct rw_semaphore status_lock; + struct mutex resources_lock; + struct list_head resources; + struct list_head resource_tickets; struct device *parent; enum gunyah_rm_vm_auth_mechanism 
auth; + }; #endif diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index ede8abb1b276..001769100260 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -10,6 +10,7 @@ #include #include #include +#include #include /* Matches resource manager's resource types for VM_GET_HYP_RESOURCES RPC */ @@ -34,6 +35,44 @@ struct gunyah_resource { u32 rm_label; }; +struct gunyah_vm; + +/** + * struct gunyah_vm_resource_ticket - Represents a ticket to reserve access to VM resource(s) + * @vm_list: for @gunyah_vm->resource_tickets + * @resources: List of resource(s) associated with this ticket + * (members are from @gunyah_resource->list) + * @resource_type: Type of resource this ticket reserves + * @label: Label of the resource from resource manager this ticket reserves. + * @owner: owner of the ticket + * @populate: callback provided by the ticket owner and called when a resource is found that + * matches @resource_type and @label. Note that this callback could be called + * multiple times if userspace created mutliple resources with the same type/label. + * This callback may also have significant delay after gunyah_vm_add_resource_ticket() + * since gunyah_vm_add_resource_ticket() could be called before the VM starts. + * @unpopulate: callback provided by the ticket owner and called when the ticket owner should no + * longer use the resource provided in the argument. When unpopulate() returns, + * the ticket owner should not be able to use the resource any more as the resource + * might being freed. + */ +struct gunyah_vm_resource_ticket { + struct list_head vm_list; + struct list_head resources; + enum gunyah_resource_type resource_type; + u32 label; + + struct module *owner; + bool (*populate)(struct gunyah_vm_resource_ticket *ticket, + struct gunyah_resource *ghrsc); + void (*unpopulate)(struct gunyah_vm_resource_ticket *ticket, + struct gunyah_resource *ghrsc); +}; + +int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, + struct gunyah_vm_resource_ticket *ticket); +void gunyah_vm_remove_resource_ticket(struct gunyah_vm *ghvm, + struct gunyah_vm_resource_ticket *ticket); + /******************************************************************************/ /* Common arch-independent definitions for Gunyah hypercalls */ #define GUNYAH_CAPID_INVAL U64_MAX From patchwork Tue Jan 9 19:37:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761091 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D73843EA7B; Tue, 9 Jan 2024 19:38:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="fpc1hZF6" Received: from pps.filterd (m0279866.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409HbTnp007926; Tue, 9 Jan 2024 19:37:59 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=mor3PjSL9nlkSdArAusagsgYfSKA55S60vleqZR3Ge0 =; 
b=fpc1hZF6yiMDJBOD7UHjRhR+fqNfWYWI+PRt7PCYWjvF2fRlscjwF0+xipF 9dKTFaJrvjdSk5TECmgfhcAKVk5i+iuyWcIzeHuNlpvntdv2mB6/Ecv1PUkxRbbk nkkfnQAC8zHjRcEg9Kq0iMj9W0YEoGG0IHLG3SBtsEDetMokRo5jV8bcgjrRar2Q vGWUhqK8HMX5+AZUwy9PK7B/hR4IJGnuqCSj2UlexJUwMUpEpSykfyVJZyPSBLx1 pCFcdZuPJukAZgASN3L6AWiFvxxq52gpzswnEm4kum3k9uMZ7kM4QlLWWrJb+O+9 04An+SJWdU0ancjlK7zfS/re9jg== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9evrfhb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:58 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jbwao030416 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:58 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:57 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:51 -0800 Subject: [PATCH v16 13/34] gunyah: vm_mgr: Add framework for VM Functions Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-13-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: W6LFEG2ACudCKEUd5w0l_1futlHRQVbP X-Proofpoint-ORIG-GUID: W6LFEG2ACudCKEUd5w0l_1futlHRQVbP X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 bulkscore=0 mlxlogscore=999 lowpriorityscore=0 mlxscore=0 priorityscore=1501 adultscore=0 suspectscore=0 clxscore=1015 spamscore=0 phishscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Introduce a framework for Gunyah userspace to install VM functions. VM functions are optional interfaces to the virtual machine. vCPUs, ioeventfs, and irqfds are examples of such VM functions and are implemented in subsequent patches. A generic framework is implemented instead of individual ioctls to create vCPUs, irqfds, etc., in order to simplify the VM manager core implementation and allow dynamic loading of VM function modules. 
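As a rough illustration of the framework, the sketch below shows the shape of a loadable VM function module. The function type GUNYAH_FN_EXAMPLE and its value 0xff are hypothetical; the bind/unbind/compare hooks, the instance fields and DECLARE_GUNYAH_VM_FUNCTION_INIT() are the interfaces introduced in this patch, and the core copies the userspace argument into inst->argp before calling bind().

#include <linux/gunyah.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/types.h>

/* Hedged sketch of a VM function module; everything named example_* and the
 * GUNYAH_FN_EXAMPLE type (assumed value 0xff) are made up for illustration.
 */
struct example_fn_arg {
        __u32 label;
};

static long example_fn_bind(struct gunyah_vm_function_instance *inst)
{
        struct example_fn_arg *arg = inst->argp;

        if (inst->arg_size != sizeof(*arg))
                return -EINVAL;

        /* Set up per-instance state here (e.g. register a resource ticket
         * for arg->label) and stash it in inst->data for unbind().
         */
        inst->data = NULL;
        return 0;
}

static void example_fn_unbind(struct gunyah_vm_function_instance *inst)
{
        /* Tear down whatever bind() created. */
}

static bool example_fn_compare(const struct gunyah_vm_function_instance *inst,
                               const void *arg, size_t size)
{
        /* Used by GUNYAH_VM_REMOVE_FUNCTION to find this instance. */
        if (size != sizeof(struct example_fn_arg))
                return false;

        return !memcmp(inst->argp, arg, size);
}

DECLARE_GUNYAH_VM_FUNCTION_INIT(example_fn, GUNYAH_FN_EXAMPLE, 0xff,
                                example_fn_bind, example_fn_unbind,
                                example_fn_compare);
MODULE_LICENSE("GPL");

The MODULE_ALIAS emitted by the macro ("ghfunc:<idx>") is what allows the GUNYAH_VM_ADD_FUNCTION path to request_module() the matching backend on demand when userspace asks for a function type that is not yet registered.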
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 207 ++++++++++++++++++++++++++++++++++++++++++- drivers/virt/gunyah/vm_mgr.h | 10 +++ include/linux/gunyah.h | 87 +++++++++++++++++- include/uapi/linux/gunyah.h | 18 ++++ 4 files changed, 318 insertions(+), 4 deletions(-) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 65badcf6357b..5d4f413f7a76 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -6,15 +6,175 @@ #define pr_fmt(fmt) "gunyah_vm_mgr: " fmt #include +#include #include #include #include +#include #include #include "rsc_mgr.h" #include "vm_mgr.h" +static DEFINE_XARRAY(gunyah_vm_functions); + +static void gunyah_vm_put_function(struct gunyah_vm_function *fn) +{ + module_put(fn->mod); +} + +static struct gunyah_vm_function *gunyah_vm_get_function(u32 type) +{ + struct gunyah_vm_function *fn; + + fn = xa_load(&gunyah_vm_functions, type); + if (!fn) { + request_module("ghfunc:%d", type); + + fn = xa_load(&gunyah_vm_functions, type); + } + + if (!fn || !try_module_get(fn->mod)) + fn = ERR_PTR(-ENOENT); + + return fn; +} + +static void +gunyah_vm_remove_function_instance(struct gunyah_vm_function_instance *inst) + __must_hold(&inst->ghvm->fn_lock) +{ + inst->fn->unbind(inst); + list_del(&inst->vm_list); + gunyah_vm_put_function(inst->fn); + kfree(inst->argp); + kfree(inst); +} + +static void gunyah_vm_remove_functions(struct gunyah_vm *ghvm) +{ + struct gunyah_vm_function_instance *inst, *iiter; + + mutex_lock(&ghvm->fn_lock); + list_for_each_entry_safe(inst, iiter, &ghvm->functions, vm_list) { + gunyah_vm_remove_function_instance(inst); + } + mutex_unlock(&ghvm->fn_lock); +} + +static long gunyah_vm_add_function_instance(struct gunyah_vm *ghvm, + struct gunyah_fn_desc *f) +{ + struct gunyah_vm_function_instance *inst; + void __user *argp; + long r = 0; + + if (f->arg_size > GUNYAH_FN_MAX_ARG_SIZE) { + dev_err_ratelimited(ghvm->parent, "%s: arg_size > %d\n", + __func__, GUNYAH_FN_MAX_ARG_SIZE); + return -EINVAL; + } + + inst = kzalloc(sizeof(*inst), GFP_KERNEL); + if (!inst) + return -ENOMEM; + + inst->arg_size = f->arg_size; + if (inst->arg_size) { + inst->argp = kzalloc(inst->arg_size, GFP_KERNEL); + if (!inst->argp) { + r = -ENOMEM; + goto free; + } + + argp = u64_to_user_ptr(f->arg); + if (copy_from_user(inst->argp, argp, f->arg_size)) { + r = -EFAULT; + goto free_arg; + } + } + + inst->fn = gunyah_vm_get_function(f->type); + if (IS_ERR(inst->fn)) { + r = PTR_ERR(inst->fn); + goto free_arg; + } + + inst->ghvm = ghvm; + inst->rm = ghvm->rm; + + mutex_lock(&ghvm->fn_lock); + r = inst->fn->bind(inst); + if (r < 0) { + mutex_unlock(&ghvm->fn_lock); + gunyah_vm_put_function(inst->fn); + goto free_arg; + } + + list_add(&inst->vm_list, &ghvm->functions); + mutex_unlock(&ghvm->fn_lock); + + return r; +free_arg: + kfree(inst->argp); +free: + kfree(inst); + return r; +} + +static long gunyah_vm_rm_function_instance(struct gunyah_vm *ghvm, + struct gunyah_fn_desc *f) +{ + struct gunyah_vm_function_instance *inst, *iter; + void __user *user_argp; + void *argp __free(kfree) = NULL; + long r = 0; + + if (f->arg_size) { + argp = kzalloc(f->arg_size, GFP_KERNEL); + if (!argp) + return -ENOMEM; + + user_argp = u64_to_user_ptr(f->arg); + if (copy_from_user(argp, user_argp, f->arg_size)) + return -EFAULT; + } + + r = mutex_lock_interruptible(&ghvm->fn_lock); + if (r) + return r; + + r = -ENOENT; + list_for_each_entry_safe(inst, iter, &ghvm->functions, vm_list) { + if (inst->fn->type == f->type && + inst->fn->compare(inst, argp, 
f->arg_size)) { + gunyah_vm_remove_function_instance(inst); + r = 0; + } + } + + mutex_unlock(&ghvm->fn_lock); + return r; +} + +int gunyah_vm_function_register(struct gunyah_vm_function *fn) +{ + if (!fn->bind || !fn->unbind) + return -EINVAL; + + return xa_err(xa_store(&gunyah_vm_functions, fn->type, fn, GFP_KERNEL)); +} +EXPORT_SYMBOL_GPL(gunyah_vm_function_register); + +void gunyah_vm_function_unregister(struct gunyah_vm_function *fn) +{ + /* Expecting unregister to only come when unloading a module */ + WARN_ON(fn->mod && module_refcount(fn->mod)); + xa_erase(&gunyah_vm_functions, fn->type); +} +EXPORT_SYMBOL_GPL(gunyah_vm_function_unregister); + int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, struct gunyah_vm_resource_ticket *ticket) { @@ -191,7 +351,11 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) init_rwsem(&ghvm->status_lock); init_waitqueue_head(&ghvm->vm_status_wait); + kref_init(&ghvm->kref); ghvm->vm_status = GUNYAH_RM_VM_STATUS_NO_STATE; + + INIT_LIST_HEAD(&ghvm->functions); + mutex_init(&ghvm->fn_lock); mutex_init(&ghvm->resources_lock); INIT_LIST_HEAD(&ghvm->resources); INIT_LIST_HEAD(&ghvm->resource_tickets); @@ -306,6 +470,7 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) { struct gunyah_vm *ghvm = filp->private_data; + void __user *argp = (void __user *)arg; long r; switch (cmd) { @@ -313,6 +478,24 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, r = gunyah_vm_ensure_started(ghvm); break; } + case GUNYAH_VM_ADD_FUNCTION: { + struct gunyah_fn_desc f; + + if (copy_from_user(&f, argp, sizeof(f))) + return -EFAULT; + + r = gunyah_vm_add_function_instance(ghvm, &f); + break; + } + case GUNYAH_VM_REMOVE_FUNCTION: { + struct gunyah_fn_desc f; + + if (copy_from_user(&f, argp, sizeof(f))) + return -EFAULT; + + r = gunyah_vm_rm_function_instance(ghvm, &f); + break; + } default: r = -ENOTTY; break; @@ -321,9 +504,15 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, return r; } -static int gunyah_vm_release(struct inode *inode, struct file *filp) +int __must_check gunyah_vm_get(struct gunyah_vm *ghvm) { - struct gunyah_vm *ghvm = filp->private_data; + return kref_get_unless_zero(&ghvm->kref); +} +EXPORT_SYMBOL_GPL(gunyah_vm_get); + +static void _gunyah_vm_put(struct kref *kref) +{ + struct gunyah_vm *ghvm = container_of(kref, struct gunyah_vm, kref); int ret; /** @@ -333,6 +522,7 @@ static int gunyah_vm_release(struct inode *inode, struct file *filp) if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING) gunyah_vm_stop(ghvm); + gunyah_vm_remove_functions(ghvm); gunyah_vm_clean_resources(ghvm); if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE && @@ -357,6 +547,19 @@ static int gunyah_vm_release(struct inode *inode, struct file *filp) gunyah_rm_put(ghvm->rm); kfree(ghvm); +} + +void gunyah_vm_put(struct gunyah_vm *ghvm) +{ + kref_put(&ghvm->kref, _gunyah_vm_put); +} +EXPORT_SYMBOL_GPL(gunyah_vm_put); + +static int gunyah_vm_release(struct inode *inode, struct file *filp) +{ + struct gunyah_vm *ghvm = filp->private_data; + + gunyah_vm_put(ghvm); return 0; } diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 0d291f722885..190a95ee8da6 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -7,6 +7,8 @@ #define _GUNYAH_VM_MGR_PRIV_H #include +#include +#include #include #include @@ -26,6 +28,10 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * @vm_status: Current state of the VM, as last reported 
by RM * @vm_status_wait: Wait queue for status @vm_status changes * @status_lock: Serializing state transitions + * @kref: Reference counter for VM functions + * @fn_lock: Serialization addition of functions + * @functions: List of &struct gunyah_vm_function_instance that have been + * created by user for this VM. * @resource_lock: Serializing addition of resources and resource tickets * @resources: List of &struct gunyah_resource that are associated with this VM * @resource_tickets: List of &struct gunyah_vm_resource_ticket @@ -42,6 +48,10 @@ struct gunyah_vm { enum gunyah_rm_vm_status vm_status; wait_queue_head_t vm_status_wait; struct rw_semaphore status_lock; + + struct kref kref; + struct mutex fn_lock; + struct list_head functions; struct mutex resources_lock; struct list_head resources; struct list_head resource_tickets; diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 001769100260..359cd63b4938 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -11,8 +11,93 @@ #include #include #include +#include #include +#include + +struct gunyah_vm; + +int __must_check gunyah_vm_get(struct gunyah_vm *ghvm); +void gunyah_vm_put(struct gunyah_vm *ghvm); + +struct gunyah_vm_function_instance; +/** + * struct gunyah_vm_function - Represents a function type + * @type: value from &enum gunyah_fn_type + * @name: friendly name for debug purposes + * @mod: owner of the function type + * @bind: Called when a new function of this type has been allocated. + * @unbind: Called when the function instance is being destroyed. + * @compare: Compare function instance @f's argument to the provided arg. + * Return true if they are equivalent. Used on GUNYAH_VM_REMOVE_FUNCTION. + */ +struct gunyah_vm_function { + u32 type; + const char *name; + struct module *mod; + long (*bind)(struct gunyah_vm_function_instance *f); + void (*unbind)(struct gunyah_vm_function_instance *f); + bool (*compare)(const struct gunyah_vm_function_instance *f, + const void *arg, size_t size); +}; + +/** + * struct gunyah_vm_function_instance - Represents one function instance + * @arg_size: size of user argument + * @argp: pointer to user argument + * @ghvm: Pointer to VM instance + * @rm: Pointer to resource manager for the VM instance + * @fn: The ops for the function + * @data: Private data for function + * @vm_list: for gunyah_vm's functions list + * @fn_list: for gunyah_vm_function's instances list + */ +struct gunyah_vm_function_instance { + size_t arg_size; + void *argp; + struct gunyah_vm *ghvm; + struct gunyah_rm *rm; + struct gunyah_vm_function *fn; + void *data; + struct list_head vm_list; +}; + +int gunyah_vm_function_register(struct gunyah_vm_function *f); +void gunyah_vm_function_unregister(struct gunyah_vm_function *f); + +/* Since the function identifiers were setup in a uapi header as an + * enum and we do no want to change that, the user must supply the expanded + * constant as well and the compiler checks they are the same. + * See also MODULE_ALIAS_RDMA_NETLINK. 
+ */ +#define MODULE_ALIAS_GUNYAH_VM_FUNCTION(_type, _idx) \ + static inline void __maybe_unused __chk##_idx(void) \ + { \ + BUILD_BUG_ON(_type != _idx); \ + } \ + MODULE_ALIAS("ghfunc:" __stringify(_idx)) + +#define DECLARE_GUNYAH_VM_FUNCTION(_name, _type, _bind, _unbind, _compare) \ + static struct gunyah_vm_function _name = { \ + .type = _type, \ + .name = __stringify(_name), \ + .mod = THIS_MODULE, \ + .bind = _bind, \ + .unbind = _unbind, \ + .compare = _compare, \ + } + +#define module_gunyah_vm_function(__gf) \ + module_driver(__gf, gunyah_vm_function_register, \ + gunyah_vm_function_unregister) + +#define DECLARE_GUNYAH_VM_FUNCTION_INIT(_name, _type, _idx, _bind, _unbind, \ + _compare) \ + DECLARE_GUNYAH_VM_FUNCTION(_name, _type, _bind, _unbind, _compare); \ + module_gunyah_vm_function(_name); \ + MODULE_ALIAS_GUNYAH_VM_FUNCTION(_type, _idx) + /* Matches resource manager's resource types for VM_GET_HYP_RESOURCES RPC */ enum gunyah_resource_type { /* clang-format off */ @@ -35,8 +120,6 @@ struct gunyah_resource { u32 rm_label; }; -struct gunyah_vm; - /** * struct gunyah_vm_resource_ticket - Represents a ticket to reserve access to VM resource(s) * @vm_list: for @gunyah_vm->resource_tickets diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index 31e7f79a6c39..1b7cb5fde70a 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -25,4 +25,22 @@ */ #define GUNYAH_VM_START _IO(GUNYAH_IOCTL_TYPE, 0x3) +#define GUNYAH_FN_MAX_ARG_SIZE 256 + +/** + * struct gunyah_fn_desc - Arguments to create a VM function + * @type: Type of the function. See &enum gunyah_fn_type. + * @arg_size: Size of argument to pass to the function. arg_size <= GUNYAH_FN_MAX_ARG_SIZE + * @arg: Pointer to argument given to the function. See &enum gunyah_fn_type for expected + * arguments for a function type. 
+ */ +struct gunyah_fn_desc { + __u32 type; + __u32 arg_size; + __u64 arg; +}; + +#define GUNYAH_VM_ADD_FUNCTION _IOW(GUNYAH_IOCTL_TYPE, 0x4, struct gunyah_fn_desc) +#define GUNYAH_VM_REMOVE_FUNCTION _IOW(GUNYAH_IOCTL_TYPE, 0x7, struct gunyah_fn_desc) + #endif From patchwork Tue Jan 9 19:37:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761570 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 057D03D3A0; Tue, 9 Jan 2024 19:38:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="mS13sR+J" Received: from pps.filterd (m0279867.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409JU9kp019882; Tue, 9 Jan 2024 19:37:59 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=jqaSvaTcT+GljjucYsOF+tmQD3YLMOiGYv13cJ7TCoM =; b=mS13sR+JttsNvcp6XncL1NE4Z+aQTzx6gyas4nrjZDP95B/RvilY/8oaI0a IrEFyq+I6d0z8BKLwxl3DtNx3WVo2ZWPnsKnFYc0CP4DshreW7BEz6vQs3q3lxlg 6+IDUH+nEnX0cfstiddGgIw5G49XiWy2l8URmkJQmtT9q5Mylwbq2jp89fH3eMYM b2U14+nPwo4LyvTYRvFk9tvi3ipqBeDobOq/3IT6qEgDUYhj2YNQKyV0/VooQmho 53BQBW0/kLx2EDzS2jpzQnaxOjKBhGGW2QRvTgGdW1/3M+NQ/WJUqgeh/5rRoea/ CxKdIO+ybREZ1IUjN7LtUB8sdvA== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vgxxbhtdx-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:37:59 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409JbwSp011941 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:58 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:57 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:52 -0800 Subject: [PATCH v16 14/34] virt: gunyah: Add hypercalls for running a vCPU Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-14-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To 
nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: wCModZ2NGEmXJVf9EloG8vARBtmLpUR5 X-Proofpoint-GUID: wCModZ2NGEmXJVf9EloG8vARBtmLpUR5 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=446 lowpriorityscore=0 priorityscore=1501 bulkscore=0 phishscore=0 spamscore=0 clxscore=1015 mlxscore=0 adultscore=0 impostorscore=0 malwarescore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Add hypercall to donate CPU time to a vCPU. Signed-off-by: Elliot Berman --- arch/arm64/gunyah/gunyah_hypercall.c | 37 ++++++++++++++++++++++++++++++++++++ include/linux/gunyah.h | 35 ++++++++++++++++++++++++++++++++++ 2 files changed, 72 insertions(+) diff --git a/arch/arm64/gunyah/gunyah_hypercall.c b/arch/arm64/gunyah/gunyah_hypercall.c index 1302e128be6e..fee21df42c17 100644 --- a/arch/arm64/gunyah/gunyah_hypercall.c +++ b/arch/arm64/gunyah/gunyah_hypercall.c @@ -39,6 +39,7 @@ EXPORT_SYMBOL_GPL(arch_is_gunyah_guest); #define GUNYAH_HYPERCALL_HYP_IDENTIFY GUNYAH_HYPERCALL(0x8000) #define GUNYAH_HYPERCALL_MSGQ_SEND GUNYAH_HYPERCALL(0x801B) #define GUNYAH_HYPERCALL_MSGQ_RECV GUNYAH_HYPERCALL(0x801C) +#define GUNYAH_HYPERCALL_VCPU_RUN GUNYAH_HYPERCALL(0x8065) /* clang-format on */ /** @@ -113,5 +114,41 @@ enum gunyah_error gunyah_hypercall_msgq_recv(u64 capid, void *buff, size_t size, } EXPORT_SYMBOL_GPL(gunyah_hypercall_msgq_recv); +/** + * gunyah_hypercall_vcpu_run() - Donate CPU time to a vcpu + * @capid: capability ID of the vCPU to run + * @resume_data: Array of 3 state-specific resume data + * @resp: Filled reason why vCPU exited when return value is GUNYAH_ERROR_OK + * + * See also: + * https://github.com/quic/gunyah-hypervisor/blob/develop/docs/api/gunyah_api.md#run-a-proxy-scheduled-vcpu-thread + */ +enum gunyah_error +gunyah_hypercall_vcpu_run(u64 capid, unsigned long *resume_data, + struct gunyah_hypercall_vcpu_run_resp *resp) +{ + struct arm_smccc_1_2_regs args = { + .a0 = GUNYAH_HYPERCALL_VCPU_RUN, + .a1 = capid, + .a2 = resume_data[0], + .a3 = resume_data[1], + .a4 = resume_data[2], + /* C language says this will be implictly zero. 
Gunyah requires 0, so be explicit */ + .a5 = 0, + }; + struct arm_smccc_1_2_regs res; + + arm_smccc_1_2_hvc(&args, &res); + if (res.a0 == GUNYAH_ERROR_OK) { + resp->sized_state = res.a1; + resp->state_data[0] = res.a2; + resp->state_data[1] = res.a3; + resp->state_data[2] = res.a4; + } + + return res.a0; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_vcpu_run); + MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("Gunyah Hypervisor Hypercalls"); diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 359cd63b4938..8405b2faf774 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -274,4 +274,39 @@ enum gunyah_error gunyah_hypercall_msgq_send(u64 capid, size_t size, void *buff, enum gunyah_error gunyah_hypercall_msgq_recv(u64 capid, void *buff, size_t size, size_t *recv_size, bool *ready); +struct gunyah_hypercall_vcpu_run_resp { + union { + enum { + /* clang-format off */ + /* VCPU is ready to run */ + GUNYAH_VCPU_STATE_READY = 0, + /* VCPU is sleeping until an interrupt arrives */ + GUNYAH_VCPU_STATE_EXPECTS_WAKEUP = 1, + /* VCPU is powered off */ + GUNYAH_VCPU_STATE_POWERED_OFF = 2, + /* VCPU is blocked in EL2 for unspecified reason */ + GUNYAH_VCPU_STATE_BLOCKED = 3, + /* VCPU has returned for MMIO READ */ + GUNYAH_VCPU_ADDRSPACE_VMMIO_READ = 4, + /* VCPU has returned for MMIO WRITE */ + GUNYAH_VCPU_ADDRSPACE_VMMIO_WRITE = 5, + /* VCPU blocked on fault where we can demand page */ + GUNYAH_VCPU_ADDRSPACE_PAGE_FAULT = 7, + /* clang-format on */ + } state; + u64 sized_state; + }; + u64 state_data[3]; +}; + +enum { + GUNYAH_ADDRSPACE_VMMIO_ACTION_EMULATE = 0, + GUNYAH_ADDRSPACE_VMMIO_ACTION_RETRY = 1, + GUNYAH_ADDRSPACE_VMMIO_ACTION_FAULT = 2, +}; + +enum gunyah_error +gunyah_hypercall_vcpu_run(u64 capid, unsigned long *resume_data, + struct gunyah_hypercall_vcpu_run_resp *resp); + #endif From patchwork Tue Jan 9 19:37:53 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761566 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EE01340BE8; Tue, 9 Jan 2024 19:38:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="nmVGxi57" Received: from pps.filterd (m0279862.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409Er235000860; Tue, 9 Jan 2024 19:38:00 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=4lM0JVVYQLJT2z4piUn0A5bpNItJWqp93hg1dTCIGUo =; b=nmVGxi57i6Hz66foressk7Npgy0RWtWPIaPisAkL9t17w7/hZ/vJT+AcTIC 1mjxklx7iNIVxWkpBhIe7lQd3+//5JCgPBzxXYe1/XLt+HAooHTISas1kSr9iGtQ sGadnSnck/CX4NjQJZiJq7mJEH64a694u27OFKPY1ZivckYGRtu7CrUWW6UMAdCJ ELHX2uisxiYcxtR0apbZps8XOIXBV92xUdk+L6JHBCG5pFmVIUgys9An3pB7StXN QdydKoX3wKHvpROUyzOWo6KGO35r6A8jl6GjTBB7a2b6Bp3kg0H44nLAoCgO5U5f G+4MK3vdKHbDifxiZbB+EJpeTRg== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 
3vh3me1851-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:00 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409JbxFE030438 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:37:59 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:58 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:53 -0800 Subject: [PATCH v16 15/34] virt: gunyah: Add proxy-scheduled vCPUs Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-15-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: PxJYOU8rDFxI7KkcDCs-veyaHo31uZCZ X-Proofpoint-GUID: PxJYOU8rDFxI7KkcDCs-veyaHo31uZCZ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 spamscore=0 priorityscore=1501 mlxlogscore=999 lowpriorityscore=0 mlxscore=0 suspectscore=0 impostorscore=0 clxscore=1015 adultscore=0 malwarescore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Gunyah allows vCPUs that are configured as proxy-scheduled to be scheduled by another virtual machine (host) that holds capabilities to those vCPUs with suitable rights. Gunyah also supports configuring regions of a proxy-scheduled VM's address space to be virtualized by the host VM. This permits a host VMM to emulate MMIO devices in the proxy-scheduled VM. vCPUs are presented to the host as a Gunyah resource and represented to userspace as a Gunyah VM function. Creating the vcpu function on the VM will create a file descriptor that: - can handle an ioctl to run the vCPU. When called, Gunyah will directly context-switch to the selected vCPU and run it until one of the following events occurs: * the host vcpu's time slice ends * the host vcpu receives an interrupt or would have been pre-empted by the hypervisor * a fault occurs in the proxy-scheduled vcpu * a power management event, such as idle or cpu-off call in the vcpu - can be mmap'd to share the gunyah_vcpu_run structure with userspace. This allows the vcpu_run result codes to be accessed, and for arguments to vcpu_run to be passed, e.g. for resuming the vcpu when handling certain fault and exit cases. 
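For orientation, a heavily hedged userspace sketch of the intended flow follows: the VMM creates the vCPU function on the VM fd, mmaps the shared gunyah_vcpu_run page, and loops on the run ioctl. The function type GUNYAH_FN_VCPU, the vCPU argument layout and the run ioctl name GUNYAH_VCPU_RUN are assumptions (the exact uapi names are not quoted in this excerpt); GUNYAH_VM_ADD_FUNCTION, struct gunyah_fn_desc and the MMIO exit/resume codes are taken from this series.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/gunyah.h>

struct example_vcpu_arg {        /* stand-in for the uapi vCPU argument */
        uint32_t id;
};

static int example_run_vcpu(int vm_fd, uint32_t vcpu_id)
{
        struct example_vcpu_arg arg = { .id = vcpu_id };
        struct gunyah_fn_desc desc = {
                .type = GUNYAH_FN_VCPU,                 /* assumed enumerator */
                .arg_size = sizeof(arg),
                .arg = (uint64_t)(uintptr_t)&arg,
        };
        struct gunyah_vcpu_run *run;
        long page = sysconf(_SC_PAGESIZE);
        int vcpu_fd;

        /* GUNYAH_VM_ADD_FUNCTION returns the new vCPU file descriptor */
        vcpu_fd = ioctl(vm_fd, GUNYAH_VM_ADD_FUNCTION, &desc);
        if (vcpu_fd < 0)
                return -1;

        /* Map the vcpu_run state page shared with the kernel */
        run = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, vcpu_fd, 0);
        if (run == MAP_FAILED)
                return -1;

        for (;;) {
                /* Donate this thread's time to the vCPU (ioctl name assumed) */
                if (ioctl(vcpu_fd, GUNYAH_VCPU_RUN, 0) < 0)
                        return -1;

                switch (run->exit_reason) {
                case GUNYAH_VCPU_EXIT_MMIO:
                        if (!run->mmio.is_write) {
                                /* Emulate the read and hand the value back */
                                uint64_t val = 0;

                                memcpy(run->mmio.data, &val, run->mmio.len);
                        }
                        run->mmio.resume_action = GUNYAH_VCPU_RESUME_HANDLED;
                        break;
                default:
                        /* Page faults, VM shutdown, etc. left to a real VMM */
                        return 0;
                }
        }
}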
Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 2 +- drivers/virt/gunyah/gunyah_vcpu.c | 557 ++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/vm_mgr.c | 5 + drivers/virt/gunyah/vm_mgr.h | 2 + include/uapi/linux/gunyah.h | 163 +++++++++++ 5 files changed, 728 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index 47f1fae5419b..3f82af8c5ce7 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -2,4 +2,4 @@ gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o -obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o +obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o diff --git a/drivers/virt/gunyah/gunyah_vcpu.c b/drivers/virt/gunyah/gunyah_vcpu.c new file mode 100644 index 000000000000..b636b54dc9a1 --- /dev/null +++ b/drivers/virt/gunyah/gunyah_vcpu.c @@ -0,0 +1,557 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "vm_mgr.h" + +#include + +#define MAX_VCPU_NAME 20 /* gh-vcpu:strlen(U32::MAX)+NUL */ + +/** + * struct gunyah_vcpu - Track an instance of gunyah vCPU + * @f: Function instance (how we get associated with the main VM) + * @rsc: Pointer to the Gunyah vCPU resource, will be NULL until VM starts + * @run_lock: One userspace thread at a time should run the vCPU + * @ghvm: Pointer to the main VM struct; quicker look up than going through + * @f->ghvm + * @vcpu_run: Pointer to page shared with userspace to communicate vCPU state + * @state: Our copy of the state of the vCPU, since userspace could trick + * kernel to behave incorrectly if we relied on @vcpu_run + * @mmio_read_len: Our copy of @vcpu_run->mmio.len; see also @state + * @mmio_addr: Our copy of @vcpu_run->mmio.phys_addr; see also @state + * @ready: if vCPU goes to sleep, hypervisor reports to us that it's sleeping + * and will signal interrupt (from @rsc) when it's time to wake up. + * This completion signals that we can run vCPU again. + * @nb: When VM exits, the status of VM is reported via @vcpu_run->status. + * We need to track overall VM status, and the nb gives us the updates from + * Resource Manager. + * @ticket: resource ticket to claim vCPU# for the VM + * @kref: Reference counter + */ +struct gunyah_vcpu { + struct gunyah_vm_function_instance *f; + struct gunyah_resource *rsc; + struct mutex run_lock; + struct gunyah_vm *ghvm; + + struct gunyah_vcpu_run *vcpu_run; + + /** + * Track why the vcpu_run hypercall returned. This mirrors the vcpu_run + * structure shared with userspace, except is used internally to avoid + * trusting userspace to not modify the vcpu_run structure. 
+ */ + enum { + GUNYAH_VCPU_RUN_STATE_UNKNOWN = 0, + GUNYAH_VCPU_RUN_STATE_READY, + GUNYAH_VCPU_RUN_STATE_MMIO_READ, + GUNYAH_VCPU_RUN_STATE_MMIO_WRITE, + GUNYAH_VCPU_RUN_STATE_SYSTEM_DOWN, + } state; + u8 mmio_read_len; + u64 mmio_addr; + + struct completion ready; + + struct notifier_block nb; + struct gunyah_vm_resource_ticket ticket; + struct kref kref; +}; + +static void vcpu_release(struct kref *kref) +{ + struct gunyah_vcpu *vcpu = container_of(kref, struct gunyah_vcpu, kref); + + free_page((unsigned long)vcpu->vcpu_run); + kfree(vcpu); +} + +/* + * When hypervisor allows us to schedule vCPU again, it gives us an interrupt + */ +static irqreturn_t gunyah_vcpu_irq_handler(int irq, void *data) +{ + struct gunyah_vcpu *vcpu = data; + + complete(&vcpu->ready); + return IRQ_HANDLED; +} + +static void gunyah_handle_page_fault( + struct gunyah_vcpu *vcpu, + const struct gunyah_hypercall_vcpu_run_resp *vcpu_run_resp) +{ + u64 addr = vcpu_run_resp->state_data[0]; + + vcpu->vcpu_run->page_fault.resume_action = GUNYAH_VCPU_RESUME_FAULT; + vcpu->vcpu_run->page_fault.attempt = 0; + vcpu->vcpu_run->page_fault.phys_addr = addr; + vcpu->vcpu_run->exit_reason = GUNYAH_VCPU_EXIT_PAGE_FAULT; +} + +static void +gunyah_handle_mmio(struct gunyah_vcpu *vcpu, + const struct gunyah_hypercall_vcpu_run_resp *vcpu_run_resp) +{ + u64 addr = vcpu_run_resp->state_data[0], + len = vcpu_run_resp->state_data[1], + data = vcpu_run_resp->state_data[2]; + + if (WARN_ON(len > sizeof(u64))) + len = sizeof(u64); + + if (vcpu_run_resp->state == GUNYAH_VCPU_ADDRSPACE_VMMIO_READ) { + vcpu->vcpu_run->mmio.is_write = 0; + /* Record that we need to give vCPU user's supplied value next gunyah_vcpu_run() */ + vcpu->state = GUNYAH_VCPU_RUN_STATE_MMIO_READ; + vcpu->mmio_read_len = len; + } else { /* GUNYAH_VCPU_ADDRSPACE_VMMIO_WRITE */ + vcpu->vcpu_run->mmio.is_write = 1; + memcpy(vcpu->vcpu_run->mmio.data, &data, len); + vcpu->state = GUNYAH_VCPU_RUN_STATE_MMIO_WRITE; + } + + vcpu->vcpu_run->mmio.resume_action = 0; + vcpu->mmio_addr = vcpu->vcpu_run->mmio.phys_addr = addr; + vcpu->vcpu_run->mmio.len = len; + vcpu->vcpu_run->exit_reason = GUNYAH_VCPU_EXIT_MMIO; +} + +static int gunyah_handle_mmio_resume(struct gunyah_vcpu *vcpu, + unsigned long resume_data[3]) +{ + switch (vcpu->vcpu_run->mmio.resume_action) { + case GUNYAH_VCPU_RESUME_HANDLED: + if (vcpu->state == GUNYAH_VCPU_RUN_STATE_MMIO_READ) { + if (unlikely(vcpu->mmio_read_len > + sizeof(resume_data[0]))) + vcpu->mmio_read_len = sizeof(resume_data[0]); + memcpy(&resume_data[0], vcpu->vcpu_run->mmio.data, + vcpu->mmio_read_len); + } + resume_data[1] = GUNYAH_ADDRSPACE_VMMIO_ACTION_EMULATE; + break; + case GUNYAH_VCPU_RESUME_FAULT: + resume_data[1] = GUNYAH_ADDRSPACE_VMMIO_ACTION_FAULT; + break; + case GUNYAH_VCPU_RESUME_RETRY: + resume_data[1] = GUNYAH_ADDRSPACE_VMMIO_ACTION_RETRY; + break; + default: + return -EINVAL; + } + + return 0; +} + +static int gunyah_vcpu_rm_notification(struct notifier_block *nb, + unsigned long action, void *data) +{ + struct gunyah_vcpu *vcpu = container_of(nb, struct gunyah_vcpu, nb); + struct gunyah_rm_vm_exited_payload *exit_payload = data; + + /* Wake up userspace waiting for the vCPU to be runnable again */ + if (action == GUNYAH_RM_NOTIFICATION_VM_EXITED && + le16_to_cpu(exit_payload->vmid) == vcpu->ghvm->vmid) + complete(&vcpu->ready); + + return NOTIFY_OK; +} + +static inline enum gunyah_vm_status +remap_vm_status(enum gunyah_rm_vm_status rm_status) +{ + switch (rm_status) { + case GUNYAH_RM_VM_STATUS_INIT_FAILED: + return 
GUNYAH_VM_STATUS_LOAD_FAILED; + case GUNYAH_RM_VM_STATUS_EXITED: + return GUNYAH_VM_STATUS_EXITED; + default: + return GUNYAH_VM_STATUS_CRASHED; + } +} + +/** + * gunyah_vcpu_check_system() - Check whether VM as a whole is running + * @vcpu: Pointer to gunyah_vcpu + * + * Returns true if the VM is alive. + * Returns false if the vCPU is the VM is not alive (can only be that VM is shutting down). + */ +static bool gunyah_vcpu_check_system(struct gunyah_vcpu *vcpu) + __must_hold(&vcpu->run_lock) +{ + bool ret = true; + + down_read(&vcpu->ghvm->status_lock); + if (likely(vcpu->ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING)) + goto out; + + vcpu->vcpu_run->status.status = remap_vm_status(vcpu->ghvm->vm_status); + vcpu->vcpu_run->status.exit_info = vcpu->ghvm->exit_info; + vcpu->vcpu_run->exit_reason = GUNYAH_VCPU_EXIT_STATUS; + vcpu->state = GUNYAH_VCPU_RUN_STATE_SYSTEM_DOWN; + ret = false; +out: + up_read(&vcpu->ghvm->status_lock); + return ret; +} + +/** + * gunyah_vcpu_run() - Request Gunyah to begin scheduling this vCPU. + * @vcpu: The client descriptor that was obtained via gunyah_vcpu_alloc() + */ +static int gunyah_vcpu_run(struct gunyah_vcpu *vcpu) +{ + struct gunyah_hypercall_vcpu_run_resp vcpu_run_resp; + unsigned long resume_data[3] = { 0 }; + enum gunyah_error gunyah_error; + int ret = 0; + + if (!vcpu->f) + return -ENODEV; + + if (mutex_lock_interruptible(&vcpu->run_lock)) + return -ERESTARTSYS; + + if (!vcpu->rsc) { + ret = -ENODEV; + goto out; + } + + switch (vcpu->state) { + case GUNYAH_VCPU_RUN_STATE_UNKNOWN: + if (vcpu->ghvm->vm_status != GUNYAH_RM_VM_STATUS_RUNNING) { + /** + * Check if VM is up. If VM is starting, will block + * until VM is fully up since that thread does + * down_write. + */ + if (!gunyah_vcpu_check_system(vcpu)) + goto out; + } + vcpu->state = GUNYAH_VCPU_RUN_STATE_READY; + break; + case GUNYAH_VCPU_RUN_STATE_MMIO_READ: + case GUNYAH_VCPU_RUN_STATE_MMIO_WRITE: + ret = gunyah_handle_mmio_resume(vcpu, resume_data); + if (ret) + goto out; + vcpu->state = GUNYAH_VCPU_RUN_STATE_READY; + break; + case GUNYAH_VCPU_RUN_STATE_SYSTEM_DOWN: + goto out; + default: + break; + } + + while (!ret && !signal_pending(current)) { + if (vcpu->vcpu_run->immediate_exit) { + ret = -EINTR; + goto out; + } + + gunyah_error = gunyah_hypercall_vcpu_run( + vcpu->rsc->capid, resume_data, &vcpu_run_resp); + if (gunyah_error == GUNYAH_ERROR_OK) { + memset(resume_data, 0, sizeof(resume_data)); + switch (vcpu_run_resp.state) { + case GUNYAH_VCPU_STATE_READY: + if (need_resched()) + schedule(); + break; + case GUNYAH_VCPU_STATE_POWERED_OFF: + /** + * vcpu might be off because the VM is shut down + * If so, it won't ever run again + */ + if (!gunyah_vcpu_check_system(vcpu)) + goto out; + /** + * Otherwise, another vcpu will turn it on (e.g. + * by PSCI) and hyp sends an interrupt to wake + * Linux up. + */ + fallthrough; + case GUNYAH_VCPU_STATE_EXPECTS_WAKEUP: + ret = wait_for_completion_interruptible( + &vcpu->ready); + /** + * reinitialize completion before next + * hypercall. If we reinitialize after the + * hypercall, interrupt may have already come + * before re-initializing the completion and + * then end up waiting for event that already + * happened. + */ + reinit_completion(&vcpu->ready); + /** + * Check VM status again. 
Completion + * might've come from VM exiting + */ + if (!ret && !gunyah_vcpu_check_system(vcpu)) + goto out; + break; + case GUNYAH_VCPU_STATE_BLOCKED: + schedule(); + break; + case GUNYAH_VCPU_ADDRSPACE_VMMIO_READ: + case GUNYAH_VCPU_ADDRSPACE_VMMIO_WRITE: + gunyah_handle_mmio(vcpu, &vcpu_run_resp); + goto out; + case GUNYAH_VCPU_ADDRSPACE_PAGE_FAULT: + gunyah_handle_page_fault(vcpu, &vcpu_run_resp); + goto out; + default: + pr_warn_ratelimited( + "Unknown vCPU state: %llx\n", + vcpu_run_resp.sized_state); + schedule(); + break; + } + } else if (gunyah_error == GUNYAH_ERROR_RETRY) { + schedule(); + } else { + ret = gunyah_error_remap(gunyah_error); + } + } + +out: + mutex_unlock(&vcpu->run_lock); + + if (signal_pending(current)) + return -ERESTARTSYS; + + return ret; +} + +static long gunyah_vcpu_ioctl(struct file *filp, unsigned int cmd, + unsigned long arg) +{ + struct gunyah_vcpu *vcpu = filp->private_data; + long ret = -ENOTTY; + + switch (cmd) { + case GUNYAH_VCPU_RUN: + ret = gunyah_vcpu_run(vcpu); + break; + case GUNYAH_VCPU_MMAP_SIZE: + ret = PAGE_SIZE; + break; + default: + break; + } + return ret; +} + +static int gunyah_vcpu_release(struct inode *inode, struct file *filp) +{ + struct gunyah_vcpu *vcpu = filp->private_data; + + gunyah_vm_put(vcpu->ghvm); + kref_put(&vcpu->kref, vcpu_release); + return 0; +} + +static vm_fault_t gunyah_vcpu_fault(struct vm_fault *vmf) +{ + struct gunyah_vcpu *vcpu = vmf->vma->vm_file->private_data; + struct page *page = NULL; + + if (vmf->pgoff == 0) + page = virt_to_page(vcpu->vcpu_run); + + get_page(page); + vmf->page = page; + return 0; +} + +static const struct vm_operations_struct gunyah_vcpu_ops = { + .fault = gunyah_vcpu_fault, +}; + +static int gunyah_vcpu_mmap(struct file *file, struct vm_area_struct *vma) +{ + vma->vm_ops = &gunyah_vcpu_ops; + return 0; +} + +static const struct file_operations gunyah_vcpu_fops = { + .owner = THIS_MODULE, + .unlocked_ioctl = gunyah_vcpu_ioctl, + .release = gunyah_vcpu_release, + .llseek = noop_llseek, + .mmap = gunyah_vcpu_mmap, +}; + +static bool gunyah_vcpu_populate(struct gunyah_vm_resource_ticket *ticket, + struct gunyah_resource *ghrsc) +{ + struct gunyah_vcpu *vcpu = + container_of(ticket, struct gunyah_vcpu, ticket); + int ret; + + mutex_lock(&vcpu->run_lock); + if (vcpu->rsc) { + pr_warn("vcpu%d already got a Gunyah resource. 
Check if multiple resources with same label were configured.\n", + vcpu->ticket.label); + ret = -EEXIST; + goto out; + } + + vcpu->rsc = ghrsc; + init_completion(&vcpu->ready); + + ret = request_irq(vcpu->rsc->irq, gunyah_vcpu_irq_handler, + IRQF_TRIGGER_RISING, "gunyah_vcpu", vcpu); + if (ret) + pr_warn("Failed to request vcpu irq %d: %d", vcpu->rsc->irq, + ret); + + enable_irq_wake(vcpu->rsc->irq); + +out: + mutex_unlock(&vcpu->run_lock); + return !ret; +} + +static void gunyah_vcpu_unpopulate(struct gunyah_vm_resource_ticket *ticket, + struct gunyah_resource *ghrsc) +{ + struct gunyah_vcpu *vcpu = + container_of(ticket, struct gunyah_vcpu, ticket); + + vcpu->vcpu_run->immediate_exit = true; + complete_all(&vcpu->ready); + mutex_lock(&vcpu->run_lock); + free_irq(vcpu->rsc->irq, vcpu); + vcpu->rsc = NULL; + mutex_unlock(&vcpu->run_lock); +} + +static long gunyah_vcpu_bind(struct gunyah_vm_function_instance *f) +{ + struct gunyah_fn_vcpu_arg *arg = f->argp; + struct gunyah_vcpu *vcpu; + char name[MAX_VCPU_NAME]; + struct file *file; + struct page *page; + int fd; + long r; + + if (f->arg_size != sizeof(*arg)) + return -EINVAL; + + vcpu = kzalloc(sizeof(*vcpu), GFP_KERNEL); + if (!vcpu) + return -ENOMEM; + + vcpu->f = f; + f->data = vcpu; + mutex_init(&vcpu->run_lock); + kref_init(&vcpu->kref); + + page = alloc_page(GFP_KERNEL | __GFP_ZERO); + if (!page) { + r = -ENOMEM; + goto err_destroy_vcpu; + } + vcpu->vcpu_run = page_address(page); + + vcpu->ticket.resource_type = GUNYAH_RESOURCE_TYPE_VCPU; + vcpu->ticket.label = arg->id; + vcpu->ticket.owner = THIS_MODULE; + vcpu->ticket.populate = gunyah_vcpu_populate; + vcpu->ticket.unpopulate = gunyah_vcpu_unpopulate; + + r = gunyah_vm_add_resource_ticket(f->ghvm, &vcpu->ticket); + if (r) + goto err_destroy_page; + + if (!gunyah_vm_get(f->ghvm)) { + r = -ENODEV; + goto err_remove_resource_ticket; + } + vcpu->ghvm = f->ghvm; + + vcpu->nb.notifier_call = gunyah_vcpu_rm_notification; + /** + * Ensure we run after the vm_mgr handles the notification and does + * any necessary state changes. 
+ */ + vcpu->nb.priority = -1; + r = gunyah_rm_notifier_register(f->rm, &vcpu->nb); + if (r) + goto err_put_gunyah_vm; + + kref_get(&vcpu->kref); + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) { + r = fd; + goto err_notifier; + } + + snprintf(name, sizeof(name), "gh-vcpu:%u", vcpu->ticket.label); + file = anon_inode_getfile(name, &gunyah_vcpu_fops, vcpu, O_RDWR); + if (IS_ERR(file)) { + r = PTR_ERR(file); + goto err_put_fd; + } + + fd_install(fd, file); + + return fd; +err_put_fd: + put_unused_fd(fd); +err_notifier: + gunyah_rm_notifier_unregister(f->rm, &vcpu->nb); +err_put_gunyah_vm: + gunyah_vm_put(vcpu->ghvm); +err_remove_resource_ticket: + gunyah_vm_remove_resource_ticket(f->ghvm, &vcpu->ticket); +err_destroy_page: + free_page((unsigned long)vcpu->vcpu_run); +err_destroy_vcpu: + kfree(vcpu); + return r; +} + +static void gunyah_vcpu_unbind(struct gunyah_vm_function_instance *f) +{ + struct gunyah_vcpu *vcpu = f->data; + + gunyah_rm_notifier_unregister(f->rm, &vcpu->nb); + gunyah_vm_remove_resource_ticket(vcpu->ghvm, &vcpu->ticket); + vcpu->f = NULL; + + kref_put(&vcpu->kref, vcpu_release); +} + +static bool gunyah_vcpu_compare(const struct gunyah_vm_function_instance *f, + const void *arg, size_t size) +{ + const struct gunyah_fn_vcpu_arg *instance = f->argp, *other = arg; + + if (sizeof(*other) != size) + return false; + + return instance->id == other->id; +} + +DECLARE_GUNYAH_VM_FUNCTION_INIT(vcpu, GUNYAH_FN_VCPU, 1, gunyah_vcpu_bind, + gunyah_vcpu_unbind, gunyah_vcpu_compare); +MODULE_DESCRIPTION("Gunyah vCPU Function"); +MODULE_LICENSE("GPL"); diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 5d4f413f7a76..db3d1d18ccb8 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -302,6 +302,11 @@ static int gunyah_vm_rm_notification_exited(struct gunyah_vm *ghvm, void *data) down_write(&ghvm->status_lock); ghvm->vm_status = GUNYAH_RM_VM_STATUS_EXITED; + ghvm->exit_info.type = le16_to_cpu(payload->exit_type); + ghvm->exit_info.reason_size = le32_to_cpu(payload->exit_reason_size); + memcpy(&ghvm->exit_info.reason, payload->exit_reason, + min(GUNYAH_VM_MAX_EXIT_REASON_SIZE, + ghvm->exit_info.reason_size)); up_write(&ghvm->status_lock); wake_up(&ghvm->vm_status_wait); diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 190a95ee8da6..8c5b94101b2c 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -28,6 +28,7 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * @vm_status: Current state of the VM, as last reported by RM * @vm_status_wait: Wait queue for status @vm_status changes * @status_lock: Serializing state transitions + * @exit_info: Breadcrumbs why VM is not running anymore * @kref: Reference counter for VM functions * @fn_lock: Serialization addition of functions * @functions: List of &struct gunyah_vm_function_instance that have been @@ -48,6 +49,7 @@ struct gunyah_vm { enum gunyah_rm_vm_status vm_status; wait_queue_head_t vm_status_wait; struct rw_semaphore status_lock; + struct gunyah_vm_exit_info exit_info; struct kref kref; struct mutex fn_lock; diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index 1b7cb5fde70a..46f7d3aa61d0 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -25,8 +25,33 @@ */ #define GUNYAH_VM_START _IO(GUNYAH_IOCTL_TYPE, 0x3) +/** + * enum gunyah_fn_type - Valid types of Gunyah VM functions + * @GUNYAH_FN_VCPU: create a vCPU instance to control a vCPU + * &struct 
gunyah_fn_desc.arg is a pointer to &struct gunyah_fn_vcpu_arg + * Return: file descriptor to manipulate the vcpu. + */ +enum gunyah_fn_type { + GUNYAH_FN_VCPU = 1, +}; + #define GUNYAH_FN_MAX_ARG_SIZE 256 +/** + * struct gunyah_fn_vcpu_arg - Arguments to create a vCPU. + * @id: vcpu id + * + * Create this function with &GUNYAH_VM_ADD_FUNCTION using type &GUNYAH_FN_VCPU. + * + * The vcpu type will register with the VM Manager to expect to control + * vCPU number `vcpu_id`. It returns a file descriptor allowing interaction with + * the vCPU. See the Gunyah vCPU API description sections for interacting with + * the Gunyah vCPU file descriptors. + */ +struct gunyah_fn_vcpu_arg { + __u32 id; +}; + /** * struct gunyah_fn_desc - Arguments to create a VM function * @type: Type of the function. See &enum gunyah_fn_type. @@ -43,4 +68,142 @@ struct gunyah_fn_desc { #define GUNYAH_VM_ADD_FUNCTION _IOW(GUNYAH_IOCTL_TYPE, 0x4, struct gunyah_fn_desc) #define GUNYAH_VM_REMOVE_FUNCTION _IOW(GUNYAH_IOCTL_TYPE, 0x7, struct gunyah_fn_desc) +/* + * ioctls for vCPU fds + */ + +/** + * enum gunyah_vm_status - Stores status reason why VM is not runnable (exited). + * @GUNYAH_VM_STATUS_LOAD_FAILED: VM didn't start because it couldn't be loaded. + * @GUNYAH_VM_STATUS_EXITED: VM requested shutdown/reboot. + * Use &struct gunyah_vm_exit_info.reason for further details. + * @GUNYAH_VM_STATUS_CRASHED: VM state is unknown and has crashed. + */ +enum gunyah_vm_status { + GUNYAH_VM_STATUS_LOAD_FAILED = 1, + GUNYAH_VM_STATUS_EXITED = 2, + GUNYAH_VM_STATUS_CRASHED = 3, +}; + +/* + * Gunyah presently sends max 4 bytes of exit_reason. + * If that changes, this macro can be safely increased without breaking + * userspace so long as struct gunyah_vcpu_run < PAGE_SIZE. + */ +#define GUNYAH_VM_MAX_EXIT_REASON_SIZE 8u + +/** + * struct gunyah_vm_exit_info - Reason for VM exit as reported by Gunyah + * See Gunyah documentation for values. + * @type: Describes how VM exited + * @padding: padding bytes + * @reason_size: Number of bytes valid for `reason` + * @reason: See Gunyah documentation for interpretation. Note: these values are + * not interpreted by Linux and need to be converted from little-endian + * as applicable. + */ +struct gunyah_vm_exit_info { + __u16 type; + __u16 padding; + __u32 reason_size; + __u8 reason[GUNYAH_VM_MAX_EXIT_REASON_SIZE]; +}; + +/** + * enum gunyah_vcpu_exit - Stores reason why &GUNYAH_VCPU_RUN ioctl recently exited with status 0 + * @GUNYAH_VCPU_EXIT_UNKNOWN: Not used, status != 0 + * @GUNYAH_VCPU_EXIT_MMIO: vCPU performed a read or write that could not be handled + * by hypervisor or Linux. Use @struct gunyah_vcpu_run.mmio for + * details of the read/write. + * @GUNYAH_VCPU_EXIT_STATUS: vCPU not able to run because the VM has exited. + * Use @struct gunyah_vcpu_run.status for why VM has exited. + * @GUNYAH_VCPU_EXIT_PAGE_FAULT: vCPU tried to execute an instruction at an address + * for which memory hasn't been provided. Use + * @struct gunyah_vcpu_run.page_fault for details. + */ +enum gunyah_vcpu_exit { + GUNYAH_VCPU_EXIT_UNKNOWN, + GUNYAH_VCPU_EXIT_MMIO, + GUNYAH_VCPU_EXIT_STATUS, + GUNYAH_VCPU_EXIT_PAGE_FAULT, +}; + +/** + * enum gunyah_vcpu_resume_action - Provide resume action after an MMIO or page fault + * @GUNYAH_VCPU_RESUME_HANDLED: The mmio or page fault has been handled, continue + * normal operation of vCPU + * @GUNYAH_VCPU_RESUME_FAULT: The mmio or page fault could not be satisfied and + * inject the original fault back to the guest. 
+ * @GUNYAH_VCPU_RESUME_RETRY: Retry the faulting instruction. Perhaps you added + * memory binding to satisfy the request. + */ +enum gunyah_vcpu_resume_action { + GUNYAH_VCPU_RESUME_HANDLED = 0, + GUNYAH_VCPU_RESUME_FAULT, + GUNYAH_VCPU_RESUME_RETRY, +}; + +/** + * struct gunyah_vcpu_run - Application code obtains a pointer to the gunyah_vcpu_run + * structure by mmap()ing a vcpu fd. + * @immediate_exit: polled when scheduling the vcpu. If set, immediately returns -EINTR. + * @padding: padding bytes + * @exit_reason: Set when GUNYAH_VCPU_RUN returns successfully and gives reason why + * GUNYAH_VCPU_RUN has stopped running the vCPU. See &enum gunyah_vcpu_exit. + * @mmio: Used when exit_reason == GUNYAH_VCPU_EXIT_MMIO + * The guest has faulted on an memory-mapped I/O that + * couldn't be satisfied by gunyah. + * @mmio.phys_addr: Address guest tried to access + * @mmio.data: the value that was written if `is_write == 1`. Filled by + * user for reads (`is_write == 0`). + * @mmio.len: Length of write. Only the first `len` bytes of `data` + * are considered by Gunyah. + * @mmio.is_write: 1 if VM tried to perform a write, 0 for a read + * @mmio.resume_action: See &enum gunyah_vcpu_resume_action + * @status: Used when exit_reason == GUNYAH_VCPU_EXIT_STATUS. + * The guest VM is no longer runnable. This struct informs why. + * @status.status: See &enum gunyah_vm_status for possible values + * @status.exit_info: Used when status == GUNYAH_VM_STATUS_EXITED + * @page_fault: Used when EXIT_REASON == GUNYAH_VCPU_EXIT_PAGE_FAULT + * The guest has faulted on a region that can only be provided + * by mapping memory at phys_addr. + * @page_fault.phys_addr: Address guest tried to access. + * @page_fault.attempt: Error code why Linux wasn't able to handle fault itself + * Typically, if no memory was mapped: -ENOENT, + * If permission bits weren't what the VM wanted: -EPERM + * @page_fault.resume_action: See &enum gunyah_vcpu_resume_action + */ +struct gunyah_vcpu_run { + /* in */ + __u8 immediate_exit; + __u8 padding[7]; + + /* out */ + __u32 exit_reason; + + union { + struct { + __u64 phys_addr; + __u8 data[8]; + __u32 len; + __u8 is_write; + __u8 resume_action; + } mmio; + + struct { + enum gunyah_vm_status status; + struct gunyah_vm_exit_info exit_info; + } status; + + struct { + __u64 phys_addr; + __s32 attempt; + __u8 resume_action; + } page_fault; + }; +}; + +#define GUNYAH_VCPU_RUN _IO(GUNYAH_IOCTL_TYPE, 0x5) +#define GUNYAH_VCPU_MMAP_SIZE _IO(GUNYAH_IOCTL_TYPE, 0x6) + #endif From patchwork Tue Jan 9 19:37:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761568 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8B0903F8EA; Tue, 9 Jan 2024 19:38:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="fFvd4ED5" Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409GkZgt023034; Tue, 9 Jan 2024 19:38:01 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=MHmljaCamj4g1Xvfy1VeAwN2zHpA+VIUs5qGVgdABHg =; b=fFvd4ED5aSMYOK7ltpXwcorn9oqju+JMsw8aaINmJz40ivLO8k53yCIaODM EwJVf3DNVRuqbSA4j4oncpyHGG+jq0mSiVpDXsPB9ARZizbVmPmcwWQmpF7vO+h0 JkVNiL9x9chJzS2HvuNX4E642zkp9IjaDxvlz9GpYuCYGDpHJPOvNuoSoEMAXfvr kawPZPa+7jGGaGltRFm4kjE5npv/sCofiKH1d1D3jkNxcx+5rszOthda/7mIWEFW SrKYiCIrXY8TPqZrRuVDlESHkjbJLCzTt5CkEZuH3yWL+IXaCHsPIvxITMAv57Xb K4OlOziWJ7yj0FCS4/Kmfb/ra8Q== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh85t0pn5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:01 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc05c011947 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:00 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:59 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:54 -0800 Subject: [PATCH v16 16/34] gunyah: Add hypercalls for demand paging Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-16-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: BYb_tui5OJ0pNsPXqymCtBC5jtYNPObE X-Proofpoint-GUID: BYb_tui5OJ0pNsPXqymCtBC5jtYNPObE X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 bulkscore=0 malwarescore=0 suspectscore=0 mlxlogscore=409 clxscore=1015 priorityscore=1501 adultscore=0 impostorscore=0 phishscore=0 spamscore=0 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Three hypercalls are needed to support demand paging. In create page mappings for a virtual machine's address space, memory must be moved to a memory extent that is allowed to be mapped into that address space. Memory extents are Gunyah's implementation of access control. Once the memory is moved to the proper memory extent, the memory can be mapped into the VM's address space. Implement the bindings to perform those hypercalls. 
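The intended composition of the hypercalls is roughly the following. This is an illustrative sketch only: the capability IDs are placeholders obtained elsewhere from the resource manager, access rights are hard-coded, and the driver-side user of these bindings is added in a later patch.

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/gunyah.h>
#include <asm/gunyah.h>

static enum gunyah_error demand_map_pages(u64 addrspace_capid,
					  u64 host_extent_capid,
					  u64 guest_extent_capid,
					  u64 gpa, u64 pa, u64 size)
{
	u32 donate_opts = FIELD_PREP(GUNYAH_MEMEXTENT_OPTION_TYPE_MASK,
				     GUNYAH_MEMEXTENT_DONATE_TO_PROTECTED);
	u32 extent_attrs =
		FIELD_PREP(GUNYAH_MEMEXTENT_MAPPING_USER_ACCESS,
			   GUNYAH_PAGETABLE_ACCESS_RWX) |
		FIELD_PREP(GUNYAH_MEMEXTENT_MAPPING_KERNEL_ACCESS,
			   GUNYAH_PAGETABLE_ACCESS_RWX) |
		FIELD_PREP(GUNYAH_MEMEXTENT_MAPPING_TYPE,
			   ARCH_GUNYAH_DEFAULT_MEMTYPE);
	u32 map_flags = BIT(GUNYAH_ADDRSPACE_MAP_FLAG_PARTIAL);
	enum gunyah_error err;

	/* Move the pages from the host's extent into the guest's extent */
	err = gunyah_hypercall_memextent_donate(donate_opts, host_extent_capid,
						guest_extent_capid, pa, size);
	if (err != GUNYAH_ERROR_OK)
		return err;

	/* Map the now-donated range into the guest's address space at @gpa */
	return gunyah_hypercall_addrspace_map(addrspace_capid,
					      guest_extent_capid, gpa,
					      extent_attrs, map_flags,
					      pa, size);
}

Reclaiming is roughly the inverse: gunyah_hypercall_addrspace_unmap() followed by a donate back toward the host's extent.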
Signed-off-by: Elliot Berman --- arch/arm64/gunyah/gunyah_hypercall.c | 87 ++++++++++++++++++++++++++++++++++++ arch/arm64/include/asm/gunyah.h | 21 +++++++++ include/linux/gunyah.h | 56 +++++++++++++++++++++++ 3 files changed, 164 insertions(+) diff --git a/arch/arm64/gunyah/gunyah_hypercall.c b/arch/arm64/gunyah/gunyah_hypercall.c index fee21df42c17..38403dc28c66 100644 --- a/arch/arm64/gunyah/gunyah_hypercall.c +++ b/arch/arm64/gunyah/gunyah_hypercall.c @@ -39,6 +39,9 @@ EXPORT_SYMBOL_GPL(arch_is_gunyah_guest); #define GUNYAH_HYPERCALL_HYP_IDENTIFY GUNYAH_HYPERCALL(0x8000) #define GUNYAH_HYPERCALL_MSGQ_SEND GUNYAH_HYPERCALL(0x801B) #define GUNYAH_HYPERCALL_MSGQ_RECV GUNYAH_HYPERCALL(0x801C) +#define GUNYAH_HYPERCALL_ADDRSPACE_MAP GUNYAH_HYPERCALL(0x802B) +#define GUNYAH_HYPERCALL_ADDRSPACE_UNMAP GUNYAH_HYPERCALL(0x802C) +#define GUNYAH_HYPERCALL_MEMEXTENT_DONATE GUNYAH_HYPERCALL(0x8061) #define GUNYAH_HYPERCALL_VCPU_RUN GUNYAH_HYPERCALL(0x8065) /* clang-format on */ @@ -114,6 +117,90 @@ enum gunyah_error gunyah_hypercall_msgq_recv(u64 capid, void *buff, size_t size, } EXPORT_SYMBOL_GPL(gunyah_hypercall_msgq_recv); +/** + * gunyah_hypercall_addrspace_map() - Add memory to an address space from a memory extent + * @capid: Address space capability ID + * @extent_capid: Memory extent capability ID + * @vbase: location in address space + * @extent_attrs: Attributes for the memory + * @flags: Flags for address space mapping + * @offset: Offset into memory extent (physical address of memory) + * @size: Size of memory to map; must be page-aligned + */ +enum gunyah_error gunyah_hypercall_addrspace_map(u64 capid, u64 extent_capid, u64 vbase, + u32 extent_attrs, u32 flags, u64 offset, u64 size) +{ + struct arm_smccc_1_2_regs args = { + .a0 = GUNYAH_HYPERCALL_ADDRSPACE_MAP, + .a1 = capid, + .a2 = extent_capid, + .a3 = vbase, + .a4 = extent_attrs, + .a5 = flags, + .a6 = offset, + .a7 = size, + /* C language says this will be implictly zero. Gunyah requires 0, so be explicit */ + .a8 = 0, + }; + struct arm_smccc_1_2_regs res; + + arm_smccc_1_2_hvc(&args, &res); + + return res.a0; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_addrspace_map); + +/** + * gunyah_hypercall_addrspace_unmap() - Remove memory from an address space + * @capid: Address space capability ID + * @extent_capid: Memory extent capability ID + * @vbase: location in address space + * @flags: Flags for address space mapping + * @offset: Offset into memory extent (physical address of memory) + * @size: Size of memory to map; must be page-aligned + */ +enum gunyah_error gunyah_hypercall_addrspace_unmap(u64 capid, u64 extent_capid, u64 vbase, + u32 flags, u64 offset, u64 size) +{ + struct arm_smccc_1_2_regs args = { + .a0 = GUNYAH_HYPERCALL_ADDRSPACE_UNMAP, + .a1 = capid, + .a2 = extent_capid, + .a3 = vbase, + .a4 = flags, + .a5 = offset, + .a6 = size, + /* C language says this will be implictly zero. 
Gunyah requires 0, so be explicit */ + .a7 = 0, + }; + struct arm_smccc_1_2_regs res; + + arm_smccc_1_2_hvc(&args, &res); + + return res.a0; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_addrspace_unmap); + +/** + * gunyah_hypercall_memextent_donate() - Donate memory from one memory extent to another + * @options: donate options + * @from_capid: Memory extent capability ID to donate from + * @to_capid: Memory extent capability ID to donate to + * @offset: Offset into memory extent (physical address of memory) + * @size: Size of memory to donate; must be page-aligned + */ +enum gunyah_error gunyah_hypercall_memextent_donate(u32 options, u64 from_capid, u64 to_capid, + u64 offset, u64 size) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GUNYAH_HYPERCALL_MEMEXTENT_DONATE, options, from_capid, to_capid, + offset, size, 0, &res); + + return res.a0; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_memextent_donate); + /** * gunyah_hypercall_vcpu_run() - Donate CPU time to a vcpu * @capid: capability ID of the vCPU to run diff --git a/arch/arm64/include/asm/gunyah.h b/arch/arm64/include/asm/gunyah.h index 0cd3debe22b6..4adf24977fd1 100644 --- a/arch/arm64/include/asm/gunyah.h +++ b/arch/arm64/include/asm/gunyah.h @@ -33,4 +33,25 @@ static inline int arch_gunyah_fill_irq_fwspec_params(u32 virq, return 0; } +enum arch_gunyah_memtype { + /* clang-format off */ + GUNYAH_MEMTYPE_DEVICE_nGnRnE = 0, + GUNYAH_DEVICE_nGnRE = 1, + GUNYAH_DEVICE_nGRE = 2, + GUNYAH_DEVICE_GRE = 3, + + GUNYAH_NORMAL_NC = 0b0101, + GUNYAH_NORMAL_ONC_IWT = 0b0110, + GUNYAH_NORMAL_ONC_IWB = 0b0111, + GUNYAH_NORMAL_OWT_INC = 0b1001, + GUNYAH_NORMAL_WT = 0b1010, + GUNYAH_NORMAL_OWT_IWB = 0b1011, + GUNYAH_NORMAL_OWB_INC = 0b1101, + GUNYAH_NORMAL_OWB_IWT = 0b1110, + GUNYAH_NORMAL_WB = 0b1111, + /* clang-format on */ +}; + +#define ARCH_GUNYAH_DEFAULT_MEMTYPE GUNYAH_NORMAL_WB + #endif diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 8405b2faf774..a517c5c33a75 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -274,6 +274,62 @@ enum gunyah_error gunyah_hypercall_msgq_send(u64 capid, size_t size, void *buff, enum gunyah_error gunyah_hypercall_msgq_recv(u64 capid, void *buff, size_t size, size_t *recv_size, bool *ready); +#define GUNYAH_ADDRSPACE_SELF_CAP 0 + +enum gunyah_pagetable_access { + /* clang-format off */ + GUNYAH_PAGETABLE_ACCESS_NONE = 0, + GUNYAH_PAGETABLE_ACCESS_X = 1, + GUNYAH_PAGETABLE_ACCESS_W = 2, + GUNYAH_PAGETABLE_ACCESS_R = 4, + GUNYAH_PAGETABLE_ACCESS_RX = 5, + GUNYAH_PAGETABLE_ACCESS_RW = 6, + GUNYAH_PAGETABLE_ACCESS_RWX = 7, + /* clang-format on */ +}; + +/* clang-format off */ +#define GUNYAH_MEMEXTENT_MAPPING_USER_ACCESS GENMASK_ULL(2, 0) +#define GUNYAH_MEMEXTENT_MAPPING_KERNEL_ACCESS GENMASK_ULL(6, 4) +#define GUNYAH_MEMEXTENT_MAPPING_TYPE GENMASK_ULL(23, 16) +/* clang-format on */ + +enum gunyah_memextent_donate_type { + /* clang-format off */ + GUNYAH_MEMEXTENT_DONATE_TO_CHILD = 0, + GUNYAH_MEMEXTENT_DONATE_TO_PARENT = 1, + GUNYAH_MEMEXTENT_DONATE_TO_SIBLING = 2, + GUNYAH_MEMEXTENT_DONATE_TO_PROTECTED = 3, + GUNYAH_MEMEXTENT_DONATE_FROM_PROTECTED = 4, + /* clang-format on */ +}; + +enum gunyah_addrspace_map_flag_bits { + /* clang-format off */ + GUNYAH_ADDRSPACE_MAP_FLAG_PARTIAL = 0, + GUNYAH_ADDRSPACE_MAP_FLAG_PRIVATE = 1, + GUNYAH_ADDRSPACE_MAP_FLAG_VMMIO = 2, + GUNYAH_ADDRSPACE_MAP_FLAG_NOSYNC = 31, + /* clang-format on */ +}; + +enum gunyah_error gunyah_hypercall_addrspace_map(u64 capid, u64 extent_capid, + u64 vbase, u32 extent_attrs, + u32 flags, u64 offset, + u64 size); +enum 
gunyah_error gunyah_hypercall_addrspace_unmap(u64 capid, u64 extent_capid, + u64 vbase, u32 flags, + u64 offset, u64 size); + +/* clang-format off */ +#define GUNYAH_MEMEXTENT_OPTION_TYPE_MASK GENMASK_ULL(7, 0) +#define GUNYAH_MEMEXTENT_OPTION_NOSYNC BIT(31) +/* clang-format on */ + +enum gunyah_error gunyah_hypercall_memextent_donate(u32 options, u64 from_capid, + u64 to_capid, u64 offset, + u64 size); + struct gunyah_hypercall_vcpu_run_resp { union { enum { From patchwork Tue Jan 9 19:37:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761089 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 25425405C5; Tue, 9 Jan 2024 19:38:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="Zo9chhnS" Received: from pps.filterd (m0279865.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409J7Wtr027734; Tue, 9 Jan 2024 19:38:01 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=ot2Rc6FWYqawtwAbAQxff+CJFZOjsl6S/t9tlgViSq4 =; b=Zo9chhnSm40/QbbTZ9T3vvSYbbmWNVbM7fMd5RB8IB6vR3+ZlIfKAGCoG6C 0uHKIVHrZ11CGbnUyXPmlnzXP7CjmEXgwvg34yHZaAgHa/At3sBI5Uq04qz33RKD jL9t4elpnYHd6+aiMR/ceFosZpLnMv2HFCH4LTuy1KfN6Kn4JQJIfc9ti3RLZB9B PCA9PBZXnKj0htcVLnfGOG3pMvxR8rbD4Jn/jENhB6khhqwsPbwWAgJiUmxfABlG 26zb18mXIeU/+xyYTSxbI68ZPe4/xLRRteAFR+eUWcslZtELzNUHiSSO049HSgjD 5Ot61cw21zAB9X/D7KBEbOLMNjQ== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9vfgdrp-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:01 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc0T0011951 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:00 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:37:59 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:55 -0800 Subject: [PATCH v16 17/34] gunyah: rsc_mgr: Add memory parcel RPC Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-17-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , 
"Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: yuWyby_01twjbm-t5tFkMTfKW7_PGkc_ X-Proofpoint-ORIG-GUID: yuWyby_01twjbm-t5tFkMTfKW7_PGkc_ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 lowpriorityscore=0 suspectscore=0 clxscore=1015 priorityscore=1501 impostorscore=0 adultscore=0 bulkscore=0 mlxlogscore=999 spamscore=0 phishscore=0 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 In a Gunyah hypervisor system using the Gunyah Resource Manager, the "standard" unit of donating, lending and sharing memory is called a memory parcel (memparcel). A memparcel is an abstraction used by the resource manager for securely managing donating, lending and sharing memory, which may be physically and virtually fragmented, without dealing directly with physical memory addresses. Memparcels are created and managed through the RM RPC functions for lending, sharing and reclaiming memory from VMs. When creating a new VM the initial VM memory containing the VM image and the VM's device tree blob must be provided as a memparcel. The memparcel must be created using the RM RPC for lending and mapping the memory to the VM. Signed-off-by: Elliot Berman --- drivers/virt/gunyah/rsc_mgr.h | 9 ++ drivers/virt/gunyah/rsc_mgr_rpc.c | 231 ++++++++++++++++++++++++++++++++++++++ include/linux/gunyah.h | 43 +++++++ 3 files changed, 283 insertions(+) diff --git a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h index 52711de77bb7..ec8ad8149e8e 100644 --- a/drivers/virt/gunyah/rsc_mgr.h +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -10,6 +10,7 @@ #include #define GUNYAH_VMID_INVAL U16_MAX +#define GUNYAH_MEM_HANDLE_INVAL U32_MAX struct gunyah_rm; @@ -58,6 +59,12 @@ struct gunyah_rm_vm_status_payload { __le16 app_status; } __packed; +/* RPC Calls */ +int gunyah_rm_mem_share(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *parcel); +int gunyah_rm_mem_reclaim(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *parcel); + int gunyah_rm_alloc_vmid(struct gunyah_rm *rm, u16 vmid); int gunyah_rm_dealloc_vmid(struct gunyah_rm *rm, u16 vmid); int gunyah_rm_vm_reset(struct gunyah_rm *rm, u16 vmid); @@ -99,6 +106,8 @@ struct gunyah_rm_hyp_resources { int gunyah_rm_get_hyp_resources(struct gunyah_rm *rm, u16 vmid, struct gunyah_rm_hyp_resources **resources); +int gunyah_rm_get_vmid(struct gunyah_rm *rm, u16 *vmid); + struct gunyah_resource * gunyah_rm_alloc_resource(struct gunyah_rm *rm, struct gunyah_rm_hyp_resource *hyp_resource); diff --git a/drivers/virt/gunyah/rsc_mgr_rpc.c b/drivers/virt/gunyah/rsc_mgr_rpc.c index 141ce0145e91..bc44bde990ce 100644 --- a/drivers/virt/gunyah/rsc_mgr_rpc.c +++ b/drivers/virt/gunyah/rsc_mgr_rpc.c @@ -5,6 +5,12 @@ #include "rsc_mgr.h" +/* Message IDs: Memory Management */ +#define GUNYAH_RM_RPC_MEM_LEND 0x51000012 +#define GUNYAH_RM_RPC_MEM_SHARE 0x51000013 +#define GUNYAH_RM_RPC_MEM_RECLAIM 0x51000015 +#define GUNYAH_RM_RPC_MEM_APPEND 0x51000018 + /* Message IDs: VM Management */ /* clang-format off */ #define GUNYAH_RM_RPC_VM_ALLOC_VMID 
0x56000001 @@ -15,6 +21,7 @@ #define GUNYAH_RM_RPC_VM_CONFIG_IMAGE 0x56000009 #define GUNYAH_RM_RPC_VM_INIT 0x5600000B #define GUNYAH_RM_RPC_VM_GET_HYP_RESOURCES 0x56000020 +#define GUNYAH_RM_RPC_VM_GET_VMID 0x56000024 /* clang-format on */ struct gunyah_rm_vm_common_vmid_req { @@ -22,6 +29,48 @@ struct gunyah_rm_vm_common_vmid_req { __le16 _padding; } __packed; +/* Call: MEM_LEND, MEM_SHARE */ +#define GUNYAH_RM_MAX_MEM_ENTRIES 512 + +#define GUNYAH_MEM_SHARE_REQ_FLAGS_APPEND BIT(1) + +struct gunyah_rm_mem_share_req_header { + u8 mem_type; + u8 _padding0; + u8 flags; + u8 _padding1; + __le32 label; +} __packed; + +struct gunyah_rm_mem_share_req_acl_section { + __le32 n_entries; + struct gunyah_rm_mem_acl_entry entries[]; +} __packed; + +struct gunyah_rm_mem_share_req_mem_section { + __le16 n_entries; + __le16 _padding; + struct gunyah_rm_mem_entry entries[]; +} __packed; + +/* Call: MEM_RELEASE */ +struct gunyah_rm_mem_release_req { + __le32 mem_handle; + u8 flags; /* currently not used */ + u8 _padding0; + __le16 _padding1; +} __packed; + +/* Call: MEM_APPEND */ +#define GUNYAH_MEM_APPEND_REQ_FLAGS_END BIT(0) + +struct gunyah_rm_mem_append_req_header { + __le32 mem_handle; + u8 flags; + u8 _padding0; + __le16 _padding1; +} __packed; + /* Call: VM_ALLOC */ struct gunyah_rm_vm_alloc_vmid_resp { __le16 vmid; @@ -66,6 +115,159 @@ static int gunyah_rm_common_vmid_call(struct gunyah_rm *rm, u32 message_id, NULL, NULL); } +static int gunyah_rm_mem_append(struct gunyah_rm *rm, u32 mem_handle, + struct gunyah_rm_mem_entry *entries, + size_t n_entries) +{ + struct gunyah_rm_mem_append_req_header *req __free(kfree) = NULL; + struct gunyah_rm_mem_share_req_mem_section *mem; + int ret = 0; + size_t n; + + req = kzalloc(sizeof(*req) + struct_size(mem, entries, GUNYAH_RM_MAX_MEM_ENTRIES), + GFP_KERNEL); + if (!req) + return -ENOMEM; + + req->mem_handle = cpu_to_le32(mem_handle); + mem = (void *)(req + 1); + + while (n_entries) { + req->flags = 0; + if (n_entries > GUNYAH_RM_MAX_MEM_ENTRIES) { + n = GUNYAH_RM_MAX_MEM_ENTRIES; + } else { + req->flags |= GUNYAH_MEM_APPEND_REQ_FLAGS_END; + n = n_entries; + } + + mem->n_entries = cpu_to_le16(n); + memcpy(mem->entries, entries, sizeof(*entries) * n); + + ret = gunyah_rm_call(rm, GUNYAH_RM_RPC_MEM_APPEND, req, + sizeof(*req) + struct_size(mem, entries, n), + NULL, NULL); + if (ret) + break; + + entries += n; + n_entries -= n; + } + + return ret; +} + +/** + * gunyah_rm_mem_share() - Share memory with other virtual machines. + * @rm: Handle to a Gunyah resource manager + * @p: Information about the memory to be shared. + * + * Sharing keeps Linux's access to the memory while the memory parcel is shared. + */ +int gunyah_rm_mem_share(struct gunyah_rm *rm, struct gunyah_rm_mem_parcel *p) +{ + u32 message_id = p->n_acl_entries == 1 ? 
GUNYAH_RM_RPC_MEM_LEND : + GUNYAH_RM_RPC_MEM_SHARE; + size_t msg_size, initial_mem_entries = p->n_mem_entries, resp_size; + struct gunyah_rm_mem_share_req_acl_section *acl; + struct gunyah_rm_mem_share_req_mem_section *mem; + struct gunyah_rm_mem_share_req_header *req_header; + size_t acl_size, mem_size; + u32 *attr_section; + bool need_append = false; + __le32 *resp; + void *msg; + int ret; + + if (!p->acl_entries || !p->n_acl_entries || !p->mem_entries || + !p->n_mem_entries || p->n_acl_entries > U8_MAX || + p->mem_handle != GUNYAH_MEM_HANDLE_INVAL) + return -EINVAL; + + if (initial_mem_entries > GUNYAH_RM_MAX_MEM_ENTRIES) { + initial_mem_entries = GUNYAH_RM_MAX_MEM_ENTRIES; + need_append = true; + } + + acl_size = struct_size(acl, entries, p->n_acl_entries); + mem_size = struct_size(mem, entries, initial_mem_entries); + + /* The format of the message goes: + * request header + * ACL entries (which VMs get what kind of access to this memory parcel) + * Memory entries (list of memory regions to share) + * Memory attributes (currently unused, we'll hard-code the size to 0) + */ + msg_size = sizeof(struct gunyah_rm_mem_share_req_header) + acl_size + + mem_size + + sizeof(u32); /* for memory attributes, currently unused */ + + msg = kzalloc(msg_size, GFP_KERNEL); + if (!msg) + return -ENOMEM; + + req_header = msg; + acl = (void *)req_header + sizeof(*req_header); + mem = (void *)acl + acl_size; + attr_section = (void *)mem + mem_size; + + req_header->mem_type = p->mem_type; + if (need_append) + req_header->flags |= GUNYAH_MEM_SHARE_REQ_FLAGS_APPEND; + req_header->label = cpu_to_le32(p->label); + + acl->n_entries = cpu_to_le32(p->n_acl_entries); + memcpy(acl->entries, p->acl_entries, + flex_array_size(acl, entries, p->n_acl_entries)); + + mem->n_entries = cpu_to_le16(initial_mem_entries); + memcpy(mem->entries, p->mem_entries, + flex_array_size(mem, entries, initial_mem_entries)); + + /* Set n_entries for memory attribute section to 0 */ + *attr_section = 0; + + ret = gunyah_rm_call(rm, message_id, msg, msg_size, (void **)&resp, + &resp_size); + kfree(msg); + + if (ret) + return ret; + + p->mem_handle = le32_to_cpu(*resp); + kfree(resp); + + if (need_append) { + ret = gunyah_rm_mem_append( + rm, p->mem_handle, &p->mem_entries[initial_mem_entries], + p->n_mem_entries - initial_mem_entries); + if (ret) { + gunyah_rm_mem_reclaim(rm, p); + p->mem_handle = GUNYAH_MEM_HANDLE_INVAL; + } + } + + return ret; +} + +/** + * gunyah_rm_mem_reclaim() - Reclaim a memory parcel + * @rm: Handle to a Gunyah resource manager + * @parcel: Information about the memory to be reclaimed. + * + * RM maps the associated memory back into the stage-2 page tables of the owner VM. + */ +int gunyah_rm_mem_reclaim(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *parcel) +{ + struct gunyah_rm_mem_release_req req = { + .mem_handle = cpu_to_le32(parcel->mem_handle), + }; + + return gunyah_rm_call(rm, GUNYAH_RM_RPC_MEM_RECLAIM, &req, sizeof(req), + NULL, NULL); +} + /** * gunyah_rm_alloc_vmid() - Allocate a new VM in Gunyah. Returns the VM identifier. 
* @rm: Handle to a Gunyah resource manager @@ -236,3 +438,32 @@ int gunyah_rm_get_hyp_resources(struct gunyah_rm *rm, u16 vmid, *resources = resp; return 0; } + +/** + * gunyah_rm_get_vmid() - Retrieve VMID of this virtual machine + * @rm: Handle to a Gunyah resource manager + * @vmid: Filled with the VMID of this VM + */ +int gunyah_rm_get_vmid(struct gunyah_rm *rm, u16 *vmid) +{ + static u16 cached_vmid = GUNYAH_VMID_INVAL; + size_t resp_size; + __le32 *resp; + int ret; + + if (cached_vmid != GUNYAH_VMID_INVAL) { + *vmid = cached_vmid; + return 0; + } + + ret = gunyah_rm_call(rm, GUNYAH_RM_RPC_VM_GET_VMID, NULL, 0, + (void **)&resp, &resp_size); + if (ret) + return ret; + + *vmid = cached_vmid = lower_16_bits(le32_to_cpu(*resp)); + kfree(resp); + + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_get_vmid); diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index a517c5c33a75..9065f5758c39 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -156,6 +156,49 @@ int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, void gunyah_vm_remove_resource_ticket(struct gunyah_vm *ghvm, struct gunyah_vm_resource_ticket *ticket); +#define GUNYAH_RM_ACL_X BIT(0) +#define GUNYAH_RM_ACL_W BIT(1) +#define GUNYAH_RM_ACL_R BIT(2) + +struct gunyah_rm_mem_acl_entry { + __le16 vmid; + u8 perms; + u8 reserved; +} __packed; + +struct gunyah_rm_mem_entry { + __le64 phys_addr; + __le64 size; +} __packed; + +enum gunyah_rm_mem_type { + GUNYAH_RM_MEM_TYPE_NORMAL = 0, + GUNYAH_RM_MEM_TYPE_IO = 1, +}; + +/* + * struct gunyah_rm_mem_parcel - Info about memory to be lent/shared/donated/reclaimed + * @mem_type: The type of memory: normal (DDR) or IO + * @label: An client-specified identifier which can be used by the other VMs to identify the purpose + * of the memory parcel. + * @n_acl_entries: Count of the number of entries in the @acl_entries array. + * @acl_entries: An array of access control entries. Each entry specifies a VM and what access + * is allowed for the memory parcel. + * @n_mem_entries: Count of the number of entries in the @mem_entries array. + * @mem_entries: An array of regions to be associated with the memory parcel. Addresses should be + * (intermediate) physical addresses from Linux's perspective. 
+ * @mem_handle: On success, filled with memory handle that RM allocates for this memory parcel + */ +struct gunyah_rm_mem_parcel { + enum gunyah_rm_mem_type mem_type; + u32 label; + size_t n_acl_entries; + struct gunyah_rm_mem_acl_entry *acl_entries; + size_t n_mem_entries; + struct gunyah_rm_mem_entry *mem_entries; + u32 mem_handle; +}; + /******************************************************************************/ /* Common arch-independent definitions for Gunyah hypercalls */ #define GUNYAH_CAPID_INVAL U64_MAX From patchwork Tue Jan 9 19:37:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761087 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2E5D640BF3; Tue, 9 Jan 2024 19:38:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="kvkp5i7S" Received: from pps.filterd (m0279862.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409FdruS011442; Tue, 9 Jan 2024 19:38:02 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=mCvTvHnDAvi/Z3r/fbIcWBXtbKnYIFgrJK9nQ4SRZtQ =; b=kvkp5i7ShIhFPfNEBWrulkEmXtEQ+gwLJ/1+msn+NXbqJ5ouKzdjF6fxpq1 ZJaGNz9sV8hh3IAOe4Y16AWq6r+peU2NATbyLQZZh8pSudmPOLCu05eWwABKzTMd vBuTcd4vrXtvDF7+mbTV7XT1CbUGgda+nfyLLbtwdxZlLANUnrQpSRU80Iu5V8W2 /dqd2bUeBASYSTQD/ZDxV2U5ul/PNMaOvkJF+AF2HkRotc8R8frIA+VM5QHLEJDc Ac8w0h80BDHN/bK10KN14A5SeNJ8bB1I3w+qIfE6Yg07mIz10VXz7y8MsTqAb09l W41CNyMhqnpPb7VkVHUtr8bi/Uw== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh3me1858-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:02 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc1X8024580 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:01 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:00 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:56 -0800 Subject: [PATCH v16 18/34] virt: gunyah: Add interfaces to map memory into guest address space Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-18-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor 
Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: JhWjMLDtmjxLfi0BZgG3YWzgoCejQ_Wn X-Proofpoint-GUID: JhWjMLDtmjxLfi0BZgG3YWzgoCejQ_Wn X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 spamscore=0 priorityscore=1501 mlxlogscore=999 lowpriorityscore=0 mlxscore=0 suspectscore=0 impostorscore=0 clxscore=1015 adultscore=0 malwarescore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Gunyah virtual machines are created with either all memory provided at VM creation using the Resource Manager memory parcel construct, or Incrementally by enabling VM demand paging. The Gunyah demand paging support is provided directly by the hypervisor and does not require the creation of resource manager memory parcels. Demand paging allows the host to map/unmap contiguous pages (folios) to a Gunyah memory extent object with the correct rights allowing its contained pages to be mapped into the Guest VM's address space. Memory extents are Gunyah's mechanism for handling system memory abstracting from the direct use of physical page numbers. Memory extents are hypervisor objects and are therefore referenced and access controlled with capabilities. When a virtual machine is configured for demand paging, 3 memory extent and 1 address space capabilities are provided to the host. The resource manager defined policy is such that memory in the "host-only" extent (the default) is private to the host. Memory in the "guest-only" extent can be used for guest private mappings, and are unmapped from the host. Memory in the "host-and-guest-shared" extent can be mapped concurrently and shared between the host and guest VMs. Implement two functions which Linux can use to move memory between the virtual machines: gunyah_provide_folio and gunyah_reclaim_folio. Memory that has been provided to the guest is tracked in a maple tree to be reclaimed later. Folios provided to the virtual machine are assumed to be owned Gunyah stack: the folio's ->private field is used for bookkeeping about whether page is mapped into virtual machine. 
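For illustration, a hypothetical host-side caller (the real callers are wired up separately) might drive these helpers roughly like this; allocation policy, locking and error handling are simplified in this sketch:

#include <linux/gfp.h>
#include <linux/mm.h>

#include "vm_mgr.h"

static int demand_page_gpa(struct gunyah_vm *ghvm, u64 gpa, bool share)
{
	struct folio *folio;
	int ret;

	/* Order-0 allocation keeps the example simple; any folio works */
	folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, 0);
	if (!folio)
		return -ENOMEM;

	/*
	 * Donates the folio to the appropriate extent, maps it at @gpa and
	 * records it in ghvm->mm so gunyah_vm_reclaim_folio() or
	 * gunyah_vm_reclaim_range() can undo the mapping later.
	 */
	ret = gunyah_vm_provide_folio(ghvm, folio, gunyah_gpa_to_gfn(gpa),
				      share, true);
	if (ret)
		folio_put(folio);

	return ret;
}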
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 2 +- drivers/virt/gunyah/vm_mgr.c | 67 +++++++++ drivers/virt/gunyah/vm_mgr.h | 46 ++++++ drivers/virt/gunyah/vm_mgr_mem.c | 309 +++++++++++++++++++++++++++++++++++++++ 4 files changed, 423 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index 3f82af8c5ce7..f3c9507224ee 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o +gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index db3d1d18ccb8..26b6dce49970 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -17,6 +17,16 @@ #include "rsc_mgr.h" #include "vm_mgr.h" +#define GUNYAH_VM_ADDRSPACE_LABEL 0 +// "To" extent for memory private to guest +#define GUNYAH_VM_MEM_EXTENT_GUEST_PRIVATE_LABEL 0 +// "From" extent for memory shared with guest +#define GUNYAH_VM_MEM_EXTENT_HOST_SHARED_LABEL 1 +// "To" extent for memory shared with the guest +#define GUNYAH_VM_MEM_EXTENT_GUEST_SHARED_LABEL 3 +// "From" extent for memory private to guest +#define GUNYAH_VM_MEM_EXTENT_HOST_PRIVATE_LABEL 2 + static DEFINE_XARRAY(gunyah_vm_functions); static void gunyah_vm_put_function(struct gunyah_vm_function *fn) @@ -175,6 +185,16 @@ void gunyah_vm_function_unregister(struct gunyah_vm_function *fn) } EXPORT_SYMBOL_GPL(gunyah_vm_function_unregister); +static bool gunyah_vm_resource_ticket_populate_noop( + struct gunyah_vm_resource_ticket *ticket, struct gunyah_resource *ghrsc) +{ + return true; +} +static void gunyah_vm_resource_ticket_unpopulate_noop( + struct gunyah_vm_resource_ticket *ticket, struct gunyah_resource *ghrsc) +{ +} + int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, struct gunyah_vm_resource_ticket *ticket) { @@ -342,6 +362,17 @@ static void gunyah_vm_stop(struct gunyah_vm *ghvm) ghvm->vm_status != GUNYAH_RM_VM_STATUS_RUNNING); } +static inline void setup_extent_ticket(struct gunyah_vm *ghvm, + struct gunyah_vm_resource_ticket *ticket, + u32 label) +{ + ticket->resource_type = GUNYAH_RESOURCE_TYPE_MEM_EXTENT; + ticket->label = label; + ticket->populate = gunyah_vm_resource_ticket_populate_noop; + ticket->unpopulate = gunyah_vm_resource_ticket_unpopulate_noop; + gunyah_vm_add_resource_ticket(ghvm, ticket); +} + static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) { struct gunyah_vm *ghvm; @@ -365,6 +396,25 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) INIT_LIST_HEAD(&ghvm->resources); INIT_LIST_HEAD(&ghvm->resource_tickets); + mt_init(&ghvm->mm); + + ghvm->addrspace_ticket.resource_type = GUNYAH_RESOURCE_TYPE_ADDR_SPACE; + ghvm->addrspace_ticket.label = GUNYAH_VM_ADDRSPACE_LABEL; + ghvm->addrspace_ticket.populate = + gunyah_vm_resource_ticket_populate_noop; + ghvm->addrspace_ticket.unpopulate = + gunyah_vm_resource_ticket_unpopulate_noop; + gunyah_vm_add_resource_ticket(ghvm, &ghvm->addrspace_ticket); + + setup_extent_ticket(ghvm, &ghvm->host_private_extent_ticket, + GUNYAH_VM_MEM_EXTENT_HOST_PRIVATE_LABEL); + setup_extent_ticket(ghvm, &ghvm->host_shared_extent_ticket, + GUNYAH_VM_MEM_EXTENT_HOST_SHARED_LABEL); + setup_extent_ticket(ghvm, &ghvm->guest_private_extent_ticket, + GUNYAH_VM_MEM_EXTENT_GUEST_PRIVATE_LABEL); + setup_extent_ticket(ghvm, 
&ghvm->guest_shared_extent_ticket, + GUNYAH_VM_MEM_EXTENT_GUEST_SHARED_LABEL); + return ghvm; } @@ -528,6 +578,21 @@ static void _gunyah_vm_put(struct kref *kref) gunyah_vm_stop(ghvm); gunyah_vm_remove_functions(ghvm); + + /* + * If this fails, we're going to lose the memory for good, which is + * BUG_ON-worthy but not unrecoverable (we just lose the memory). + * This call should always succeed, though, because the VM is not + * running and RM will let us reclaim all the memory. + */ + WARN_ON(gunyah_vm_reclaim_range(ghvm, 0, U64_MAX)); + + gunyah_vm_remove_resource_ticket(ghvm, &ghvm->addrspace_ticket); + gunyah_vm_remove_resource_ticket(ghvm, &ghvm->host_shared_extent_ticket); + gunyah_vm_remove_resource_ticket(ghvm, &ghvm->host_private_extent_ticket); + gunyah_vm_remove_resource_ticket(ghvm, &ghvm->guest_shared_extent_ticket); + gunyah_vm_remove_resource_ticket(ghvm, &ghvm->guest_private_extent_ticket); + gunyah_vm_clean_resources(ghvm); if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE && @@ -541,6 +606,8 @@ static void _gunyah_vm_put(struct kref *kref) ghvm->vm_status == GUNYAH_RM_VM_STATUS_RESET); } + + mtree_destroy(&ghvm->mm); + if (ghvm->vm_status > GUNYAH_RM_VM_STATUS_NO_STATE) { gunyah_rm_notifier_unregister(ghvm->rm, &ghvm->nb); diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 8c5b94101b2c..e500f6eb014e 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -8,6 +8,7 @@ #include #include +#include #include #include #include @@ -16,12 +17,42 @@ #include "rsc_mgr.h" +static inline u64 gunyah_gpa_to_gfn(u64 gpa) +{ + return gpa >> PAGE_SHIFT; +} + +static inline u64 gunyah_gfn_to_gpa(u64 gfn) +{ + return gfn << PAGE_SHIFT; +} + long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, unsigned long arg); /** * struct gunyah_vm - Main representation of a Gunyah Virtual machine * @vmid: Gunyah's VMID for this virtual machine + * @mm: A maple tree of all memory that has been mapped to a VM. + * Indices are guest frame numbers; entries are either folios or + * RM mem parcels + * @addrspace_ticket: Resource ticket to the capability for the guest VM's + * address space + * @host_private_extent_ticket: Resource ticket to the capability for our + * memory extent from which to lend private + * memory to the guest + * @host_shared_extent_ticket: Resource ticket to the capability for our + * memory extent from which to share memory + * with the guest. Distinction with + * @host_private_extent_ticket needed for + * current Qualcomm platforms; on non-Qualcomm + * platforms, this is the same capability ID + * @guest_private_extent_ticket: Resource ticket to the capability for + * the guest's memory extent into which + * private memory is lent + * @guest_shared_extent_ticket: Resource ticket to the capability for + * the memory extent that represents + * memory shared with the guest.
* @rm: Pointer to the resource manager struct to make RM calls * @parent: For logging * @nb: Notifier block for RM notifications @@ -43,6 +74,11 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, */ struct gunyah_vm { u16 vmid; + struct maple_tree mm; + struct gunyah_vm_resource_ticket addrspace_ticket, + host_private_extent_ticket, host_shared_extent_ticket, + guest_private_extent_ticket, guest_shared_extent_ticket; + struct gunyah_rm *rm; struct notifier_block nb; @@ -63,4 +99,14 @@ struct gunyah_vm { }; +int gunyah_vm_parcel_to_paged(struct gunyah_vm *ghvm, + struct gunyah_rm_mem_parcel *parcel, u64 gfn, + u64 nr); +int gunyah_vm_reclaim_parcel(struct gunyah_vm *ghvm, + struct gunyah_rm_mem_parcel *parcel, u64 gfn); +int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, + u64 gfn, bool share, bool write); +int gunyah_vm_reclaim_folio(struct gunyah_vm *ghvm, u64 gfn); +int gunyah_vm_reclaim_range(struct gunyah_vm *ghvm, u64 gfn, u64 nr); + #endif diff --git a/drivers/virt/gunyah/vm_mgr_mem.c b/drivers/virt/gunyah/vm_mgr_mem.c new file mode 100644 index 000000000000..d3fcb4514907 --- /dev/null +++ b/drivers/virt/gunyah/vm_mgr_mem.c @@ -0,0 +1,309 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#define pr_fmt(fmt) "gunyah_vm_mgr: " fmt + +#include +#include +#include + +#include "vm_mgr.h" + +#define WRITE_TAG (1 << 0) +#define SHARE_TAG (1 << 1) + +static inline struct gunyah_resource * +__first_resource(struct gunyah_vm_resource_ticket *ticket) +{ + return list_first_entry_or_null(&ticket->resources, + struct gunyah_resource, list); +} + +int gunyah_vm_parcel_to_paged(struct gunyah_vm *ghvm, + struct gunyah_rm_mem_parcel *parcel, u64 gfn, + u64 nr) +{ + struct gunyah_rm_mem_entry *entry; + unsigned long i, entry_size, tag = 0; + struct folio *folio; + pgoff_t off = 0; + int ret; + + if (parcel->n_acl_entries > 1) + tag |= SHARE_TAG; + if (parcel->acl_entries[0].perms & GUNYAH_RM_ACL_W) + tag |= WRITE_TAG; + + for (i = 0; i < parcel->n_mem_entries; i++) { + entry = &parcel->mem_entries[i]; + entry_size = PHYS_PFN(le64_to_cpu(entry->size)); + + folio = pfn_folio(PHYS_PFN(le64_to_cpu(entry->phys_addr))); + ret = mtree_insert_range(&ghvm->mm, gfn + off, gfn + off + folio_nr_pages(folio) - 1, xa_tag_pointer(folio, tag), GFP_KERNEL); + if (ret == -ENOMEM) + return ret; + BUG_ON(ret); + off += folio_nr_pages(folio); + } + + BUG_ON(off != nr); + + return 0; +} + +static inline u32 donate_flags(bool share) +{ + if (share) + return FIELD_PREP_CONST(GUNYAH_MEMEXTENT_OPTION_TYPE_MASK, + GUNYAH_MEMEXTENT_DONATE_TO_SIBLING); + else + return FIELD_PREP_CONST(GUNYAH_MEMEXTENT_OPTION_TYPE_MASK, + GUNYAH_MEMEXTENT_DONATE_TO_PROTECTED); +} + +static inline u32 reclaim_flags(bool share) +{ + if (share) + return FIELD_PREP_CONST(GUNYAH_MEMEXTENT_OPTION_TYPE_MASK, + GUNYAH_MEMEXTENT_DONATE_TO_SIBLING); + else + return FIELD_PREP_CONST(GUNYAH_MEMEXTENT_OPTION_TYPE_MASK, + GUNYAH_MEMEXTENT_DONATE_FROM_PROTECTED); +} + +int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, + u64 gfn, bool share, bool write) +{ + struct gunyah_resource *guest_extent, *host_extent, *addrspace; + u32 map_flags = BIT(GUNYAH_ADDRSPACE_MAP_FLAG_PARTIAL); + u64 extent_attrs, gpa = gunyah_gfn_to_gpa(gfn); + phys_addr_t pa = PFN_PHYS(folio_pfn(folio)); + enum gunyah_pagetable_access access; + size_t size = folio_size(folio); + enum gunyah_error gunyah_error; + unsigned long tag = 0; + int 
ret; + + /* clang-format off */ + if (share) { + guest_extent = __first_resource(&ghvm->guest_shared_extent_ticket); + host_extent = __first_resource(&ghvm->host_shared_extent_ticket); + } else { + guest_extent = __first_resource(&ghvm->guest_private_extent_ticket); + host_extent = __first_resource(&ghvm->host_private_extent_ticket); + } + /* clang-format on */ + addrspace = __first_resource(&ghvm->addrspace_ticket); + + if (!addrspace || !guest_extent || !host_extent) + return -ENODEV; + + if (share) { + map_flags |= BIT(GUNYAH_ADDRSPACE_MAP_FLAG_VMMIO); + tag |= SHARE_TAG; + } else { + map_flags |= BIT(GUNYAH_ADDRSPACE_MAP_FLAG_PRIVATE); + } + + if (write) + tag |= WRITE_TAG; + + ret = mtree_insert_range(&ghvm->mm, gfn, + gfn + folio_nr_pages(folio) - 1, + xa_tag_pointer(folio, tag), GFP_KERNEL); + if (ret == -EEXIST) + return -EAGAIN; + if (ret) + return ret; + + if (share && write) + access = GUNYAH_PAGETABLE_ACCESS_RW; + else if (share && !write) + access = GUNYAH_PAGETABLE_ACCESS_R; + else if (!share && write) + access = GUNYAH_PAGETABLE_ACCESS_RWX; + else /* !share && !write */ + access = GUNYAH_PAGETABLE_ACCESS_RX; + + gunyah_error = gunyah_hypercall_memextent_donate(donate_flags(share), + host_extent->capid, + guest_extent->capid, + pa, size); + if (gunyah_error != GUNYAH_ERROR_OK) { + pr_err("Failed to donate memory for guest address 0x%016llx: %d\n", + gpa, gunyah_error); + ret = gunyah_error_remap(gunyah_error); + goto remove; + } + + extent_attrs = + FIELD_PREP_CONST(GUNYAH_MEMEXTENT_MAPPING_TYPE, + ARCH_GUNYAH_DEFAULT_MEMTYPE) | + FIELD_PREP(GUNYAH_MEMEXTENT_MAPPING_USER_ACCESS, access) | + FIELD_PREP(GUNYAH_MEMEXTENT_MAPPING_KERNEL_ACCESS, access); + gunyah_error = gunyah_hypercall_addrspace_map(addrspace->capid, + guest_extent->capid, gpa, + extent_attrs, map_flags, + pa, size); + if (gunyah_error != GUNYAH_ERROR_OK) { + pr_err("Failed to map guest address 0x%016llx: %d\n", gpa, + gunyah_error); + ret = gunyah_error_remap(gunyah_error); + goto memextent_reclaim; + } + + folio_get(folio); + if (!share) + folio_set_private(folio); + return 0; +memextent_reclaim: + gunyah_error = gunyah_hypercall_memextent_donate(reclaim_flags(share), + guest_extent->capid, + host_extent->capid, pa, + size); + if (gunyah_error != GUNYAH_ERROR_OK) + pr_err("Failed to reclaim memory donation for guest address 0x%016llx: %d\n", + gpa, gunyah_error); +remove: + mtree_erase(&ghvm->mm, gfn); + return ret; +} + +static int __gunyah_vm_reclaim_folio_locked(struct gunyah_vm *ghvm, void *entry, + u64 gfn, const bool sync) +{ + u32 map_flags = BIT(GUNYAH_ADDRSPACE_MAP_FLAG_PARTIAL); + struct gunyah_resource *guest_extent, *host_extent, *addrspace; + enum gunyah_pagetable_access access; + enum gunyah_error gunyah_error; + struct folio *folio; + bool write, share; + phys_addr_t pa; + size_t size; + int ret; + + addrspace = __first_resource(&ghvm->addrspace_ticket); + if (!addrspace) + return -ENODEV; + + share = !!(xa_pointer_tag(entry) & SHARE_TAG); + write = !!(xa_pointer_tag(entry) & WRITE_TAG); + folio = xa_untag_pointer(entry); + + if (!sync) + map_flags |= BIT(GUNYAH_ADDRSPACE_MAP_FLAG_NOSYNC); + + /* clang-format off */ + if (share) { + guest_extent = __first_resource(&ghvm->guest_shared_extent_ticket); + host_extent = __first_resource(&ghvm->host_shared_extent_ticket); + map_flags |= BIT(GUNYAH_ADDRSPACE_MAP_FLAG_VMMIO); + } else { + guest_extent = __first_resource(&ghvm->guest_private_extent_ticket); + host_extent = __first_resource(&ghvm->host_private_extent_ticket); + map_flags |= 
BIT(GUNYAH_ADDRSPACE_MAP_FLAG_PRIVATE); + } + /* clang-format on */ + + pa = PFN_PHYS(folio_pfn(folio)); + size = folio_size(folio); + + gunyah_error = gunyah_hypercall_addrspace_unmap(addrspace->capid, + guest_extent->capid, + gunyah_gfn_to_gpa(gfn), + map_flags, pa, size); + if (gunyah_error != GUNYAH_ERROR_OK) { + pr_err_ratelimited( + "Failed to unmap guest address 0x%016llx: %d\n", + gunyah_gfn_to_gpa(gfn), gunyah_error); + ret = gunyah_error_remap(gunyah_error); + goto err; + } + + gunyah_error = gunyah_hypercall_memextent_donate(reclaim_flags(share), + guest_extent->capid, + host_extent->capid, pa, + size); + if (gunyah_error != GUNYAH_ERROR_OK) { + pr_err_ratelimited( + "Failed to reclaim memory donation for guest address 0x%016llx: %d\n", + gunyah_gfn_to_gpa(gfn), gunyah_error); + ret = gunyah_error_remap(gunyah_error); + goto err; + } + + if (share && write) + access = GUNYAH_PAGETABLE_ACCESS_RW; + else if (share && !write) + access = GUNYAH_PAGETABLE_ACCESS_R; + else if (!share && write) + access = GUNYAH_PAGETABLE_ACCESS_RWX; + else /* !share && !write */ + access = GUNYAH_PAGETABLE_ACCESS_RX; + + gunyah_error = gunyah_hypercall_memextent_donate(donate_flags(share), + guest_extent->capid, + host_extent->capid, pa, + size); + if (gunyah_error != GUNYAH_ERROR_OK) { + pr_err("Failed to reclaim memory donation for guest address 0x%016llx: %d\n", + gfn << PAGE_SHIFT, gunyah_error); + ret = gunyah_error_remap(gunyah_error); + goto err; + } + + BUG_ON(mtree_erase(&ghvm->mm, gfn) != entry); + + folio_clear_private(folio); + folio_put(folio); + return 0; +err: + return ret; +} + +int gunyah_vm_reclaim_folio(struct gunyah_vm *ghvm, u64 gfn) +{ + struct folio *folio; + void *entry; + + entry = mtree_load(&ghvm->mm, gfn); + if (!entry) + return 0; + + folio = xa_untag_pointer(entry); + if (mtree_load(&ghvm->mm, gfn) != entry) + return -EAGAIN; + + return __gunyah_vm_reclaim_folio_locked(ghvm, entry, gfn, true); +} + +int gunyah_vm_reclaim_range(struct gunyah_vm *ghvm, u64 gfn, u64 nr) +{ + unsigned long next = gfn, g; + struct folio *folio; + int ret, ret2 = 0; + void *entry; + bool sync; + + mt_for_each(&ghvm->mm, entry, next, gfn + nr) { + folio = xa_untag_pointer(entry); + g = next; + sync = !!mt_find_after(&ghvm->mm, &g, gfn + nr); + + g = next - folio_nr_pages(folio); + folio_get(folio); + folio_lock(folio); + if (mtree_load(&ghvm->mm, g) == entry) + ret = __gunyah_vm_reclaim_folio_locked(ghvm, entry, g, sync); + else + ret = -EAGAIN; + folio_unlock(folio); + folio_put(folio); + if (ret && ret2 != -EAGAIN) + ret2 = ret; + } + + return ret2; +} From patchwork Tue Jan 9 19:37:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761090 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C1DFA3F8F7; Tue, 9 Jan 2024 19:38:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="oD3seSC8" Received: from pps.filterd (m0279870.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409JVgiM000985; Tue, 
9 Jan 2024 19:38:03 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=9X97Z/Sw4iF1xfdQ6zXP7tMdWV0+M1tv2BkbjOoyCL8 =; b=oD3seSC88GSY09RBXXDfdO/S1XRWcAfJVppANpFpyYsRUtAjIM8F7yWT8KY US+gIixz6fLJ7c1G3JNlZJq8Cjidaxj8qarOqbSm2sCvkoeZxubRH7KXWOI/8DSg mWkLPV9/Iwe6lKKRTuU6VeWOf2VKyHg6M03d8c961/f1r4pxGtTQelEqXSVTO1+1 cNzPnKtkB36et2dnV14TwwDsbi4/YDhRXeszMROvdFJgjrRum7i9VpGYasjivj9n 6htyi72Npv0/UUoaV7hzDa9sRY+A2tCAALm5QnKZ+lt8Ltn9xwCzhPFuiw5gvl50 PejHQOT5BRoNkYfSlzORJlZ8H7w== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9q70eg9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:03 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc1ZX024598 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:01 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:01 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:57 -0800 Subject: [PATCH v16 19/34] gunyah: rsc_mgr: Add platform ops on mem_lend/mem_reclaim Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-19-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: BeJma36D3-TzHNyW40rWlmdLSG7o1asP X-Proofpoint-GUID: BeJma36D3-TzHNyW40rWlmdLSG7o1asP X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 adultscore=0 clxscore=1015 impostorscore=0 suspectscore=0 bulkscore=0 mlxlogscore=999 malwarescore=0 lowpriorityscore=0 priorityscore=1501 mlxscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 On Qualcomm platforms, there is a firmware entity which controls access to physical pages. In order to share memory with another VM, this entity needs to be informed that the guest VM should have access to the memory. 
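For context, a platform module participates in these paths by filling in a struct gunyah_rm_platform_ops and registering it. The sketch below is illustrative only and not part of this patch: the example_ names, the empty callback bodies, and the platform-driver probe are assumptions made for the example; the real Qualcomm implementation follows in the next patch.

#include <linux/gunyah.h>
#include <linux/platform_device.h>

/* Runs before a memory parcel is shared with a guest VM. */
static int example_pre_mem_share(struct gunyah_rm *rm,
                                 struct gunyah_rm_mem_parcel *mem_parcel)
{
        /* Tell the platform's access-control firmware about the parcel. */
        return 0;
}

/* Runs after a memory parcel has been reclaimed from a guest VM. */
static int example_post_mem_reclaim(struct gunyah_rm *rm,
                                    struct gunyah_rm_mem_parcel *mem_parcel)
{
        /* Restore host-only access to the parcel's memory. */
        return 0;
}

static const struct gunyah_rm_platform_ops example_platform_ops = {
        .pre_mem_share = example_pre_mem_share,
        .post_mem_reclaim = example_post_mem_reclaim,
};

static int example_probe(struct platform_device *pdev)
{
        /* The devm_ variant unregisters the ops when the device goes away. */
        return devm_gunyah_rm_register_platform_ops(&pdev->dev,
                                                    &example_platform_ops);
}

If no platform ops are registered, the hook wrappers return 0, so memory sharing and reclaim proceed without any platform-specific call.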
Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Kconfig | 4 + drivers/virt/gunyah/Makefile | 1 + drivers/virt/gunyah/gunyah_platform_hooks.c | 115 ++++++++++++++++++++++++++++ drivers/virt/gunyah/rsc_mgr.h | 10 +++ drivers/virt/gunyah/rsc_mgr_rpc.c | 20 ++++- drivers/virt/gunyah/vm_mgr_mem.c | 32 +++++--- include/linux/gunyah.h | 37 +++++++++ 7 files changed, 206 insertions(+), 13 deletions(-) diff --git a/drivers/virt/gunyah/Kconfig b/drivers/virt/gunyah/Kconfig index 6f4c85db80b5..23ba523d25dc 100644 --- a/drivers/virt/gunyah/Kconfig +++ b/drivers/virt/gunyah/Kconfig @@ -3,6 +3,7 @@ config GUNYAH tristate "Gunyah Virtualization drivers" depends on ARM64 + select GUNYAH_PLATFORM_HOOKS help The Gunyah drivers are the helper interfaces that run in a guest VM such as basic inter-VM IPC and signaling mechanisms, and higher level @@ -10,3 +11,6 @@ config GUNYAH Say Y/M here to enable the drivers needed to interact in a Gunyah virtual environment. + +config GUNYAH_PLATFORM_HOOKS + tristate diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index f3c9507224ee..ffcde0e0ccfa 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -3,3 +3,4 @@ gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o +obj-$(CONFIG_GUNYAH_PLATFORM_HOOKS) += gunyah_platform_hooks.o diff --git a/drivers/virt/gunyah/gunyah_platform_hooks.c b/drivers/virt/gunyah/gunyah_platform_hooks.c new file mode 100644 index 000000000000..a1f93321e5ba --- /dev/null +++ b/drivers/virt/gunyah/gunyah_platform_hooks.c @@ -0,0 +1,115 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include + +#include "rsc_mgr.h" + +static const struct gunyah_rm_platform_ops *rm_platform_ops; +static DECLARE_RWSEM(rm_platform_ops_lock); + +int gunyah_rm_platform_pre_mem_share(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel) +{ + int ret = 0; + + down_read(&rm_platform_ops_lock); + if (rm_platform_ops && rm_platform_ops->pre_mem_share) + ret = rm_platform_ops->pre_mem_share(rm, mem_parcel); + up_read(&rm_platform_ops_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_platform_pre_mem_share); + +int gunyah_rm_platform_post_mem_reclaim(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel) +{ + int ret = 0; + + down_read(&rm_platform_ops_lock); + if (rm_platform_ops && rm_platform_ops->post_mem_reclaim) + ret = rm_platform_ops->post_mem_reclaim(rm, mem_parcel); + up_read(&rm_platform_ops_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_platform_post_mem_reclaim); + +int gunyah_rm_platform_pre_demand_page(struct gunyah_rm *rm, u16 vmid, + u32 flags, struct folio *folio) +{ + int ret = 0; + + down_read(&rm_platform_ops_lock); + if (rm_platform_ops && rm_platform_ops->pre_demand_page) + ret = rm_platform_ops->pre_demand_page(rm, vmid, flags, folio); + up_read(&rm_platform_ops_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_platform_pre_demand_page); + +int gunyah_rm_platform_reclaim_demand_page(struct gunyah_rm *rm, u16 vmid, + u32 flags, struct folio *folio) +{ + int ret = 0; + + down_read(&rm_platform_ops_lock); + if (rm_platform_ops && rm_platform_ops->pre_demand_page) + ret = rm_platform_ops->release_demand_page(rm, vmid, flags, + folio); + up_read(&rm_platform_ops_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_platform_reclaim_demand_page); + +int gunyah_rm_register_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops) +{ + int ret = 0; + + down_write(&rm_platform_ops_lock); + if (!rm_platform_ops) + rm_platform_ops = platform_ops; + else + ret = -EEXIST; + up_write(&rm_platform_ops_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_rm_register_platform_ops); + +void gunyah_rm_unregister_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops) +{ + down_write(&rm_platform_ops_lock); + if (rm_platform_ops == platform_ops) + rm_platform_ops = NULL; + up_write(&rm_platform_ops_lock); +} +EXPORT_SYMBOL_GPL(gunyah_rm_unregister_platform_ops); + +static void _devm_gunyah_rm_unregister_platform_ops(void *data) +{ + gunyah_rm_unregister_platform_ops( + (const struct gunyah_rm_platform_ops *)data); +} + +int devm_gunyah_rm_register_platform_ops( + struct device *dev, const struct gunyah_rm_platform_ops *ops) +{ + int ret; + + ret = gunyah_rm_register_platform_ops(ops); + if (ret) + return ret; + + return devm_add_action(dev, _devm_gunyah_rm_unregister_platform_ops, + (void *)ops); +} +EXPORT_SYMBOL_GPL(devm_gunyah_rm_register_platform_ops); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Gunyah Platform Hooks"); diff --git a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h index ec8ad8149e8e..68d08d3cff02 100644 --- a/drivers/virt/gunyah/rsc_mgr.h +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -117,4 +117,14 @@ int gunyah_rm_call(struct gunyah_rm *rsc_mgr, u32 message_id, const void *req_buf, size_t req_buf_size, void **resp_buf, size_t *resp_buf_size); +int gunyah_rm_platform_pre_mem_share(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel); +int gunyah_rm_platform_post_mem_reclaim( + struct gunyah_rm *rm, struct gunyah_rm_mem_parcel *mem_parcel); + +int 
gunyah_rm_platform_pre_demand_page(struct gunyah_rm *rm, u16 vmid, + u32 flags, struct folio *folio); +int gunyah_rm_platform_reclaim_demand_page(struct gunyah_rm *rm, u16 vmid, + u32 flags, struct folio *folio); + #endif diff --git a/drivers/virt/gunyah/rsc_mgr_rpc.c b/drivers/virt/gunyah/rsc_mgr_rpc.c index bc44bde990ce..0d78613827b5 100644 --- a/drivers/virt/gunyah/rsc_mgr_rpc.c +++ b/drivers/virt/gunyah/rsc_mgr_rpc.c @@ -206,6 +206,12 @@ int gunyah_rm_mem_share(struct gunyah_rm *rm, struct gunyah_rm_mem_parcel *p) if (!msg) return -ENOMEM; + ret = gunyah_rm_platform_pre_mem_share(rm, p); + if (ret) { + kfree(msg); + return ret; + } + req_header = msg; acl = (void *)req_header + sizeof(*req_header); mem = (void *)acl + acl_size; @@ -231,8 +237,10 @@ int gunyah_rm_mem_share(struct gunyah_rm *rm, struct gunyah_rm_mem_parcel *p) &resp_size); kfree(msg); - if (ret) + if (ret) { + gunyah_rm_platform_post_mem_reclaim(rm, p); return ret; + } p->mem_handle = le32_to_cpu(*resp); kfree(resp); @@ -263,9 +271,15 @@ int gunyah_rm_mem_reclaim(struct gunyah_rm *rm, struct gunyah_rm_mem_release_req req = { .mem_handle = cpu_to_le32(parcel->mem_handle), }; + int ret; - return gunyah_rm_call(rm, GUNYAH_RM_RPC_MEM_RECLAIM, &req, sizeof(req), - NULL, NULL); + ret = gunyah_rm_call(rm, GUNYAH_RM_RPC_MEM_RECLAIM, &req, sizeof(req), + NULL, NULL); + /* Only call platform mem reclaim hooks if we reclaimed the memory */ + if (ret) + return ret; + + return gunyah_rm_platform_post_mem_reclaim(rm, parcel); } /** diff --git a/drivers/virt/gunyah/vm_mgr_mem.c b/drivers/virt/gunyah/vm_mgr_mem.c index d3fcb4514907..15610a8c6f82 100644 --- a/drivers/virt/gunyah/vm_mgr_mem.c +++ b/drivers/virt/gunyah/vm_mgr_mem.c @@ -9,6 +9,7 @@ #include #include +#include "rsc_mgr.h" #include "vm_mgr.h" #define WRITE_TAG (1 << 0) @@ -84,7 +85,7 @@ int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, size_t size = folio_size(folio); enum gunyah_error gunyah_error; unsigned long tag = 0; - int ret; + int ret, tmp; /* clang-format off */ if (share) { @@ -127,6 +128,11 @@ int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, else /* !share && !write */ access = GUNYAH_PAGETABLE_ACCESS_RX; + ret = gunyah_rm_platform_pre_demand_page(ghvm->rm, ghvm->vmid, access, + folio); + if (ret) + goto remove; + gunyah_error = gunyah_hypercall_memextent_donate(donate_flags(share), host_extent->capid, guest_extent->capid, @@ -135,7 +141,7 @@ int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, pr_err("Failed to donate memory for guest address 0x%016llx: %d\n", gpa, gunyah_error); ret = gunyah_error_remap(gunyah_error); - goto remove; + goto platform_release; } extent_attrs = @@ -166,6 +172,14 @@ int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, if (gunyah_error != GUNYAH_ERROR_OK) pr_err("Failed to reclaim memory donation for guest address 0x%016llx: %d\n", gpa, gunyah_error); +platform_release: + tmp = gunyah_rm_platform_reclaim_demand_page(ghvm->rm, ghvm->vmid, + access, folio); + if (tmp) { + pr_err("Platform failed to reclaim memory for guest address 0x%016llx: %d", + gpa, tmp); + return ret; + } remove: mtree_erase(&ghvm->mm, gfn); return ret; @@ -243,14 +257,12 @@ static int __gunyah_vm_reclaim_folio_locked(struct gunyah_vm *ghvm, void *entry, else /* !share && !write */ access = GUNYAH_PAGETABLE_ACCESS_RX; - gunyah_error = gunyah_hypercall_memextent_donate(donate_flags(share), - guest_extent->capid, - host_extent->capid, pa, - size); - if (gunyah_error != 
GUNYAH_ERROR_OK) { - pr_err("Failed to reclaim memory donation for guest address 0x%016llx: %d\n", - gfn << PAGE_SHIFT, gunyah_error); - ret = gunyah_error_remap(gunyah_error); + ret = gunyah_rm_platform_reclaim_demand_page(ghvm->rm, ghvm->vmid, + access, folio); + if (ret) { + pr_err_ratelimited( + "Platform failed to reclaim memory for guest address 0x%016llx: %d", + gunyah_gfn_to_gpa(gfn), ret); goto err; } diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 9065f5758c39..32ce578220ca 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -199,6 +199,43 @@ struct gunyah_rm_mem_parcel { u32 mem_handle; }; +struct gunyah_rm_platform_ops { + int (*pre_mem_share)(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel); + int (*post_mem_reclaim)(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel); + + int (*pre_demand_page)(struct gunyah_rm *rm, u16 vmid, u32 flags, + struct folio *folio); + int (*release_demand_page)(struct gunyah_rm *rm, u16 vmid, u32 flags, + struct folio *folio); +}; + +#if IS_ENABLED(CONFIG_GUNYAH_PLATFORM_HOOKS) +int gunyah_rm_register_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops); +void gunyah_rm_unregister_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops); +int devm_gunyah_rm_register_platform_ops( + struct device *dev, const struct gunyah_rm_platform_ops *ops); +#else +static inline int gunyah_rm_register_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops) +{ + return 0; +} +static inline void gunyah_rm_unregister_platform_ops( + const struct gunyah_rm_platform_ops *platform_ops) +{ +} +static inline int +devm_gunyah_rm_register_platform_ops(struct device *dev, + const struct gunyah_rm_platform_ops *ops) +{ + return 0; +} +#endif + /******************************************************************************/ /* Common arch-independent definitions for Gunyah hypercalls */ #define GUNYAH_CAPID_INVAL U64_MAX From patchwork Tue Jan 9 19:37:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761567 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0863B405C1; Tue, 9 Jan 2024 19:38:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="PxE6TmPQ" Received: from pps.filterd (m0279863.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409JTQDh022746; Tue, 9 Jan 2024 19:38:03 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=zJ0o/W/Uakm8GvtvCA3LlQsDnIwUJ0OXBqD9DQx3RpM =; b=PxE6TmPQ0ZAqV3lZMoeY5MrkWvwKQDXmMiDE9xAFpfxLEn7kbWk7B6Ew+yw 77xrhnMmgTisrmlB3pMESUVi1aECfBuEKOSgKeIvEfgNDf2FTvqd09SZ+2Os0qxJ qkjWU1fZMwCk0WSaxZ14yUG5+u9Z18pQfjPmWJiYGnXhgC2++c9stcPrcsQjDveI MWo9tKifnpa4H8jDD+PE9OohPQO+F9TaiwSq3sm/wd26HEvdxXX4B6aovZ+tfk4+ 2Ce0Brp7EVAS9LYX3XQCH4oyhtrDEEplhZTZwh55i8h61y1waJk5o4KDR2km8SQY W71GBhoaJ8AeIarZCFNVKLH9UKw== 
Received: from nasanppmta03.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9bmggd8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:03 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc2uq011503 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:02 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:01 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:58 -0800 Subject: [PATCH v16 20/34] virt: gunyah: Add Qualcomm Gunyah platform ops Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-20-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: eZmiIfHcIKuoUH2Q9aP1F4ODLB-2XTl2 X-Proofpoint-GUID: eZmiIfHcIKuoUH2Q9aP1F4ODLB-2XTl2 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 mlxscore=0 clxscore=1015 spamscore=0 priorityscore=1501 malwarescore=0 mlxlogscore=999 adultscore=0 bulkscore=0 suspectscore=0 lowpriorityscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Qualcomm platforms have a firmware entity which performs access control to physical pages. Dynamically started Gunyah virtual machines use the QCOM_SCM_RM_MANAGED_VMID for access. Linux thus needs to assign access to the memory used by guest VMs. Gunyah doesn't do this operation for us since it is the current VM (typically VMID_HLOS) delegating the access and not Gunyah itself. Use the Gunyah platform ops to achieve this so that only Qualcomm platforms attempt to make the needed SCM calls. 
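The building block used throughout this patch is qcom_scm_assign_mem(), which reassigns ownership of a physical range from a source VMID set to a destination permission list. As a rough standalone illustration (not part of this patch; the example_ helper is made up and the RM-managed VMID constant is redefined locally with the same value the patch uses), sharing one page read/write between HLOS and the Gunyah-managed VMID looks like this:

#include <linux/bits.h>
#include <linux/mm.h>
#include <linux/firmware/qcom/qcom_scm.h>

#define EXAMPLE_RM_MANAGED_VMID 0x3A    /* QCOM_SCM_RM_MANAGED_VMID below */

/*
 * Illustrative sketch: move one page from HLOS-only ownership to
 * "HLOS + RM-managed VM", read/write for both.
 */
static int example_share_page_with_guest(phys_addr_t addr)
{
        struct qcom_scm_vmperm dst[] = {
                {
                        .vmid = QCOM_SCM_VMID_HLOS,
                        .perm = QCOM_SCM_PERM_READ | QCOM_SCM_PERM_WRITE,
                },
                {
                        .vmid = EXAMPLE_RM_MANAGED_VMID,
                        .perm = QCOM_SCM_PERM_READ | QCOM_SCM_PERM_WRITE,
                },
        };
        u64 src = BIT_ULL(QCOM_SCM_VMID_HLOS);

        /* On success, src is updated to describe the new owner set. */
        return qcom_scm_assign_mem(addr, PAGE_SIZE, &src, dst,
                                   ARRAY_SIZE(dst));
}

Reclaiming goes the other way: the source set lists the guest-side VMIDs and the single destination entry is HLOS with read/write/exec permissions, which is what qcom_scm_gunyah_rm_post_mem_reclaim() in this patch does.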
Reviewed-by: Alex Elder Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Kconfig | 13 +++ drivers/virt/gunyah/Makefile | 1 + drivers/virt/gunyah/gunyah_qcom.c | 218 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 232 insertions(+) diff --git a/drivers/virt/gunyah/Kconfig b/drivers/virt/gunyah/Kconfig index 23ba523d25dc..fe2823dc48ba 100644 --- a/drivers/virt/gunyah/Kconfig +++ b/drivers/virt/gunyah/Kconfig @@ -4,6 +4,7 @@ config GUNYAH tristate "Gunyah Virtualization drivers" depends on ARM64 select GUNYAH_PLATFORM_HOOKS + imply GUNYAH_QCOM_PLATFORM if ARCH_QCOM help The Gunyah drivers are the helper interfaces that run in a guest VM such as basic inter-VM IPC and signaling mechanisms, and higher level @@ -14,3 +15,15 @@ config GUNYAH config GUNYAH_PLATFORM_HOOKS tristate + +config GUNYAH_QCOM_PLATFORM + tristate "Support for Gunyah on Qualcomm platforms" + depends on GUNYAH + select GUNYAH_PLATFORM_HOOKS + select QCOM_SCM + help + Enable support for interacting with Gunyah on Qualcomm + platforms. Interaction with Qualcomm firmware requires + extra platform-specific support. + + Say Y/M here to use Gunyah on Qualcomm platforms. diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index ffcde0e0ccfa..a6c6f29b887a 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -4,3 +4,4 @@ gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o obj-$(CONFIG_GUNYAH_PLATFORM_HOOKS) += gunyah_platform_hooks.o +obj-$(CONFIG_GUNYAH_QCOM_PLATFORM) += gunyah_qcom.o diff --git a/drivers/virt/gunyah/gunyah_qcom.c b/drivers/virt/gunyah/gunyah_qcom.c new file mode 100644 index 000000000000..2381d75482ca --- /dev/null +++ b/drivers/virt/gunyah/gunyah_qcom.c @@ -0,0 +1,218 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include + +#define QCOM_SCM_RM_MANAGED_VMID 0x3A +#define QCOM_SCM_MAX_MANAGED_VMID 0x3F + +static int +qcom_scm_gunyah_rm_pre_mem_share(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel) +{ + struct qcom_scm_vmperm *new_perms __free(kfree) = NULL; + u64 src, src_cpy; + int ret = 0, i, n; + u16 vmid; + + new_perms = kcalloc(mem_parcel->n_acl_entries, sizeof(*new_perms), + GFP_KERNEL); + if (!new_perms) + return -ENOMEM; + + for (n = 0; n < mem_parcel->n_acl_entries; n++) { + vmid = le16_to_cpu(mem_parcel->acl_entries[n].vmid); + if (vmid <= QCOM_SCM_MAX_MANAGED_VMID) + new_perms[n].vmid = vmid; + else + new_perms[n].vmid = QCOM_SCM_RM_MANAGED_VMID; + if (mem_parcel->acl_entries[n].perms & GUNYAH_RM_ACL_X) + new_perms[n].perm |= QCOM_SCM_PERM_EXEC; + if (mem_parcel->acl_entries[n].perms & GUNYAH_RM_ACL_W) + new_perms[n].perm |= QCOM_SCM_PERM_WRITE; + if (mem_parcel->acl_entries[n].perms & GUNYAH_RM_ACL_R) + new_perms[n].perm |= QCOM_SCM_PERM_READ; + } + + src = BIT_ULL(QCOM_SCM_VMID_HLOS); + + for (i = 0; i < mem_parcel->n_mem_entries; i++) { + src_cpy = src; + ret = qcom_scm_assign_mem( + le64_to_cpu(mem_parcel->mem_entries[i].phys_addr), + le64_to_cpu(mem_parcel->mem_entries[i].size), &src_cpy, + new_perms, mem_parcel->n_acl_entries); + if (ret) + break; + } + + /* Did it work ok? 
*/ + if (!ret) + return 0; + + src = 0; + for (n = 0; n < mem_parcel->n_acl_entries; n++) { + vmid = le16_to_cpu(mem_parcel->acl_entries[n].vmid); + if (vmid <= QCOM_SCM_MAX_MANAGED_VMID) + src |= BIT_ULL(vmid); + else + src |= BIT_ULL(QCOM_SCM_RM_MANAGED_VMID); + } + + new_perms[0].vmid = QCOM_SCM_VMID_HLOS; + + for (i--; i >= 0; i--) { + src_cpy = src; + WARN_ON_ONCE(qcom_scm_assign_mem( + le64_to_cpu(mem_parcel->mem_entries[i].phys_addr), + le64_to_cpu(mem_parcel->mem_entries[i].size), &src_cpy, + new_perms, 1)); + } + + return ret; +} + +static int +qcom_scm_gunyah_rm_post_mem_reclaim(struct gunyah_rm *rm, + struct gunyah_rm_mem_parcel *mem_parcel) +{ + struct qcom_scm_vmperm new_perms; + u64 src = 0, src_cpy; + int ret = 0, i, n; + u16 vmid; + + new_perms.vmid = QCOM_SCM_VMID_HLOS; + new_perms.perm = QCOM_SCM_PERM_EXEC | QCOM_SCM_PERM_WRITE | + QCOM_SCM_PERM_READ; + + for (n = 0; n < mem_parcel->n_acl_entries; n++) { + vmid = le16_to_cpu(mem_parcel->acl_entries[n].vmid); + if (vmid <= QCOM_SCM_MAX_MANAGED_VMID) + src |= (1ull << vmid); + else + src |= (1ull << QCOM_SCM_RM_MANAGED_VMID); + } + + for (i = 0; i < mem_parcel->n_mem_entries; i++) { + src_cpy = src; + ret = qcom_scm_assign_mem( + le64_to_cpu(mem_parcel->mem_entries[i].phys_addr), + le64_to_cpu(mem_parcel->mem_entries[i].size), &src_cpy, + &new_perms, 1); + WARN_ON_ONCE(ret); + } + + return ret; +} + +static int +qcom_scm_gunyah_rm_pre_demand_page(struct gunyah_rm *rm, u16 vmid, + enum gunyah_pagetable_access access, + struct folio *folio) +{ + struct qcom_scm_vmperm new_perms[2]; + unsigned int n = 1; + u64 src; + + new_perms[0].vmid = QCOM_SCM_RM_MANAGED_VMID; + new_perms[0].perm = QCOM_SCM_PERM_EXEC | QCOM_SCM_PERM_WRITE | + QCOM_SCM_PERM_READ; + if (access != GUNYAH_PAGETABLE_ACCESS_X && + access != GUNYAH_PAGETABLE_ACCESS_RX && + access != GUNYAH_PAGETABLE_ACCESS_RWX) { + new_perms[1].vmid = QCOM_SCM_VMID_HLOS; + new_perms[1].perm = QCOM_SCM_PERM_EXEC | QCOM_SCM_PERM_WRITE | + QCOM_SCM_PERM_READ; + n++; + } + + src = BIT_ULL(QCOM_SCM_VMID_HLOS); + + return qcom_scm_assign_mem(__pfn_to_phys(folio_pfn(folio)), + folio_size(folio), &src, new_perms, n); +} + +static int +qcom_scm_gunyah_rm_release_demand_page(struct gunyah_rm *rm, u16 vmid, + enum gunyah_pagetable_access access, + struct folio *folio) +{ + struct qcom_scm_vmperm new_perms; + u64 src; + + new_perms.vmid = QCOM_SCM_VMID_HLOS; + new_perms.perm = QCOM_SCM_PERM_EXEC | QCOM_SCM_PERM_WRITE | + QCOM_SCM_PERM_READ; + + src = BIT_ULL(QCOM_SCM_RM_MANAGED_VMID); + + if (access != GUNYAH_PAGETABLE_ACCESS_X && + access != GUNYAH_PAGETABLE_ACCESS_RX && + access != GUNYAH_PAGETABLE_ACCESS_RWX) + src |= BIT_ULL(QCOM_SCM_VMID_HLOS); + + return qcom_scm_assign_mem(__pfn_to_phys(folio_pfn(folio)), + folio_size(folio), &src, &new_perms, 1); +} + +static struct gunyah_rm_platform_ops qcom_scm_gunyah_rm_platform_ops = { + .pre_mem_share = qcom_scm_gunyah_rm_pre_mem_share, + .post_mem_reclaim = qcom_scm_gunyah_rm_post_mem_reclaim, + .pre_demand_page = qcom_scm_gunyah_rm_pre_demand_page, + .release_demand_page = qcom_scm_gunyah_rm_release_demand_page, +}; + +/* {19bd54bd-0b37-571b-946f-609b54539de6} */ +static const uuid_t QCOM_EXT_UUID = UUID_INIT(0x19bd54bd, 0x0b37, 0x571b, 0x94, + 0x6f, 0x60, 0x9b, 0x54, 0x53, + 0x9d, 0xe6); + +#define GUNYAH_QCOM_EXT_CALL_UUID_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_32, \ + ARM_SMCCC_OWNER_VENDOR_HYP, 0x3f01) + +static bool gunyah_has_qcom_extensions(void) +{ + struct arm_smccc_res res; + uuid_t uuid; + u32 *up; + + 
arm_smccc_1_1_smc(GUNYAH_QCOM_EXT_CALL_UUID_ID, &res); + + up = (u32 *)&uuid.b[0]; + up[0] = lower_32_bits(res.a0); + up[1] = lower_32_bits(res.a1); + up[2] = lower_32_bits(res.a2); + up[3] = lower_32_bits(res.a3); + + return uuid_equal(&uuid, &QCOM_EXT_UUID); +} + +static int __init qcom_gunyah_platform_hooks_register(void) +{ + if (!gunyah_has_qcom_extensions()) + return -ENODEV; + + pr_info("Enabling Gunyah hooks for Qualcomm platforms.\n"); + + return gunyah_rm_register_platform_ops( + &qcom_scm_gunyah_rm_platform_ops); +} + +static void __exit qcom_gunyah_platform_hooks_unregister(void) +{ + gunyah_rm_unregister_platform_ops(&qcom_scm_gunyah_rm_platform_ops); +} + +module_init(qcom_gunyah_platform_hooks_register); +module_exit(qcom_gunyah_platform_hooks_unregister); +MODULE_DESCRIPTION("Qualcomm Technologies, Inc. Platform Hooks for Gunyah"); +MODULE_LICENSE("GPL"); From patchwork Tue Jan 9 19:37:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761088 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CFDF940C0B; Tue, 9 Jan 2024 19:38:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="Lo5n72Mz" Received: from pps.filterd (m0279871.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409JEnrB008597; Tue, 9 Jan 2024 19:38:04 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=miMCDfTEk99kjZA/iq3tKBkY47MzQ24QITHB/sPM65M =; b=Lo5n72MzcMcHMEz31QFLMDK3Jse/XQsdE7yrPE0K/M0fzI5bxpM9XlSITzj dYy35XZ3bDCG51f1h9ikSgSzTR3jaKzjLgcs5tEFmQLovYCnXPJ9daFl7fRRveIE okFKpscoSnGRp+PftF9NGeusmgNoNH5T6AMSvaUNdHjzFoXD9GITNrYR5GZt/JjM jzAh9YQ2w7cEFM1AMJYVyj9DOAUvEec35fr/+nDFM3QdjJrUY6tR/GUsfxHJXdyt 6NX/L9KdXakkrrN9/4R/L3AnIqxonyaGvbOxsGt2w019xc6rFE6Ihrxp+X8xvELl JmcRF4ngOxoG6P22XBKUwkzkTXg== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh98m8hr9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:04 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc32H024616 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:03 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:02 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:37:59 -0800 Subject: [PATCH v16 21/34] virt: gunyah: Implement guestmemfd Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-21-634904bf4ce9@quicinc.com> References: 
<20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: 66cUBx-tajMGX41hKMj19ebFqRM3YAnm X-Proofpoint-GUID: 66cUBx-tajMGX41hKMj19ebFqRM3YAnm X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 impostorscore=0 spamscore=0 mlxscore=0 priorityscore=1501 phishscore=0 malwarescore=0 mlxlogscore=999 suspectscore=0 clxscore=1015 bulkscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Memory provided to Gunyah virtual machines are provided by a Gunyah guestmemfd. Because memory provided to virtual machines may be unmapped at stage-2 from the host (i.e. in the hypervisor's page tables for the host), special care needs to be taken to ensure that the kernel doesn't have a page mapped when it is lent to the guest. Without this tracking, a kernel panic could be induced by userspace tricking the kernel into accessing guest-private memory. Introduce the basic guestmemfd ops and ioctl. Userspace should be able to access the memory unless it is provided to the guest virtual machine: this is necessary to allow userspace to preload binaries such as the kernel Image prior to running the VM. Subsequent commits will wire up providing the memory to the guest. Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Makefile | 2 +- drivers/virt/gunyah/guest_memfd.c | 279 ++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/vm_mgr.c | 9 ++ drivers/virt/gunyah/vm_mgr.h | 2 + include/uapi/linux/gunyah.h | 19 +++ 5 files changed, 310 insertions(+), 1 deletion(-) diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index a6c6f29b887a..c4505fce177d 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 -gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o +gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o guest_memfd.o obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o obj-$(CONFIG_GUNYAH_PLATFORM_HOOKS) += gunyah_platform_hooks.o diff --git a/drivers/virt/gunyah/guest_memfd.c b/drivers/virt/gunyah/guest_memfd.c new file mode 100644 index 000000000000..73a3f1368081 --- /dev/null +++ b/drivers/virt/gunyah/guest_memfd.c @@ -0,0 +1,279 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023-2024 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#define pr_fmt(fmt) "gunyah_guest_mem: " fmt + +#include +#include +#include +#include +#include +#include + +#include + +#include "vm_mgr.h" + +static struct folio *gunyah_gmem_get_huge_folio(struct inode *inode, + pgoff_t index) +{ +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + unsigned long huge_index = round_down(index, HPAGE_PMD_NR); + unsigned long flags = (unsigned long)inode->i_private; + struct address_space *mapping = inode->i_mapping; + gfp_t gfp = mapping_gfp_mask(mapping); + struct folio *folio; + + if (!(flags & GHMF_ALLOW_HUGEPAGE)) + return NULL; + + if (filemap_range_has_page(mapping, huge_index << PAGE_SHIFT, + (huge_index + HPAGE_PMD_NR - 1) + << PAGE_SHIFT)) + return NULL; + + folio = filemap_alloc_folio(gfp, HPAGE_PMD_ORDER); + if (!folio) + return NULL; + + if (filemap_add_folio(mapping, folio, huge_index, gfp)) { + folio_put(folio); + return NULL; + } + + return folio; +#else + return NULL; +#endif +} + +static struct folio *gunyah_gmem_get_folio(struct inode *inode, pgoff_t index) +{ + struct folio *folio; + + folio = gunyah_gmem_get_huge_folio(inode, index); + if (!folio) { + folio = filemap_grab_folio(inode->i_mapping, index); + if (IS_ERR_OR_NULL(folio)) + return NULL; + } + + /* + * Use the up-to-date flag to track whether or not the memory has been + * zeroed before being handed off to the guest. There is no backing + * storage for the memory, so the folio will remain up-to-date until + * it's removed. + */ + if (!folio_test_uptodate(folio)) { + unsigned long nr_pages = folio_nr_pages(folio); + unsigned long i; + + for (i = 0; i < nr_pages; i++) + clear_highpage(folio_page(folio, i)); + + folio_mark_uptodate(folio); + } + + /* + * Ignore accessed, referenced, and dirty flags. The memory is + * unevictable and there is no storage to write back to. + */ + return folio; +} + +static vm_fault_t gunyah_gmem_host_fault(struct vm_fault *vmf) +{ + struct folio *folio; + + folio = gunyah_gmem_get_folio(file_inode(vmf->vma->vm_file), + vmf->pgoff); + if (!folio || folio_test_private(folio)) { + folio_unlock(folio); + folio_put(folio); + return VM_FAULT_SIGBUS; + } + + vmf->page = folio_file_page(folio, vmf->pgoff); + + return VM_FAULT_LOCKED; +} + +static const struct vm_operations_struct gunyah_gmem_vm_ops = { + .fault = gunyah_gmem_host_fault, +}; + +static int gunyah_gmem_mmap(struct file *file, struct vm_area_struct *vma) +{ + file_accessed(file); + vma->vm_ops = &gunyah_gmem_vm_ops; + return 0; +} + +/** + * gunyah_gmem_punch_hole() - try to reclaim a range of pages + * @inode: guest memfd inode + * @offset: Offset into memfd to start reclaim + * @len: length to reclaim + * + * Will try to unmap from virtual machines any folios covered by + * [offset, offset+len]. If unmapped, then tries to free those folios + * + * Returns - error code if any folio in the range couldn't be freed. + */ +static long gunyah_gmem_punch_hole(struct inode *inode, loff_t offset, + loff_t len) +{ + truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1); + + return 0; +} + +static long gunyah_gmem_allocate(struct inode *inode, loff_t offset, loff_t len) +{ + struct address_space *mapping = inode->i_mapping; + pgoff_t start, index, end; + int r; + + /* Dedicated guest is immutable by default. 
*/ + if (offset + len > i_size_read(inode)) + return -EINVAL; + + filemap_invalidate_lock_shared(mapping); + + start = offset >> PAGE_SHIFT; + end = (offset + len) >> PAGE_SHIFT; + + r = 0; + for (index = start; index < end;) { + struct folio *folio; + + if (signal_pending(current)) { + r = -EINTR; + break; + } + + folio = gunyah_gmem_get_folio(inode, index); + if (!folio) { + r = -ENOMEM; + break; + } + + index = folio_next_index(folio); + + folio_unlock(folio); + folio_put(folio); + + /* 64-bit only, wrapping the index should be impossible. */ + if (WARN_ON_ONCE(!index)) + break; + + cond_resched(); + } + + filemap_invalidate_unlock_shared(mapping); + + return r; +} + +static long gunyah_gmem_fallocate(struct file *file, int mode, loff_t offset, + loff_t len) +{ + long ret; + + if (!(mode & FALLOC_FL_KEEP_SIZE)) + return -EOPNOTSUPP; + + if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE | + FALLOC_FL_ZERO_RANGE)) + return -EOPNOTSUPP; + + if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len)) + return -EINVAL; + + if (mode & FALLOC_FL_PUNCH_HOLE) + ret = gunyah_gmem_punch_hole(file_inode(file), offset, len); + else + ret = gunyah_gmem_allocate(file_inode(file), offset, len); + + if (!ret) + file_modified(file); + return ret; +} + +static int gunyah_gmem_release(struct inode *inode, struct file *file) +{ + return 0; +} + +static const struct file_operations gunyah_gmem_fops = { + .owner = THIS_MODULE, + .llseek = generic_file_llseek, + .mmap = gunyah_gmem_mmap, + .open = generic_file_open, + .fallocate = gunyah_gmem_fallocate, + .release = gunyah_gmem_release, +}; + +static const struct address_space_operations gunyah_gmem_aops = { + .dirty_folio = noop_dirty_folio, + .migrate_folio = migrate_folio, + .error_remove_folio = generic_error_remove_folio, +}; + +int gunyah_guest_mem_create(struct gunyah_create_mem_args *args) +{ + const char *anon_name = "[gh-gmem]"; + unsigned long fd_flags = 0; + struct inode *inode; + struct file *file; + int fd, err; + + if (!PAGE_ALIGNED(args->size)) + return -EINVAL; + + if (args->flags & ~(GHMF_CLOEXEC | GHMF_ALLOW_HUGEPAGE)) + return -EINVAL; + + if (args->flags & GHMF_CLOEXEC) + fd_flags |= O_CLOEXEC; + + fd = get_unused_fd_flags(fd_flags); + if (fd < 0) + return fd; + + /* + * Use the so called "secure" variant, which creates a unique inode + * instead of reusing a single inode. Each guest_memfd instance needs + * its own inode to track the size, flags, etc. + */ + file = anon_inode_create_getfile(anon_name, &gunyah_gmem_fops, NULL, + O_RDWR, NULL); + if (IS_ERR(file)) { + err = PTR_ERR(file); + goto err_fd; + } + + file->f_flags |= O_LARGEFILE; + + inode = file->f_inode; + WARN_ON(file->f_mapping != inode->i_mapping); + + inode->i_private = (void *)(unsigned long)args->flags; + inode->i_mapping->a_ops = &gunyah_gmem_aops; + inode->i_mode |= S_IFREG; + inode->i_size = args->size; + mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER); + mapping_set_large_folios(inode->i_mapping); + mapping_set_unmovable(inode->i_mapping); + /* Unmovable mappings are supposed to be marked unevictable as well. 
*/ + WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping)); + + fd_install(fd, file); + return fd; + +err_fd: + put_unused_fd(fd); + return err; +} diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 26b6dce49970..33751d5cddd2 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -687,6 +687,15 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, switch (cmd) { case GUNYAH_CREATE_VM: return gunyah_dev_ioctl_create_vm(rm, arg); + case GUNYAH_CREATE_GUEST_MEM: { + struct gunyah_create_mem_args args; + + if (copy_from_user(&args, (const void __user *)arg, + sizeof(args))) + return -EFAULT; + + return gunyah_guest_mem_create(&args); + } default: return -ENOTTY; } diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index e500f6eb014e..055990842959 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -109,4 +109,6 @@ int gunyah_vm_provide_folio(struct gunyah_vm *ghvm, struct folio *folio, int gunyah_vm_reclaim_folio(struct gunyah_vm *ghvm, u64 gfn); int gunyah_vm_reclaim_range(struct gunyah_vm *ghvm, u64 gfn, u64 nr); +int gunyah_guest_mem_create(struct gunyah_create_mem_args *args); + #endif diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index 46f7d3aa61d0..c5f506350364 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -20,6 +20,25 @@ */ #define GUNYAH_CREATE_VM _IO(GUNYAH_IOCTL_TYPE, 0x0) /* Returns a Gunyah VM fd */ +enum gunyah_mem_flags { + GHMF_CLOEXEC = (1UL << 0), + GHMF_ALLOW_HUGEPAGE = (1UL << 1), +}; + +/** + * struct gunyah_create_mem_args - Description of guest memory to create + * @flags: See GHMF_*. + */ +struct gunyah_create_mem_args { + __u64 flags; + __u64 size; + __u64 reserved[6]; +}; + +#define GUNYAH_CREATE_GUEST_MEM \ + _IOW(GUNYAH_IOCTL_TYPE, 0x8, \ + struct gunyah_create_mem_args) /* Returns a Gunyah memory fd */ + /* * ioctls for gunyah-vm fds (returned by GUNYAH_CREATE_VM) */ From patchwork Tue Jan 9 19:38:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761082 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A430D481B4; Tue, 9 Jan 2024 19:38:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="m8NlPsuH" Received: from pps.filterd (m0279863.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409IocPs003466; Tue, 9 Jan 2024 19:38:04 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=J86hk5ZNAM4yco10H3q8d/nCtmnOiD8TudkIuvK3R54 =; b=m8NlPsuHzRxSYJTrWZTsm5UF20FIEirZ/YLEqL6Fveo7ObkscWnw7VdGTtk AlFpsxYSQg8+so5UojOofAtUKOs+Apmokcw8aGT5kugiXn6zbsF5d1yk2i+mHqop wyBaSQwF/bTtHbcnsy3NPqasl/rlP+hE32CH9Jb8kIs3wJbemymdkB3Zp1diqnNG eK4vqgc+OfCKKjfkxcORbhIVP65uiKpd4ML1tEpUTL/KoDUN26QnHjbGWl7zYjep dsHGen41zqM1JHuvPbDCDWG4dneNUv4Z3LGfQGmUw+arR4YNzw8SCh+Vo1fjph5L 
HbaH14JWpo7cashONpnSIRHKiJg== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9bmggd9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:04 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc3JS003494 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:03 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:02 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:00 -0800 Subject: [PATCH v16 22/34] virt: gunyah: Add ioctl to bind guestmem to VMs Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-22-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: crWTMvyVI_IKN5KiiZJuwdyyipi-0F48 X-Proofpoint-GUID: crWTMvyVI_IKN5KiiZJuwdyyipi-0F48 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 mlxscore=0 clxscore=1015 spamscore=0 priorityscore=1501 malwarescore=0 mlxlogscore=999 adultscore=0 bulkscore=0 suspectscore=0 lowpriorityscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 A maple tree is used to maintain a map from guest address ranges to a guestmemfd that provides the memory for that range of memory for the guest. The mapping of guest address range to guestmemfd is called a binding. Implement an ioctl to add/remove bindings to the virtual machine. The binding determines whether the memory is shared (host retains access) or lent (host loses access). 
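A binding is essentially a window: nr pages of the guest_memfd, starting at file page i_off, back the guest frames starting at gfn. The sketch below is not part of this patch; the example_ names and sample numbers are invented purely to spell out the offset arithmetic that the gunyah_gfn_to_off()/gunyah_off_to_gfn() helpers in this patch implement.

#include <linux/types.h>

/* Illustrative stand-in for the binding fields used by the translation. */
struct example_binding {
        u64 gfn;                /* first guest frame covered */
        pgoff_t i_off;          /* first guest_memfd page covered */
        unsigned long nr;       /* number of pages covered */
};

static u64 example_off_to_gfn(const struct example_binding *b, pgoff_t off)
{
        return off - b->i_off + b->gfn;
}

static pgoff_t example_gfn_to_off(const struct example_binding *b, u64 gfn)
{
        return gfn - b->gfn + b->i_off;
}

/*
 * With { .gfn = 0x80000, .i_off = 16, .nr = 256 }, file pages 16..271 back
 * guest frames 0x80000..0x800ff (guest physical 0x80000000 onward for 4K
 * pages); example_off_to_gfn(&b, 20) == 0x80004 and
 * example_gfn_to_off(&b, 0x80004) == 20.
 */

Whether the covered range is lent or shared comes from the binding's access flags; the default is to lend, since the resource manager requires VMs to be protected (isolated) from the host.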
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/guest_memfd.c | 394 +++++++++++++++++++++++++++++++++++++- drivers/virt/gunyah/vm_mgr.c | 20 ++ drivers/virt/gunyah/vm_mgr.h | 9 + include/uapi/linux/gunyah.h | 41 ++++ 4 files changed, 455 insertions(+), 9 deletions(-) diff --git a/drivers/virt/gunyah/guest_memfd.c b/drivers/virt/gunyah/guest_memfd.c index 73a3f1368081..71686f1946da 100644 --- a/drivers/virt/gunyah/guest_memfd.c +++ b/drivers/virt/gunyah/guest_memfd.c @@ -16,6 +16,51 @@ #include "vm_mgr.h" +/** + * struct gunyah_gmem_binding - Represents a binding of guestmem to a Gunyah VM + * @gfn: Guest address to place acquired folios + * @ghvm: Pointer to Gunyah VM in this binding + * @i_off: offset into the guestmem to grab folios from + * @file: Pointer to guest_memfd + * @i_entry: list entry for inode->i_private_list + * @flags: Access flags for the binding + * @nr: Number of pages covered by this binding + */ +struct gunyah_gmem_binding { + u64 gfn; + struct gunyah_vm *ghvm; + + pgoff_t i_off; + struct file *file; + struct list_head i_entry; + + u32 flags; + unsigned long nr; +}; + +static inline pgoff_t gunyah_gfn_to_off(struct gunyah_gmem_binding *b, u64 gfn) +{ + return gfn - b->gfn + b->i_off; +} + +static inline u64 gunyah_off_to_gfn(struct gunyah_gmem_binding *b, pgoff_t off) +{ + return off - b->i_off + b->gfn; +} + +static inline bool gunyah_guest_mem_is_lend(struct gunyah_vm *ghvm, u32 flags) +{ + u8 access = flags & GUNYAH_MEM_ACCESS_MASK; + + if (access == GUNYAH_MEM_FORCE_LEND) + return true; + else if (access == GUNYAH_MEM_FORCE_SHARE) + return false; + + /* RM requires all VMs to be protected (isolated) */ + return true; +} + static struct folio *gunyah_gmem_get_huge_folio(struct inode *inode, pgoff_t index) { @@ -83,17 +128,55 @@ static struct folio *gunyah_gmem_get_folio(struct inode *inode, pgoff_t index) return folio; } +/** + * gunyah_gmem_launder_folio() - Tries to unmap one folio from virtual machine(s) + * @folio: The folio to unmap + * + * Returns - 0 if the folio has been reclaimed from any virtual machine(s) that + * folio was mapped into. 
+ */ +static int gunyah_gmem_launder_folio(struct folio *folio) +{ + struct address_space *const mapping = folio->mapping; + struct gunyah_gmem_binding *b; + pgoff_t index = folio_index(folio); + int ret = 0; + u64 gfn; + + filemap_invalidate_lock_shared(mapping); + list_for_each_entry(b, &mapping->i_private_list, i_entry) { + /* if the mapping doesn't cover this folio: skip */ + if (b->i_off > index || index > b->i_off + b->nr) + continue; + + gfn = gunyah_off_to_gfn(b, index); + ret = gunyah_vm_reclaim_folio(b->ghvm, gfn); + if (WARN_RATELIMIT(ret, "failed to reclaim gfn: %08llx %d\n", + gfn, ret)) + break; + } + filemap_invalidate_unlock_shared(mapping); + + return ret; +} + static vm_fault_t gunyah_gmem_host_fault(struct vm_fault *vmf) { struct folio *folio; folio = gunyah_gmem_get_folio(file_inode(vmf->vma->vm_file), vmf->pgoff); - if (!folio || folio_test_private(folio)) { + if (!folio) + return VM_FAULT_SIGBUS; + + /* If the folio is lent to a VM, try to reclim it */ + if (folio_test_private(folio) && gunyah_gmem_launder_folio(folio)) { folio_unlock(folio); folio_put(folio); return VM_FAULT_SIGBUS; } + /* gunyah_gmem_launder_folio should clear the private bit if it returns 0 */ + BUG_ON(folio_test_private(folio)); vmf->page = folio_file_page(folio, vmf->pgoff); @@ -106,9 +189,36 @@ static const struct vm_operations_struct gunyah_gmem_vm_ops = { static int gunyah_gmem_mmap(struct file *file, struct vm_area_struct *vma) { - file_accessed(file); - vma->vm_ops = &gunyah_gmem_vm_ops; - return 0; + struct address_space *const mapping = file->f_mapping; + struct gunyah_gmem_binding *b; + int ret = 0; + u64 gfn, nr; + + filemap_invalidate_lock_shared(mapping); + list_for_each_entry(b, &mapping->i_private_list, i_entry) { + if (!gunyah_guest_mem_is_lend(b->ghvm, b->flags)) + continue; + + /* if the binding doesn't cover this vma: skip */ + if (vma->vm_pgoff + vma_pages(vma) < b->i_off) + continue; + if (vma->vm_pgoff > b->i_off + b->nr) + continue; + + gfn = gunyah_off_to_gfn(b, vma->vm_pgoff); + nr = gunyah_off_to_gfn(b, vma->vm_pgoff + vma_pages(vma)) - gfn; + ret = gunyah_vm_reclaim_range(b->ghvm, gfn, nr); + if (ret) + break; + } + filemap_invalidate_unlock_shared(mapping); + + if (!ret) { + file_accessed(file); + vma->vm_ops = &gunyah_gmem_vm_ops; + } + + return ret; } /** @@ -125,9 +235,7 @@ static int gunyah_gmem_mmap(struct file *file, struct vm_area_struct *vma) static long gunyah_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len) { - truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1); - - return 0; + return invalidate_inode_pages2_range(inode->i_mapping, offset, offset + len - 1); } static long gunyah_gmem_allocate(struct inode *inode, loff_t offset, loff_t len) @@ -204,6 +312,12 @@ static long gunyah_gmem_fallocate(struct file *file, int mode, loff_t offset, static int gunyah_gmem_release(struct inode *inode, struct file *file) { + /** + * each binding increments refcount on file, so we shouldn't be here + * if i_private_list not empty. 
+ */ + BUG_ON(!list_empty(&inode->i_mapping->i_private_list)); + return 0; } @@ -216,10 +330,26 @@ static const struct file_operations gunyah_gmem_fops = { .release = gunyah_gmem_release, }; +static bool gunyah_gmem_release_folio(struct folio *folio, gfp_t gfp_flags) +{ + /* should return true if released; launder folio returns 0 if freed */ + return !gunyah_gmem_launder_folio(folio); +} + +static int gunyah_gmem_remove_folio(struct address_space *mapping, + struct folio *folio) +{ + if (mapping != folio->mapping) + return -EINVAL; + + return gunyah_gmem_launder_folio(folio); +} + static const struct address_space_operations gunyah_gmem_aops = { .dirty_folio = noop_dirty_folio, - .migrate_folio = migrate_folio, - .error_remove_folio = generic_error_remove_folio, + .release_folio = gunyah_gmem_release_folio, + .launder_folio = gunyah_gmem_launder_folio, + .error_remove_folio = gunyah_gmem_remove_folio, }; int gunyah_guest_mem_create(struct gunyah_create_mem_args *args) @@ -267,6 +397,7 @@ int gunyah_guest_mem_create(struct gunyah_create_mem_args *args) mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER); mapping_set_large_folios(inode->i_mapping); mapping_set_unmovable(inode->i_mapping); + mapping_set_release_always(inode->i_mapping); /* Unmovable mappings are supposed to be marked unevictable as well. */ WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping)); @@ -277,3 +408,248 @@ int gunyah_guest_mem_create(struct gunyah_create_mem_args *args) put_unused_fd(fd); return err; } + +void gunyah_gmem_remove_binding(struct gunyah_gmem_binding *b) +{ + WARN_ON(gunyah_vm_reclaim_range(b->ghvm, b->gfn, b->nr)); + mtree_erase(&b->ghvm->bindings, b->gfn); + list_del(&b->i_entry); + fput(b->file); + kfree(b); +} + +static inline unsigned long gunyah_gmem_page_mask(struct file *file) +{ + unsigned long gmem_flags = (unsigned long)file_inode(file)->i_private; + + if (gmem_flags & GHMF_ALLOW_HUGEPAGE) { +#if IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) + return HPAGE_PMD_MASK; +#else + return ULONG_MAX; +#endif + } + + return PAGE_MASK; +} + +static int gunyah_gmem_init_binding(struct gunyah_vm *ghvm, struct file *file, + struct gunyah_map_mem_args *args, + struct gunyah_gmem_binding *binding) +{ + const unsigned long page_mask = ~gunyah_gmem_page_mask(file); + + if (args->flags & ~(GUNYAH_MEM_ALLOW_RWX | GUNYAH_MEM_ACCESS_MASK)) + return -EINVAL; + + if (args->guest_addr & page_mask) + return -EINVAL; + + if (args->offset & page_mask) + return -EINVAL; + + if (args->size & page_mask) + return -EINVAL; + + binding->gfn = gunyah_gpa_to_gfn(args->guest_addr); + binding->ghvm = ghvm; + binding->i_off = args->offset >> PAGE_SHIFT; + binding->file = file; + binding->flags = args->flags; + binding->nr = args->size >> PAGE_SHIFT; + + return 0; +} + +static int gunyah_gmem_trim_binding(struct gunyah_gmem_binding *b, + unsigned long start_delta, + unsigned long end_delta) +{ + struct gunyah_vm *ghvm = b->ghvm; + int ret; + + down_write(&ghvm->bindings_lock); + if (!start_delta && !end_delta) { + ret = gunyah_vm_reclaim_range(ghvm, b->gfn, b->nr); + if (ret) + goto unlock; + gunyah_gmem_remove_binding(b); + } else if (start_delta && !end_delta) { + /* shrink the beginning */ + ret = gunyah_vm_reclaim_range(ghvm, b->gfn, + b->gfn + start_delta); + if (ret) + goto unlock; + mtree_erase(&ghvm->bindings, b->gfn); + b->gfn += start_delta; + b->i_off += start_delta; + b->nr -= start_delta; + ret = mtree_insert_range(&ghvm->bindings, b->gfn, + b->gfn + b->nr - 1, b, GFP_KERNEL); + } else if (!start_delta && end_delta) { + /* 
Shrink the end */ + ret = gunyah_vm_reclaim_range(ghvm, b->gfn + b->nr - end_delta, + b->gfn + b->nr); + if (ret) + goto unlock; + mtree_erase(&ghvm->bindings, b->gfn); + b->nr -= end_delta; + ret = mtree_insert_range(&ghvm->bindings, b->gfn, + b->gfn + b->nr - 1, b, GFP_KERNEL); + } else { + /* TODO: split the mapping into 2 */ + ret = -EINVAL; + } + +unlock: + up_write(&ghvm->bindings_lock); + return ret; +} + +static int gunyah_gmem_remove_mapping(struct gunyah_vm *ghvm, struct file *file, + struct gunyah_map_mem_args *args) +{ + struct inode *inode = file_inode(file); + struct gunyah_gmem_binding *b = NULL; + unsigned long start_delta, end_delta; + struct gunyah_gmem_binding remove; + int ret; + + ret = gunyah_gmem_init_binding(ghvm, file, args, &remove); + if (ret) + return ret; + + ret = -ENOENT; + filemap_invalidate_lock(inode->i_mapping); + list_for_each_entry(b, &inode->i_mapping->i_private_list, i_entry) { + if (b->ghvm != remove.ghvm || b->flags != remove.flags || + WARN_ON(b->file != remove.file)) + continue; + /** + * Test if the binding to remove is within this binding + * [gfn b nr] + * [gfn remove nr] + */ + if (b->gfn > remove.gfn) + continue; + if (b->gfn + b->nr < remove.gfn + remove.nr) + continue; + + /** + * We found the binding! + * Compute the delta in gfn start and make sure the offset + * into guest memfd matches. + */ + start_delta = remove.gfn - b->gfn; + if (remove.i_off - b->i_off != start_delta) + continue; + end_delta = remove.gfn + remove.nr - b->gfn - b->nr; + + ret = gunyah_gmem_trim_binding(b, start_delta, end_delta); + break; + } + + filemap_invalidate_unlock(inode->i_mapping); + return ret; +} + +static bool gunyah_gmem_binding_allowed_overlap(struct gunyah_gmem_binding *a, + struct gunyah_gmem_binding *b) +{ + /* assumes we are operating on the same file, check to be sure */ + BUG_ON(a->file != b->file); + + /** + * Gunyah only guarantees we can share a page with one VM and + * doesn't (currently) allow us to share same page with multiple VMs, + * regardless whether host can also access. + * Gunyah supports, but Linux hasn't implemented mapping same page + * into 2 separate addresses in guest's address space. This doesn't + * seem reasonable today, but we could do it later. + * All this to justify: check that the `a` region doesn't overlap with + * `b` region w.r.t. file offsets. 
+ */ + if (a->i_off + a->nr < b->i_off) + return false; + if (a->i_off > b->i_off + b->nr) + return false; + + return true; +} + +static int gunyah_gmem_add_mapping(struct gunyah_vm *ghvm, struct file *file, + struct gunyah_map_mem_args *args) +{ + struct gunyah_gmem_binding *b, *tmp = NULL; + struct inode *inode = file_inode(file); + int ret; + + b = kzalloc(sizeof(*b), GFP_KERNEL); + if (!b) + return -ENOMEM; + + ret = gunyah_gmem_init_binding(ghvm, file, args, b); + if (ret) + return ret; + + filemap_invalidate_lock(inode->i_mapping); + list_for_each_entry(tmp, &inode->i_mapping->i_private_list, i_entry) { + if (!gunyah_gmem_binding_allowed_overlap(b, tmp)) { + ret = -EEXIST; + goto unlock; + } + } + + ret = mtree_insert_range(&ghvm->bindings, b->gfn, b->gfn + b->nr - 1, + b, GFP_KERNEL); + if (ret) + goto unlock; + + list_add(&b->i_entry, &inode->i_mapping->i_private_list); + +unlock: + filemap_invalidate_unlock(inode->i_mapping); + return ret; +} + +int gunyah_gmem_modify_mapping(struct gunyah_vm *ghvm, + struct gunyah_map_mem_args *args) +{ + u8 access = args->flags & GUNYAH_MEM_ACCESS_MASK; + struct file *file; + int ret = -EINVAL; + + file = fget(args->guest_mem_fd); + if (!file) + return -EINVAL; + + if (file->f_op != &gunyah_gmem_fops) + goto err_file; + + if (args->flags & ~(GUNYAH_MEM_ALLOW_RWX | GUNYAH_MEM_UNMAP | GUNYAH_MEM_ACCESS_MASK)) + goto err_file; + + /* VM needs to have some permissions to the memory */ + if (!(args->flags & GUNYAH_MEM_ALLOW_RWX)) + goto err_file; + + if (access != GUNYAH_MEM_DEFAULT_ACCESS && + access != GUNYAH_MEM_FORCE_LEND && access != GUNYAH_MEM_FORCE_SHARE) + goto err_file; + + if (!PAGE_ALIGNED(args->guest_addr) || !PAGE_ALIGNED(args->offset) || + !PAGE_ALIGNED(args->size)) + goto err_file; + + if (args->flags & GUNYAH_MEM_UNMAP) { + args->flags &= ~GUNYAH_MEM_UNMAP; + ret = gunyah_gmem_remove_mapping(ghvm, file, args); + } else { + ret = gunyah_gmem_add_mapping(ghvm, file, args); + } + +err_file: + if (ret) + fput(file); + return ret; +} diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 33751d5cddd2..be2061aa0a06 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -397,6 +397,8 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) INIT_LIST_HEAD(&ghvm->resource_tickets); mt_init(&ghvm->mm); + mt_init(&ghvm->bindings); + init_rwsem(&ghvm->bindings_lock); ghvm->addrspace_ticket.resource_type = GUNYAH_RESOURCE_TYPE_ADDR_SPACE; ghvm->addrspace_ticket.label = GUNYAH_VM_ADDRSPACE_LABEL; @@ -551,6 +553,14 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, r = gunyah_vm_rm_function_instance(ghvm, &f); break; } + case GUNYAH_VM_MAP_MEM: { + struct gunyah_map_mem_args args; + + if (copy_from_user(&args, argp, sizeof(args))) + return -EFAULT; + + return gunyah_gmem_modify_mapping(ghvm, &args); + } default: r = -ENOTTY; break; @@ -568,6 +578,8 @@ EXPORT_SYMBOL_GPL(gunyah_vm_get); static void _gunyah_vm_put(struct kref *kref) { struct gunyah_vm *ghvm = container_of(kref, struct gunyah_vm, kref); + struct gunyah_gmem_binding *b; + unsigned long idx = 0; int ret; /** @@ -579,6 +591,13 @@ static void _gunyah_vm_put(struct kref *kref) gunyah_vm_remove_functions(ghvm); + down_write(&ghvm->bindings_lock); + mt_for_each(&ghvm->bindings, b, idx, ULONG_MAX) { + gunyah_gmem_remove_binding(b); + } + up_write(&ghvm->bindings_lock); + WARN_ON(!mtree_empty(&ghvm->bindings)); + mtree_destroy(&ghvm->bindings); /** * If this fails, we're going to lose the memory for good and 
is * BUG_ON-worthy, but not unrecoverable (we just lose memory). @@ -606,6 +625,7 @@ static void _gunyah_vm_put(struct kref *kref) ghvm->vm_status == GUNYAH_RM_VM_STATUS_RESET); } + WARN_ON(!mtree_empty(&ghvm->mm)); mtree_destroy(&ghvm->mm); if (ghvm->vm_status > GUNYAH_RM_VM_STATUS_NO_STATE) { diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 055990842959..518d05eeb642 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -36,6 +36,9 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * @mm: A maple tree of all memory that has been mapped to a VM. * Indices are guest frame numbers; entries are either folios or * RM mem parcels + * @bindings: A maple tree of guest memfd bindings. Indices are guest frame + * numbers; entries are &struct gunyah_gmem_binding + * @bindings_lock: For serialization to @bindings * @addrspace_ticket: Resource ticket to the capability for guest VM's * address space * @host_private_extent_ticket: Resource ticket to the capability for our @@ -75,6 +78,8 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, struct gunyah_vm { u16 vmid; struct maple_tree mm; + struct maple_tree bindings; + struct rw_semaphore bindings_lock; struct gunyah_vm_resource_ticket addrspace_ticket, host_private_extent_ticket, host_shared_extent_ticket, guest_private_extent_ticket, guest_shared_extent_ticket; @@ -110,5 +115,9 @@ int gunyah_vm_reclaim_folio(struct gunyah_vm *ghvm, u64 gfn); int gunyah_vm_reclaim_range(struct gunyah_vm *ghvm, u64 gfn, u64 nr); int gunyah_guest_mem_create(struct gunyah_create_mem_args *args); +int gunyah_gmem_modify_mapping(struct gunyah_vm *ghvm, + struct gunyah_map_mem_args *args); +struct gunyah_gmem_binding; +void gunyah_gmem_remove_binding(struct gunyah_gmem_binding *binding); #endif diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index c5f506350364..1af4c5ae6bc3 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -87,6 +87,47 @@ struct gunyah_fn_desc { #define GUNYAH_VM_ADD_FUNCTION _IOW(GUNYAH_IOCTL_TYPE, 0x4, struct gunyah_fn_desc) #define GUNYAH_VM_REMOVE_FUNCTION _IOW(GUNYAH_IOCTL_TYPE, 0x7, struct gunyah_fn_desc) +/** + * enum gunyah_map_flags- Possible flags on &struct gunyah_map_mem_args + * @GUNYAH_MEM_DEFAULT_SHARE: Use default host access for the VM type + * @GUNYAH_MEM_FORCE_LEND: Force unmapping the memory once the guest starts to use + * @GUNYAH_MEM_FORCE_SHARE: Allow host to continue accessing memory when guest starts to use + * @GUNYAH_MEM_ALLOW_READ: Allow guest to read memory + * @GUNYAH_MEM_ALLOW_WRITE: Allow guest to write to the memory + * @GUNYAH_MEM_ALLOW_EXEC: Allow guest to execute instructions in the memory + */ +enum gunyah_map_flags { + GUNYAH_MEM_DEFAULT_ACCESS = 0, + GUNYAH_MEM_FORCE_LEND = 1, + GUNYAH_MEM_FORCE_SHARE = 2, +#define GUNYAH_MEM_ACCESS_MASK 0x7 + + GUNYAH_MEM_ALLOW_READ = 1UL << 4, + GUNYAH_MEM_ALLOW_WRITE = 1UL << 5, + GUNYAH_MEM_ALLOW_EXEC = 1UL << 6, + GUNYAH_MEM_ALLOW_RWX = + (GUNYAH_MEM_ALLOW_READ | GUNYAH_MEM_ALLOW_WRITE | GUNYAH_MEM_ALLOW_EXEC), + + GUNYAH_MEM_UNMAP = 1UL << 8, +}; + +/** + * struct gunyah_map_mem_args - Description to provide guest memory into a VM + * @guest_addr: Location in guest address space to place the memory + * @flags: See &enum gunyah_map_flags. 
+ * @guest_mem_fd: File descriptor created by GUNYAH_CREATE_GUEST_MEM + * @offset: Offset into the guest memory file + */ +struct gunyah_map_mem_args { + __u64 guest_addr; + __u32 flags; + __u32 guest_mem_fd; + __u64 offset; + __u64 size; +}; + +#define GUNYAH_VM_MAP_MEM _IOW(GUNYAH_IOCTL_TYPE, 0x9, struct gunyah_map_mem_args) + /* * ioctls for vCPU fds */ From patchwork Tue Jan 9 19:38:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761560 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2A3A6481D5; Tue, 9 Jan 2024 19:38:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="iM1PMyMP" Received: from pps.filterd (m0279873.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409DRiAN016723; Tue, 9 Jan 2024 19:38:05 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=TnTPpzUDBWIYZLcHE8dxPtRIMqaLPUN2IK1SZ81q1vU =; b=iM1PMyMPrfgGq7LwkLipapCuBEO88AtOhN+uO8TjnDt5kr8v4PX+sZKPacv S2FVlPAu6WMphynffHZFe02BbDtNyxhIBkvvQBNYEqgLNLsF/BnkY04RI8+WE+MX IIFN1DOGAbDnI6QZgnJWZ5YKScN+GWB6j3g21qiiZaDm/hos6+62F7qXZmj9mE4u HSlyLdD/pgNXvmzZEa1H4kfgKfyXo7dwnPaLk6OWKRRtVoskH8vcoNavWCCO0y7K bYDcWzG5qjPECJDsCSSte8ubav2S3PLDkL4S2NhKU4QkHIwOJNpLrJymr8NL58fg ipODtktStLltwSX6v4wB+2nptdw== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh3g699py-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:05 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc4qw030540 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:04 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:03 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:01 -0800 Subject: [PATCH v16 23/34] virt: gunyah: guestmem: Initialize RM mem parcels from guestmem Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-23-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot 
Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: UAaQhQuWqi5aWtBsqKdQLmvnLI__Nw8A X-Proofpoint-ORIG-GUID: UAaQhQuWqi5aWtBsqKdQLmvnLI__Nw8A X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 mlxscore=0 bulkscore=0 spamscore=0 malwarescore=0 phishscore=0 clxscore=1015 impostorscore=0 suspectscore=0 lowpriorityscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Gunyah Resource Manager sets up a virtual machine based on a device tree which lives in guest memory. Resource manager requires this memory to be provided as a memory parcel for it to read and manipulate. Implement a function to construct a memory parcel from a guestmem binding. Signed-off-by: Elliot Berman --- drivers/virt/gunyah/guest_memfd.c | 190 ++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/vm_mgr.h | 6 ++ 2 files changed, 196 insertions(+) diff --git a/drivers/virt/gunyah/guest_memfd.c b/drivers/virt/gunyah/guest_memfd.c index 71686f1946da..5eeac6ac451e 100644 --- a/drivers/virt/gunyah/guest_memfd.c +++ b/drivers/virt/gunyah/guest_memfd.c @@ -653,3 +653,193 @@ int gunyah_gmem_modify_mapping(struct gunyah_vm *ghvm, fput(file); return ret; } + +int gunyah_gmem_share_parcel(struct gunyah_vm *ghvm, struct gunyah_rm_mem_parcel *parcel, + u64 *gfn, u64 *nr) +{ + struct folio *folio, *prev_folio; + unsigned long nr_entries, i, j, start, end; + struct gunyah_gmem_binding *b; + bool lend; + int ret; + + parcel->mem_handle = GUNYAH_MEM_HANDLE_INVAL; + + if (!*nr) + return -EINVAL; + + + down_read(&ghvm->bindings_lock); + b = mtree_load(&ghvm->bindings, *gfn); + if (!b || *gfn > b->gfn + b->nr || *gfn < b->gfn) { + ret = -ENOENT; + goto unlock; + } + + /** + * Generally, indices can be based on gfn, guest_memfd offset, or + * offset into binding. start and end are based on offset into binding. + */ + start = *gfn - b->gfn; + + if (start + *nr > b->nr) { + ret = -ENOENT; + goto unlock; + } + + end = start + *nr; + lend = parcel->n_acl_entries == 1 || gunyah_guest_mem_is_lend(ghvm, b->flags); + + /** + * First, calculate the number of physically discontiguous regions + * the parcel covers. Each memory entry corresponds to one folio. + * In future, each memory entry could correspond to contiguous + * folios that are also adjacent in guest_memfd, but parcels + * are only being used for small amounts of memory for now, so + * this optimization is premature. + */ + nr_entries = 0; + prev_folio = NULL; + for (i = start + b->i_off; i < end + b->i_off;) { + folio = gunyah_gmem_get_folio(file_inode(b->file), i); /* A */ + if (!folio) { + ret = -ENOMEM; + goto out; + } + + nr_entries++; + i = folio_index(folio) + folio_nr_pages(folio); + } + end = i - b->i_off; + + parcel->mem_entries = + kcalloc(nr_entries, sizeof(*parcel->mem_entries), GFP_KERNEL); + if (!parcel->mem_entries) { + ret = -ENOMEM; + goto out; + } + + /** + * Walk through all the folios again, now filling the mem_entries array. 
+ */ + j = 0; + prev_folio = NULL; + for (i = start + b->i_off; i < end + b->i_off; j++) { + folio = filemap_get_folio(file_inode(b->file)->i_mapping, i); /* B */ + if (WARN_ON(IS_ERR(folio))) { + ret = PTR_ERR(folio); + i = end + b->i_off; + goto out; + } + + if (lend) + folio_set_private(folio); + + parcel->mem_entries[j].size = cpu_to_le64(folio_size(folio)); + parcel->mem_entries[j].phys_addr = cpu_to_le64(PFN_PHYS(folio_pfn(folio))); + i = folio_index(folio) + folio_nr_pages(folio); + folio_put(folio); /* B */ + } + BUG_ON(j != nr_entries); + parcel->n_mem_entries = nr_entries; + + if (lend) + parcel->n_acl_entries = 1; + + parcel->acl_entries = kcalloc(parcel->n_acl_entries, + sizeof(*parcel->acl_entries), GFP_KERNEL); + if (!parcel->n_acl_entries) { + ret = -ENOMEM; + goto free_entries; + } + + parcel->acl_entries[0].vmid = cpu_to_le16(ghvm->vmid); + if (b->flags & GUNYAH_MEM_ALLOW_READ) + parcel->acl_entries[0].perms |= GUNYAH_RM_ACL_R; + if (b->flags & GUNYAH_MEM_ALLOW_WRITE) + parcel->acl_entries[0].perms |= GUNYAH_RM_ACL_W; + if (b->flags & GUNYAH_MEM_ALLOW_EXEC) + parcel->acl_entries[0].perms |= GUNYAH_RM_ACL_X; + + if (!lend) { + u16 host_vmid; + + ret = gunyah_rm_get_vmid(ghvm->rm, &host_vmid); + if (ret) + goto free_acl; + + parcel->acl_entries[1].vmid = cpu_to_le16(host_vmid); + parcel->acl_entries[1].perms = GUNYAH_RM_ACL_R | GUNYAH_RM_ACL_W | GUNYAH_RM_ACL_X; + } + + parcel->mem_handle = GUNYAH_MEM_HANDLE_INVAL; + folio = filemap_get_folio(file_inode(b->file)->i_mapping, start); /* C */ + *gfn = folio_index(folio) - b->i_off + b->gfn; + *nr = end - (folio_index(folio) - b->i_off); + folio_put(folio); /* C */ + + ret = gunyah_rm_mem_share(ghvm->rm, parcel); + goto out; +free_acl: + kfree(parcel->acl_entries); + parcel->acl_entries = NULL; +free_entries: + kfree(parcel->mem_entries); + parcel->mem_entries = NULL; + parcel->n_mem_entries = 0; +out: + /* unlock the folios */ + for (j = start + b->i_off; j < i;) { + folio = filemap_get_folio(file_inode(b->file)->i_mapping, j); /* D */ + if (IS_ERR(folio)) + continue; + j = folio_index(folio) + folio_nr_pages(folio); + folio_unlock(folio); /* A */ + folio_put(folio); /* D */ + if (ret) + folio_put(folio); /* A */ + /* matching folio_put for A is done at + * (1) gunyah_gmem_reclaim_parcel or + * (2) after gunyah_gmem_parcel_to_paged, gunyah_vm_reclaim_folio + */ + } +unlock: + up_read(&ghvm->bindings_lock); + return ret; +} + +int gunyah_gmem_reclaim_parcel(struct gunyah_vm *ghvm, + struct gunyah_rm_mem_parcel *parcel, u64 gfn, + u64 nr) +{ + struct gunyah_rm_mem_entry *entry; + struct folio *folio; + pgoff_t i; + int ret; + + if (parcel->mem_handle != GUNYAH_MEM_HANDLE_INVAL) { + ret = gunyah_rm_mem_reclaim(ghvm->rm, parcel); + if (ret) { + dev_err(ghvm->parent, "Failed to reclaim parcel: %d\n", + ret); + /* We can't reclaim the pages -- hold onto the pages + * forever because we don't know what state the memory + * is in + */ + return ret; + } + parcel->mem_handle = GUNYAH_MEM_HANDLE_INVAL; + + for (i = 0; i < parcel->n_mem_entries; i++) { + entry = &parcel->mem_entries[i]; + + folio = pfn_folio(PHYS_PFN(le64_to_cpu(entry->phys_addr))); + folio_put(folio); /* A */ + } + + kfree(parcel->mem_entries); + kfree(parcel->acl_entries); + } + + return 0; +} diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 518d05eeb642..a79c11f1c3a5 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -119,5 +119,11 @@ int gunyah_gmem_modify_mapping(struct gunyah_vm *ghvm, struct 
gunyah_map_mem_args *args); struct gunyah_gmem_binding; void gunyah_gmem_remove_binding(struct gunyah_gmem_binding *binding); +int gunyah_gmem_share_parcel(struct gunyah_vm *ghvm, + struct gunyah_rm_mem_parcel *parcel, u64 *gfn, + u64 *nr); +int gunyah_gmem_reclaim_parcel(struct gunyah_vm *ghvm, + struct gunyah_rm_mem_parcel *parcel, u64 gfn, + u64 nr); #endif From patchwork Tue Jan 9 19:38:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761086 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 40381446D0; Tue, 9 Jan 2024 19:38:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="JsCn3DAO" Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409Gxr33016339; Tue, 9 Jan 2024 19:38:06 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=+Qmhckol3LBZQwJot12EAf5bVy2Rf/uHyT+4To7C3J0 =; b=JsCn3DAOL5dGuQbg9AbPFtybTs8AbU3IU2fM3psZaVJd0bp4eXfrdU2bh5F lxw5tC2mrLFKnWXpebotu98uqHEyO28xAVT9DcwzD1HByxVeCsTcn158gnjTsqPI fO/kAuTTIY0qLBYlspE2V740EKQb8L9O4Z+Ln1iP35goGy0SAPdXxBc3a+Zgkndi 8Nd1mk0nUBzI0bpD4aZ2OFndYd5KgxMeV0Q/orT+c35hx31z/eYBDxi3Sp0kejxy lWsY9Xcr5DXWB39cvJg+5T91iDLwf9SSTcV1SWGNUrdb9Si5PYM2xfgRt+GOeoLW 32vY+VLeVYgS3dzO5Z0vR5rmJNA== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh85t0pnu-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:06 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc5Co030549 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:05 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:04 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:02 -0800 Subject: [PATCH v16 24/34] virt: gunyah: Share guest VM dtb configuration to Gunyah Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-24-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman 
X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: vUg_t-T6NwaXapDllJE2l2cgFSriSpBt X-Proofpoint-GUID: vUg_t-T6NwaXapDllJE2l2cgFSriSpBt X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 bulkscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 clxscore=1015 priorityscore=1501 adultscore=0 impostorscore=0 phishscore=0 spamscore=0 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Gunyah Resource Manager sets up a virtual machine based on a device tree which lives in guest memory. Resource manager requires this memory to be provided as a memory parcel for it to read and manipulate. Construct a memory parcel, lend it to the virtual machine, and inform resource manager about the device tree location (the memory parcel ID and offset into the memory parcel). Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 49 ++++++++++++++++++++++++++++++++++++++++++-- drivers/virt/gunyah/vm_mgr.h | 13 +++++++++++- include/uapi/linux/gunyah.h | 14 +++++++++++++ 3 files changed, 73 insertions(+), 3 deletions(-) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index be2061aa0a06..4379b5ba151e 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -445,8 +445,27 @@ static int gunyah_vm_start(struct gunyah_vm *ghvm) ghvm->vmid = ret; ghvm->vm_status = GUNYAH_RM_VM_STATUS_LOAD; - ret = gunyah_rm_vm_configure(ghvm->rm, ghvm->vmid, ghvm->auth, 0, 0, 0, - 0, 0); + ghvm->dtb.parcel_start = ghvm->dtb.config.guest_phys_addr >> PAGE_SHIFT; + ghvm->dtb.parcel_pages = ghvm->dtb.config.size >> PAGE_SHIFT; + /* RM requires the DTB parcel to be lent to guard against malicious + * modifications while starting VM. Force it so. 
+ */ + ghvm->dtb.parcel.n_acl_entries = 1; + ret = gunyah_gmem_share_parcel(ghvm, &ghvm->dtb.parcel, + &ghvm->dtb.parcel_start, + &ghvm->dtb.parcel_pages); + if (ret) { + dev_warn(ghvm->parent, + "Failed to allocate parcel for DTB: %d\n", ret); + goto err; + } + + ret = gunyah_rm_vm_configure(ghvm->rm, ghvm->vmid, ghvm->auth, + ghvm->dtb.parcel.mem_handle, 0, 0, + ghvm->dtb.config.guest_phys_addr - + (ghvm->dtb.parcel_start + << PAGE_SHIFT), + ghvm->dtb.config.size); if (ret) { dev_warn(ghvm->parent, "Failed to configure VM: %d\n", ret); goto err; @@ -485,6 +504,8 @@ static int gunyah_vm_start(struct gunyah_vm *ghvm) goto err; } + WARN_ON(gunyah_vm_parcel_to_paged(ghvm, &ghvm->dtb.parcel, ghvm->dtb.parcel_start, ghvm->dtb.parcel_pages)); + ghvm->vm_status = GUNYAH_RM_VM_STATUS_RUNNING; up_write(&ghvm->status_lock); return ret; @@ -531,6 +552,21 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, long r; switch (cmd) { + case GUNYAH_VM_SET_DTB_CONFIG: { + struct gunyah_vm_dtb_config dtb_config; + + if (copy_from_user(&dtb_config, argp, sizeof(dtb_config))) + return -EFAULT; + + if (overflows_type(dtb_config.guest_phys_addr + dtb_config.size, + u64)) + return -EOVERFLOW; + + ghvm->dtb.config = dtb_config; + + r = 0; + break; + } case GUNYAH_VM_START: { r = gunyah_vm_ensure_started(ghvm); break; @@ -589,6 +625,15 @@ static void _gunyah_vm_put(struct kref *kref) if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_RUNNING) gunyah_vm_stop(ghvm); + if (ghvm->vm_status == GUNYAH_RM_VM_STATUS_LOAD) { + ret = gunyah_gmem_reclaim_parcel(ghvm, &ghvm->dtb.parcel, + ghvm->dtb.parcel_start, + ghvm->dtb.parcel_pages); + if (ret) + dev_err(ghvm->parent, + "Failed to reclaim DTB parcel: %d\n", ret); + } + gunyah_vm_remove_functions(ghvm); down_write(&ghvm->bindings_lock); diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index a79c11f1c3a5..b2ab2f1bda3a 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -72,6 +72,13 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * @resource_tickets: List of &struct gunyah_vm_resource_ticket * @auth: Authentication mechanism to be used by resource manager when * launching the VM + * @dtb: For tracking dtb configuration when launching the VM + * @dtb.config: Location of the DTB in the guest memory + * @dtb.parcel_start: Guest frame number where the memory parcel that we lent to + * VM (DTB could start in middle of folio; we lend entire + * folio; parcel_start is start of the folio) + * @dtb.parcel_pages: Number of pages lent for the memory parcel + * @dtb.parcel: Data for resource manager to lend the parcel * * Members are grouped by hot path. */ @@ -101,7 +108,11 @@ struct gunyah_vm { struct device *parent; enum gunyah_rm_vm_auth_mechanism auth; - + struct { + struct gunyah_vm_dtb_config config; + u64 parcel_start, parcel_pages; + struct gunyah_rm_mem_parcel parcel; + } dtb; }; int gunyah_vm_parcel_to_paged(struct gunyah_vm *ghvm, diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index 1af4c5ae6bc3..a89d9bedf3e5 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -42,6 +42,20 @@ struct gunyah_create_mem_args { /* * ioctls for gunyah-vm fds (returned by GUNYAH_CREATE_VM) */ + +/** + * struct gunyah_vm_dtb_config - Set the location of the VM's devicetree blob + * @guest_phys_addr: Address of the VM's devicetree in guest memory. + * @size: Maximum size of the devicetree including space for overlays. 
+ * Resource manager applies an overlay to the DTB and dtb_size should + * include room for the overlay. A page of memory is typicaly plenty. + */ +struct gunyah_vm_dtb_config { + __u64 guest_phys_addr; + __u64 size; +}; +#define GUNYAH_VM_SET_DTB_CONFIG _IOW(GUNYAH_IOCTL_TYPE, 0x2, struct gunyah_vm_dtb_config) + #define GUNYAH_VM_START _IO(GUNYAH_IOCTL_TYPE, 0x3) /** From patchwork Tue Jan 9 19:38:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761565 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B064A4123A; Tue, 9 Jan 2024 19:38:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="Oy/+qHMk" Received: from pps.filterd (m0279868.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409IcWLY010749; Tue, 9 Jan 2024 19:38:07 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=6QetUZhknvq/zrDatLYqM2ZHXQHjhsAEkBKq0hM3hhM =; b=Oy/+qHMkiIlkN+857CH/9m5TyyxOqNi7PSPz8svGUFuyIX17FCNwIxrf8Ao VmEj1gLQbUia9Gko1NUNsT5rxQMqy3n9+p1VZw4AauKmnGz8U0kIMhe6dGC4lxCR J/iBFT7yqHbOtBRgipY0SUgEoGB4h2WAQCuFTmoUUtVOh/BonodWhBmnngqmgPev oZDGLoJbFCfY4dBzVuBnIpDIBzuAiduMVjG56nqDDLHwuAjSEu3CXQbpj6nh27l4 j8m35lLI+WDoP7UOxA58ulLdT3PywtOaWWkcb+D3bFk6KTAxgE0JwhYRkHoRhMln 2IweesuUjwOuwz99fMjzn1/jdlA== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9ta0dxp-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:06 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc5JL012019 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:05 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:04 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:03 -0800 Subject: [PATCH v16 25/34] gunyah: rsc_mgr: Add RPC to enable demand paging Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-25-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman 
X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: qHVZfw7yNHI_rsHZNWwtCx4ZyRxoAyya X-Proofpoint-ORIG-GUID: qHVZfw7yNHI_rsHZNWwtCx4ZyRxoAyya X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 impostorscore=0 malwarescore=0 mlxscore=0 adultscore=0 mlxlogscore=770 bulkscore=0 priorityscore=1501 lowpriorityscore=0 clxscore=1015 phishscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Add Gunyah Resource Manager RPC to enable demand paging for a virtual machine. Resource manager needs to be informed of private memory regions which will be demand paged and the location where the DTB memory parcel should live in the guest's address space. Signed-off-by: Elliot Berman --- drivers/virt/gunyah/rsc_mgr.h | 12 +++++++ drivers/virt/gunyah/rsc_mgr_rpc.c | 71 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 83 insertions(+) diff --git a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h index 68d08d3cff02..99c2db18579c 100644 --- a/drivers/virt/gunyah/rsc_mgr.h +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -108,6 +108,18 @@ int gunyah_rm_get_hyp_resources(struct gunyah_rm *rm, u16 vmid, int gunyah_rm_get_vmid(struct gunyah_rm *rm, u16 *vmid); +int gunyah_rm_vm_set_demand_paging(struct gunyah_rm *rm, u16 vmid, u32 count, + struct gunyah_rm_mem_entry *mem_entries); + +enum gunyah_rm_range_id { + GUNYAH_RM_RANGE_ID_IMAGE = 0, + GUNYAH_RM_RANGE_ID_FIRMWARE = 1, +}; + +int gunyah_rm_vm_set_address_layout(struct gunyah_rm *rm, u16 vmid, + enum gunyah_rm_range_id range_id, + u64 base_address, u64 size); + struct gunyah_resource * gunyah_rm_alloc_resource(struct gunyah_rm *rm, struct gunyah_rm_hyp_resource *hyp_resource); diff --git a/drivers/virt/gunyah/rsc_mgr_rpc.c b/drivers/virt/gunyah/rsc_mgr_rpc.c index 0d78613827b5..f4e396fd0d47 100644 --- a/drivers/virt/gunyah/rsc_mgr_rpc.c +++ b/drivers/virt/gunyah/rsc_mgr_rpc.c @@ -22,6 +22,8 @@ #define GUNYAH_RM_RPC_VM_INIT 0x5600000B #define GUNYAH_RM_RPC_VM_GET_HYP_RESOURCES 0x56000020 #define GUNYAH_RM_RPC_VM_GET_VMID 0x56000024 +#define GUNYAH_RM_RPC_VM_SET_DEMAND_PAGING 0x56000033 +#define GUNYAH_RM_RPC_VM_SET_ADDRESS_LAYOUT 0x56000034 /* clang-format on */ struct gunyah_rm_vm_common_vmid_req { @@ -100,6 +102,23 @@ struct gunyah_rm_vm_config_image_req { __le64 dtb_size; } __packed; +/* Call: VM_SET_DEMAND_PAGING */ +struct gunyah_rm_vm_set_demand_paging_req { + __le16 vmid; + __le16 _padding; + __le32 range_count; + DECLARE_FLEX_ARRAY(struct gunyah_rm_mem_entry, ranges); +} __packed; + +/* Call: VM_SET_ADDRESS_LAYOUT */ +struct gunyah_rm_vm_set_address_layout_req { + __le16 vmid; + __le16 _padding; + __le32 range_id; + __le64 range_base; + __le64 range_size; +} __packed; + /* * Several RM calls take only a VMID as a parameter and give only standard * response back. Deduplicate boilerplate code by using this common call. 
@@ -481,3 +500,55 @@ int gunyah_rm_get_vmid(struct gunyah_rm *rm, u16 *vmid) return ret; } EXPORT_SYMBOL_GPL(gunyah_rm_get_vmid); + +/** + * gunyah_rm_vm_set_demand_paging() - Enable demand paging of memory regions + * @rm: Handle to a Gunyah resource manager + * @vmid: VMID of the other VM + * @count: Number of demand paged memory regions + * @entries: Array of the regions + */ +int gunyah_rm_vm_set_demand_paging(struct gunyah_rm *rm, u16 vmid, u32 count, + struct gunyah_rm_mem_entry *entries) +{ + struct gunyah_rm_vm_set_demand_paging_req *req __free(kfree) = NULL; + size_t req_size; + + req_size = struct_size(req, ranges, count); + if (req_size == SIZE_MAX) + return -EINVAL; + + req = kzalloc(req_size, GFP_KERNEL); + if (!req) + return -ENOMEM; + + req->vmid = cpu_to_le16(vmid); + req->range_count = cpu_to_le32(count); + memcpy(req->ranges, entries, sizeof(*entries) * count); + + return gunyah_rm_call(rm, GUNYAH_RM_RPC_VM_SET_DEMAND_PAGING, req, + req_size, NULL, NULL); +} + +/** + * gunyah_rm_vm_set_address_layout() - Set the start address of images + * @rm: Handle to a Gunyah resource manager + * @vmid: VMID of the other VM + * @range_id: Which image to set + * @base_address: Base address + * @size: Size + */ +int gunyah_rm_vm_set_address_layout(struct gunyah_rm *rm, u16 vmid, + enum gunyah_rm_range_id range_id, + u64 base_address, u64 size) +{ + struct gunyah_rm_vm_set_address_layout_req req = { + .vmid = cpu_to_le16(vmid), + .range_id = cpu_to_le32(range_id), + .range_base = cpu_to_le64(base_address), + .range_size = cpu_to_le64(size), + }; + + return gunyah_rm_call(rm, GUNYAH_RM_RPC_VM_SET_ADDRESS_LAYOUT, &req, + sizeof(req), NULL, NULL); +} From patchwork Tue Jan 9 19:38:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761084 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A38A647A5C; Tue, 9 Jan 2024 19:38:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="AiafqSiO" Received: from pps.filterd (m0279871.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409HCxfL023212; Tue, 9 Jan 2024 19:38:07 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=20kzljP3XXAHh/+L/ahQqYgdL0v1bIu2bkQnV2tNRHs =; b=AiafqSiO0HLbkygtBVeLwn7mIJx2BHAqg5lrO/Svn163jQjHcOC+xb+nDCU esyZXhvHXB+EesuvfQpViRbZglB1ZkjSAI6NwKbyZMXejIqF1jU1hCC2z2tKdFE6 0fpLOKx1un1Yv2B/InIyqai+BVTvbqaCN+hpyR3SoCAAT7GMVx5MjkJwcoZxTOeL aezOfD8xmns8GLptJ2YxwyKx3dz/jXSJrinbMVNNM6qUhjDS8Ayf0pyobLMA6wHp rEzS5vPq7faeJMINDxn2XwhJyEkGhTOfdUhw827Bv+sUYFXBlE4W+LIXCl6/xuct KMdyls6U815cKX5NN/fsHp+m7Fg== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh98m8hrr-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:07 +0000 (GMT) Received: from 
nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc6SY003533 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:06 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:05 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:04 -0800 Subject: [PATCH v16 26/34] mm/interval_tree: Export iter_first/iter_next Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-26-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: PfQLvqRZVKjSZRqkcN_pgu1hNdjTZdHL X-Proofpoint-GUID: PfQLvqRZVKjSZRqkcN_pgu1hNdjTZdHL X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 impostorscore=0 spamscore=0 mlxscore=0 priorityscore=1501 phishscore=0 malwarescore=0 mlxlogscore=819 suspectscore=0 clxscore=1015 bulkscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Export vma_interval_tree_iter_first and vma_interval_tree_iter_next for vma_interval_tree_foreach. 
Signed-off-by: Elliot Berman --- mm/interval_tree.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/mm/interval_tree.c b/mm/interval_tree.c index 32e390c42c53..faa50767496c 100644 --- a/mm/interval_tree.c +++ b/mm/interval_tree.c @@ -24,6 +24,9 @@ INTERVAL_TREE_DEFINE(struct vm_area_struct, shared.rb, unsigned long, shared.rb_subtree_last, vma_start_pgoff, vma_last_pgoff, /* empty */, vma_interval_tree) +EXPORT_SYMBOL_GPL(vma_interval_tree_iter_first); +EXPORT_SYMBOL_GPL(vma_interval_tree_iter_next); + /* Insert node immediately after prev in the interval tree */ void vma_interval_tree_insert_after(struct vm_area_struct *node, struct vm_area_struct *prev, From patchwork Tue Jan 9 19:38:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761563 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 71C3E45BE0; Tue, 9 Jan 2024 19:38:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="N9WJqtBu" Received: from pps.filterd (m0279870.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409GTKRk006618; Tue, 9 Jan 2024 19:38:08 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=xmkTmjNsdPCs10aYcSn7XO86TvYDKbjp9vja8YDUKN0 =; b=N9WJqtBujwwUhLG/HBE5Z/UFBG1ofvojUfp3iKfVzqKaxm/kS8nZQuIPWFi vOwR3KVMHLY4s0bMG/nyHrPGegjtSCEkq4Rsc2S1fNFNUNfvOwM4FeeWyrYVEfD5 CmmDFLUggrFTzFcdp1pjtuijkItvKcbfhSWXQ6eI2aC5csVZvr0vyPhsqW5/6R9u GItgPaoqr6Vb55R960tUsBD3LQ0E08l5WdYZ80Ztks6ewnjQPECy7t0lKLj/nzMx CbsMkO8iGaux+Hw/lr0oEhe1eCnI0D+qS/sVcHAZY69KbIMZ4+b4Juw7IXJrbO80 oVy9SkDPMumDF8zffBpw3D+Q10w== Received: from nasanppmta03.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9q70egv-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:08 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc7lf011554 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:07 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:06 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:05 -0800 Subject: [PATCH v16 27/34] virt: gunyah: Enable demand paging Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-27-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van 
Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: WPtW2STFLzqb5RdGaoN_fa6pORGkIgFJ X-Proofpoint-GUID: WPtW2STFLzqb5RdGaoN_fa6pORGkIgFJ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 adultscore=0 clxscore=1015 impostorscore=0 suspectscore=0 bulkscore=0 mlxlogscore=766 malwarescore=0 lowpriorityscore=0 priorityscore=1501 mlxscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Tell resource manager to enable demand paging and wire vCPU faults to provide the backing folio when a guestmemfd is bound to the faulting access. Signed-off-by: Elliot Berman --- drivers/virt/gunyah/guest_memfd.c | 115 ++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/gunyah_vcpu.c | 39 ++++++++++--- drivers/virt/gunyah/vm_mgr.c | 17 ++++++ drivers/virt/gunyah/vm_mgr.h | 3 + 4 files changed, 166 insertions(+), 8 deletions(-) diff --git a/drivers/virt/gunyah/guest_memfd.c b/drivers/virt/gunyah/guest_memfd.c index 5eeac6ac451e..4696ff4c7c22 100644 --- a/drivers/virt/gunyah/guest_memfd.c +++ b/drivers/virt/gunyah/guest_memfd.c @@ -843,3 +843,118 @@ int gunyah_gmem_reclaim_parcel(struct gunyah_vm *ghvm, return 0; } + +int gunyah_gmem_setup_demand_paging(struct gunyah_vm *ghvm) +{ + struct gunyah_rm_mem_entry *entries; + struct gunyah_gmem_binding *b; + unsigned long index = 0; + u32 count = 0, i; + int ret = 0; + + down_read(&ghvm->bindings_lock); + mt_for_each(&ghvm->bindings, b, index, ULONG_MAX) + if (gunyah_guest_mem_is_lend(ghvm, b->flags)) + count++; + + if (!count) + goto out; + + entries = kcalloc(count, sizeof(*entries), GFP_KERNEL); + if (!entries) { + ret = -ENOMEM; + goto out; + } + + index = i = 0; + mt_for_each(&ghvm->bindings, b, index, ULONG_MAX) { + if (!gunyah_guest_mem_is_lend(ghvm, b->flags)) + continue; + entries[i].phys_addr = cpu_to_le64(gunyah_gfn_to_gpa(b->gfn)); + entries[i].size = cpu_to_le64(b->nr << PAGE_SHIFT); + if (++i == count) + break; + } + + ret = gunyah_rm_vm_set_demand_paging(ghvm->rm, ghvm->vmid, i, entries); + kfree(entries); +out: + up_read(&ghvm->bindings_lock); + return ret; +} + +static bool folio_mmapped(struct folio *folio) +{ + struct address_space *mapping = folio->mapping; + struct vm_area_struct *vma; + bool ret = false; + + i_mmap_lock_read(mapping); + vma_interval_tree_foreach(vma, &mapping->i_mmap, folio_index(folio), + folio_index(folio) + folio_nr_pages(folio)) { + ret = true; + break; + } + i_mmap_unlock_read(mapping); + return ret; +} + +int gunyah_gmem_demand_page(struct gunyah_vm *ghvm, u64 gpa, bool write) +{ + unsigned long gfn = gunyah_gpa_to_gfn(gpa); + struct gunyah_gmem_binding *b; + struct folio *folio; + int ret; + + down_read(&ghvm->bindings_lock); + b = mtree_load(&ghvm->bindings, gfn); + if (!b) { + ret = -ENOENT; + goto unlock; + } + + if (write && !(b->flags & 
GUNYAH_MEM_ALLOW_WRITE)) { + ret = -EPERM; + goto unlock; + } + + folio = gunyah_gmem_get_folio(file_inode(b->file), gunyah_gfn_to_off(b, gfn)); + if (IS_ERR(folio)) { + ret = PTR_ERR(folio); + pr_err_ratelimited( + "Failed to obtain memory for guest addr %016llx: %d\n", + gpa, ret); + goto unlock; + } + + if (gunyah_guest_mem_is_lend(ghvm, b->flags) && + (folio_mapped(folio) || folio_mmapped(folio))) { + ret = -EPERM; + goto out; + } + + /** + * the folio covers the requested guest address, but the folio may not + * start at the requested guest address. recompute the gfn based on the + * folio itself. + */ + gfn = gunyah_off_to_gfn(b, folio_index(folio)); + + ret = gunyah_vm_provide_folio(ghvm, folio, gfn, + !gunyah_guest_mem_is_lend(ghvm, b->flags), + !!(b->flags & GUNYAH_MEM_ALLOW_WRITE)); + if (ret) { + if (ret != -EAGAIN) + pr_err_ratelimited( + "Failed to provide folio for guest addr: %016llx: %d\n", + gpa, ret); + goto out; + } +out: + folio_unlock(folio); + folio_put(folio); +unlock: + up_read(&ghvm->bindings_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_gmem_demand_page); diff --git a/drivers/virt/gunyah/gunyah_vcpu.c b/drivers/virt/gunyah/gunyah_vcpu.c index b636b54dc9a1..f01e6d6163ba 100644 --- a/drivers/virt/gunyah/gunyah_vcpu.c +++ b/drivers/virt/gunyah/gunyah_vcpu.c @@ -89,29 +89,44 @@ static irqreturn_t gunyah_vcpu_irq_handler(int irq, void *data) return IRQ_HANDLED; } -static void gunyah_handle_page_fault( +static bool gunyah_handle_page_fault( struct gunyah_vcpu *vcpu, const struct gunyah_hypercall_vcpu_run_resp *vcpu_run_resp) { u64 addr = vcpu_run_resp->state_data[0]; + bool write = !!vcpu_run_resp->state_data[1]; + int ret = 0; + + ret = gunyah_gmem_demand_page(vcpu->ghvm, addr, write); + if (!ret || ret == -EAGAIN) + return true; vcpu->vcpu_run->page_fault.resume_action = GUNYAH_VCPU_RESUME_FAULT; - vcpu->vcpu_run->page_fault.attempt = 0; + vcpu->vcpu_run->page_fault.attempt = ret; vcpu->vcpu_run->page_fault.phys_addr = addr; vcpu->vcpu_run->exit_reason = GUNYAH_VCPU_EXIT_PAGE_FAULT; + return false; } -static void -gunyah_handle_mmio(struct gunyah_vcpu *vcpu, +static bool +gunyah_handle_mmio(struct gunyah_vcpu *vcpu, unsigned long resume_data[3], const struct gunyah_hypercall_vcpu_run_resp *vcpu_run_resp) { u64 addr = vcpu_run_resp->state_data[0], len = vcpu_run_resp->state_data[1], data = vcpu_run_resp->state_data[2]; + int ret; if (WARN_ON(len > sizeof(u64))) len = sizeof(u64); + ret = gunyah_gmem_demand_page(vcpu->ghvm, addr, + vcpu->vcpu_run->mmio.is_write); + if (!ret || ret == -EAGAIN) { + resume_data[1] = GUNYAH_ADDRSPACE_VMMIO_ACTION_RETRY; + return true; + } + if (vcpu_run_resp->state == GUNYAH_VCPU_ADDRSPACE_VMMIO_READ) { vcpu->vcpu_run->mmio.is_write = 0; /* Record that we need to give vCPU user's supplied value next gunyah_vcpu_run() */ @@ -127,6 +142,8 @@ gunyah_handle_mmio(struct gunyah_vcpu *vcpu, vcpu->mmio_addr = vcpu->vcpu_run->mmio.phys_addr = addr; vcpu->vcpu_run->mmio.len = len; vcpu->vcpu_run->exit_reason = GUNYAH_VCPU_EXIT_MMIO; + + return false; } static int gunyah_handle_mmio_resume(struct gunyah_vcpu *vcpu, @@ -147,6 +164,8 @@ static int gunyah_handle_mmio_resume(struct gunyah_vcpu *vcpu, resume_data[1] = GUNYAH_ADDRSPACE_VMMIO_ACTION_FAULT; break; case GUNYAH_VCPU_RESUME_RETRY: + gunyah_gmem_demand_page(vcpu->ghvm, vcpu->mmio_addr, + vcpu->state == GUNYAH_VCPU_RUN_STATE_MMIO_WRITE); resume_data[1] = GUNYAH_ADDRSPACE_VMMIO_ACTION_RETRY; break; default: @@ -309,11 +328,15 @@ static int gunyah_vcpu_run(struct gunyah_vcpu *vcpu) break; case 
GUNYAH_VCPU_ADDRSPACE_VMMIO_READ: case GUNYAH_VCPU_ADDRSPACE_VMMIO_WRITE: - gunyah_handle_mmio(vcpu, &vcpu_run_resp); - goto out; + if (!gunyah_handle_mmio(vcpu, resume_data, + &vcpu_run_resp)) + goto out; + break; case GUNYAH_VCPU_ADDRSPACE_PAGE_FAULT: - gunyah_handle_page_fault(vcpu, &vcpu_run_resp); - goto out; + if (!gunyah_handle_page_fault(vcpu, + &vcpu_run_resp)) + goto out; + break; default: pr_warn_ratelimited( "Unknown vCPU state: %llx\n", diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 4379b5ba151e..3b767eeeb7c2 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -471,6 +471,23 @@ static int gunyah_vm_start(struct gunyah_vm *ghvm) goto err; } + ret = gunyah_gmem_setup_demand_paging(ghvm); + if (ret) { + dev_warn(ghvm->parent, + "Failed to set up gmem demand paging: %d\n", ret); + goto err; + } + + ret = gunyah_rm_vm_set_address_layout( + ghvm->rm, ghvm->vmid, GUNYAH_RM_RANGE_ID_IMAGE, + ghvm->dtb.parcel_start << PAGE_SHIFT, + ghvm->dtb.parcel_pages << PAGE_SHIFT); + if (ret) { + dev_warn(ghvm->parent, + "Failed to set location of DTB mem parcel: %d\n", ret); + goto err; + } + ret = gunyah_rm_vm_init(ghvm->rm, ghvm->vmid); if (ret) { ghvm->vm_status = GUNYAH_RM_VM_STATUS_INIT_FAILED; diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index b2ab2f1bda3a..474ac866d237 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -137,4 +137,7 @@ int gunyah_gmem_reclaim_parcel(struct gunyah_vm *ghvm, struct gunyah_rm_mem_parcel *parcel, u64 gfn, u64 nr); +int gunyah_gmem_setup_demand_paging(struct gunyah_vm *ghvm); +int gunyah_gmem_demand_page(struct gunyah_vm *ghvm, u64 gpa, bool write); + #endif From patchwork Tue Jan 9 19:38:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761564 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 673F34437F; Tue, 9 Jan 2024 19:38:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="V6m8c9Mx" Received: from pps.filterd (m0279866.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409GCAMK032177; Tue, 9 Jan 2024 19:38:08 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=mISjbqlRqJVm6RisqWAuRaAGWyV18zAgZkFAhzyIWvs =; b=V6m8c9Mx2eyhux4dK8665TNsA6fyLrEUnpaGhJjGuHQgzO7Lm3gMbmtsEvB p1+f/99M2SM60MlmAYF2aIwBieGY9fCV4fq8/7k1qFfgMHti6yXBaQ4i11l1jbqf /11POm1l8TiBS18saLUGLiLpx4HfPlfA00X6tjJ+BeeOtSvOwbTxfaUAHd9KDj+j n2sn/MhCcsMTHrnwmRhNxP4V74e56MMiud81kqYLVqnqPAmjuruwEC5WRVdTjPpy 5JIpj9TCPATFZQdrclnLzGsyRbbvtv3mRO60GNmLZ3U2UHZLYngVlpeC6M2ZeFjy TuRx2FGqJIBSRny8G6ObJLaUlgA== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9evrfhy-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 
2024 19:38:08 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc72b012055 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:07 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:06 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:06 -0800 Subject: [PATCH v16 28/34] gunyah: rsc_mgr: Add RPC to set VM boot context Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-28-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: eLF8qTS-lSW6WCS6nT_JzEqTWuhHD0yL X-Proofpoint-ORIG-GUID: eLF8qTS-lSW6WCS6nT_JzEqTWuhHD0yL X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 bulkscore=0 mlxlogscore=999 lowpriorityscore=0 mlxscore=0 priorityscore=1501 adultscore=0 suspectscore=0 clxscore=1015 spamscore=0 phishscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 The initial context of the primary vCPU can be initialized by performing RM RPC calls.
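As an illustration only (not part of this patch), a caller inside the VM manager could use the new RPC roughly as sketched below. The function and variable names are hypothetical, and the register-set encoding (1 for the PC register set, reg_index 0) follows the UAPI enum added later in this series:

    /*
     * Hypothetical caller sketch: program the primary vCPU's entry point
     * before VM_INIT. 'example_set_entry_point' and 'entry_gpa' are
     * illustrative names, not introduced by this patch.
     */
    static int example_set_entry_point(struct gunyah_vm *ghvm, u64 entry_gpa)
    {
            /* reg_set 1 selects the PC register set; reg_index must be 0 for PC */
            return gunyah_rm_vm_set_boot_context(ghvm->rm, ghvm->vmid,
                                                 1, 0, entry_gpa);
    }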
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/rsc_mgr.h | 2 ++ drivers/virt/gunyah/rsc_mgr_rpc.c | 32 ++++++++++++++++++++++++++++++++ 2 files changed, 34 insertions(+) diff --git a/drivers/virt/gunyah/rsc_mgr.h b/drivers/virt/gunyah/rsc_mgr.h index 99c2db18579c..2acaf8dff365 100644 --- a/drivers/virt/gunyah/rsc_mgr.h +++ b/drivers/virt/gunyah/rsc_mgr.h @@ -84,6 +84,8 @@ int gunyah_rm_vm_configure(struct gunyah_rm *rm, u16 vmid, u32 mem_handle, u64 image_offset, u64 image_size, u64 dtb_offset, u64 dtb_size); int gunyah_rm_vm_init(struct gunyah_rm *rm, u16 vmid); +int gunyah_rm_vm_set_boot_context(struct gunyah_rm *rm, u16 vmid, u8 reg_set, + u8 reg_index, u64 value); struct gunyah_rm_hyp_resource { u8 type; diff --git a/drivers/virt/gunyah/rsc_mgr_rpc.c b/drivers/virt/gunyah/rsc_mgr_rpc.c index f4e396fd0d47..bbdae0b05cd4 100644 --- a/drivers/virt/gunyah/rsc_mgr_rpc.c +++ b/drivers/virt/gunyah/rsc_mgr_rpc.c @@ -20,6 +20,7 @@ #define GUNYAH_RM_RPC_VM_RESET 0x56000006 #define GUNYAH_RM_RPC_VM_CONFIG_IMAGE 0x56000009 #define GUNYAH_RM_RPC_VM_INIT 0x5600000B +#define GUNYAH_RM_RPC_VM_SET_BOOT_CONTEXT 0x5600000C #define GUNYAH_RM_RPC_VM_GET_HYP_RESOURCES 0x56000020 #define GUNYAH_RM_RPC_VM_GET_VMID 0x56000024 #define GUNYAH_RM_RPC_VM_SET_DEMAND_PAGING 0x56000033 @@ -102,6 +103,15 @@ struct gunyah_rm_vm_config_image_req { __le64 dtb_size; } __packed; +/* Call: VM_SET_BOOT_CONTEXT */ +struct gunyah_rm_vm_set_boot_context_req { + __le16 vmid; + u8 reg_set; + u8 reg_index; + __le32 _padding; + __le64 value; +} __packed; + /* Call: VM_SET_DEMAND_PAGING */ struct gunyah_rm_vm_set_demand_paging_req { __le16 vmid; @@ -435,6 +445,28 @@ int gunyah_rm_vm_init(struct gunyah_rm *rm, u16 vmid) return gunyah_rm_common_vmid_call(rm, GUNYAH_RM_RPC_VM_INIT, vmid); } +/** + * gunyah_rm_vm_set_boot_context() - set the initial boot context of the primary vCPU + * @rm: Handle to a Gunyah resource manager + * @vmid: VM identifier + * @reg_set: See &enum gunyah_vm_boot_context_reg + * @reg_index: Which register to set; must be 0 for REG_SET_PC + * @value: Value to set in the register + */ +int gunyah_rm_vm_set_boot_context(struct gunyah_rm *rm, u16 vmid, u8 reg_set, + u8 reg_index, u64 value) +{ + struct gunyah_rm_vm_set_boot_context_req req_payload = { + .vmid = cpu_to_le16(vmid), + .reg_set = reg_set, + .reg_index = reg_index, + .value = cpu_to_le64(value), + }; + + return gunyah_rm_call(rm, GUNYAH_RM_RPC_VM_SET_BOOT_CONTEXT, + &req_payload, sizeof(req_payload), NULL, NULL); +} + /** * gunyah_rm_get_hyp_resources() - Retrieve hypervisor resources (capabilities) associated with a VM * @rm: Handle to a Gunyah resource manager From patchwork Tue Jan 9 19:38:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761081 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2A2E5481D3; Tue, 9 Jan 2024 19:38:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="nSo95nxl" Received: from pps.filterd (m0279869.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com 
(8.17.1.24/8.17.1.24) with ESMTP id 409GcqSe032231; Tue, 9 Jan 2024 19:38:09 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=R9oS16D14s/n6UKLdxi8B+QIGsh3AN4EVEIgfOWN57I =; b=nSo95nxloAHcDQp3UMRYG+9ys11AoQKKhL68yJYaywuAYLuP+o701wulvxE 5jOvyWxF1RGW0y/s4KXMTQaW0eOng9wMZR8v6zujGGy0XBaneVKoIhdizu2rpytR LqxdMw2BcXUqumcnvXR8QWy6tuw6891MbMrLmny4wLyjDan+vbodY7iAfd1ccKpx bwnoPLxwRaPaNasLlZ4Wp9ByQ/AXlXKbNMiX0kPyMfo8MxPqvLRpxd8Ke3M90rQ2 aDj6oOPVh6RZhtWM21keb68fDjH/W+2JpJDa059W4LKRlrQ6buIACkJQQKrRg0m5 dy783prZo25BQzaVDsnLQ16VZdQ== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9u9gdd6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:09 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc86C012062 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:08 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:07 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:07 -0800 Subject: [PATCH v16 29/34] virt: gunyah: Allow userspace to initialize context of primary vCPU Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-29-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: r2Jg0yOIgZRWvcYW8LkdYcE94GP5WljK X-Proofpoint-ORIG-GUID: r2Jg0yOIgZRWvcYW8LkdYcE94GP5WljK X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 impostorscore=0 bulkscore=0 adultscore=0 suspectscore=0 phishscore=0 clxscore=1015 lowpriorityscore=0 malwarescore=0 spamscore=0 mlxlogscore=999 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 RM provides APIs to fill the boot context, i.e. the initial register values, upon starting the vCPU. Most importantly, this allows userspace to set the initial PC for the primary vCPU when the VM starts to run.
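For illustration only, a VMM might program the entry point with the new ioctl roughly as below. 'vm_fd' and 'entry_gpa' are assumed to come from the VMM's own setup code, and the snippet assumes the usual <sys/ioctl.h> and <err.h> includes alongside the UAPI header extended by this patch:

    /* Hypothetical VMM snippet: set the primary vCPU's PC before starting the VM */
    struct gunyah_vm_boot_context ctx = {
            .reg = GUNYAH_VM_BOOT_CONTEXT_REG(REG_SET_PC, 0),
            .value = entry_gpa,
    };

    if (ioctl(vm_fd, GUNYAH_VM_SET_BOOT_CONTEXT, &ctx) < 0)
            err(1, "GUNYAH_VM_SET_BOOT_CONTEXT");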
Signed-off-by: Elliot Berman --- drivers/virt/gunyah/vm_mgr.c | 77 ++++++++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/vm_mgr.h | 2 ++ include/uapi/linux/gunyah.h | 23 +++++++++++++ 3 files changed, 102 insertions(+) diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 3b767eeeb7c2..1f3d29749174 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -395,6 +395,7 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) mutex_init(&ghvm->resources_lock); INIT_LIST_HEAD(&ghvm->resources); INIT_LIST_HEAD(&ghvm->resource_tickets); + xa_init(&ghvm->boot_context); mt_init(&ghvm->mm); mt_init(&ghvm->bindings); @@ -420,6 +421,66 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) return ghvm; } +static long gunyah_vm_set_boot_context(struct gunyah_vm *ghvm, + struct gunyah_vm_boot_context *boot_ctx) +{ + u8 reg_set, reg_index; /* to check values are reasonable */ + int ret; + + reg_set = (boot_ctx->reg >> GUNYAH_VM_BOOT_CONTEXT_REG_SHIFT) & 0xff; + reg_index = boot_ctx->reg & 0xff; + + switch (reg_set) { + case REG_SET_X: + if (reg_index > 31) + return -EINVAL; + break; + case REG_SET_PC: + if (reg_index) + return -EINVAL; + break; + case REG_SET_SP: + if (reg_index > 2) + return -EINVAL; + break; + default: + return -EINVAL; + } + + ret = down_read_interruptible(&ghvm->status_lock); + if (ret) + return ret; + + if (ghvm->vm_status != GUNYAH_RM_VM_STATUS_NO_STATE) { + ret = -EINVAL; + goto out; + } + + ret = xa_err(xa_store(&ghvm->boot_context, boot_ctx->reg, + (void *)boot_ctx->value, GFP_KERNEL)); +out: + up_read(&ghvm->status_lock); + return ret; +} + +static inline int gunyah_vm_fill_boot_context(struct gunyah_vm *ghvm) +{ + unsigned long reg_set, reg_index, id; + void *entry; + int ret; + + xa_for_each(&ghvm->boot_context, id, entry) { + reg_set = (id >> GUNYAH_VM_BOOT_CONTEXT_REG_SHIFT) & 0xff; + reg_index = id & 0xff; + ret = gunyah_rm_vm_set_boot_context( + ghvm->rm, ghvm->vmid, reg_set, reg_index, (u64)entry); + if (ret) + return ret; + } + + return 0; +} + static int gunyah_vm_start(struct gunyah_vm *ghvm) { struct gunyah_rm_hyp_resources *resources; @@ -496,6 +557,13 @@ static int gunyah_vm_start(struct gunyah_vm *ghvm) } ghvm->vm_status = GUNYAH_RM_VM_STATUS_READY; + ret = gunyah_vm_fill_boot_context(ghvm); + if (ret) { + dev_warn(ghvm->parent, "Failed to setup boot context: %d\n", + ret); + goto err; + } + ret = gunyah_rm_get_hyp_resources(ghvm->rm, ghvm->vmid, &resources); if (ret) { dev_warn(ghvm->parent, @@ -614,6 +682,14 @@ static long gunyah_vm_ioctl(struct file *filp, unsigned int cmd, return gunyah_gmem_modify_mapping(ghvm, &args); } + case GUNYAH_VM_SET_BOOT_CONTEXT: { + struct gunyah_vm_boot_context boot_ctx; + + if (copy_from_user(&boot_ctx, argp, sizeof(boot_ctx))) + return -EFAULT; + + return gunyah_vm_set_boot_context(ghvm, &boot_ctx); + } default: r = -ENOTTY; break; @@ -699,6 +775,7 @@ static void _gunyah_vm_put(struct kref *kref) "Failed to deallocate vmid: %d\n", ret); } + xa_destroy(&ghvm->boot_context); gunyah_rm_put(ghvm->rm); kfree(ghvm); } diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 474ac866d237..4a436c3e435c 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -79,6 +79,7 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * folio; parcel_start is start of the folio) * @dtb.parcel_pages: Number of pages lent for the memory parcel * @dtb.parcel: Data for resource manager 
to lend the parcel + * @boot_context: Requested initial boot context to set when launching the VM * * Members are grouped by hot path. */ @@ -113,6 +114,7 @@ struct gunyah_vm { u64 parcel_start, parcel_pages; struct gunyah_rm_mem_parcel parcel; } dtb; + struct xarray boot_context; }; int gunyah_vm_parcel_to_paged(struct gunyah_vm *ghvm, diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index a89d9bedf3e5..574116f54472 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -142,6 +142,29 @@ struct gunyah_map_mem_args { #define GUNYAH_VM_MAP_MEM _IOW(GUNYAH_IOCTL_TYPE, 0x9, struct gunyah_map_mem_args) +enum gunyah_vm_boot_context_reg { + REG_SET_X = 0, + REG_SET_PC = 1, + REG_SET_SP = 2, +}; + +#define GUNYAH_VM_BOOT_CONTEXT_REG_SHIFT 8 +#define GUNYAH_VM_BOOT_CONTEXT_REG(reg, idx) (((reg & 0xff) << GUNYAH_VM_BOOT_CONTEXT_REG_SHIFT) |\ + (idx & 0xff)) + +/** + * struct gunyah_vm_boot_context - Set an initial register for the VM + * @reg: Register to set. See GUNYAH_VM_BOOT_CONTEXT_REG_* macros + * @reserved: reserved for alignment + * @value: value to fill in the register + */ +struct gunyah_vm_boot_context { + __u32 reg; + __u32 reserved; + __u64 value; +}; +#define GUNYAH_VM_SET_BOOT_CONTEXT _IOW(GUNYAH_IOCTL_TYPE, 0xa, struct gunyah_vm_boot_context) + /* * ioctls for vCPU fds */ From patchwork Tue Jan 9 19:38:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761085 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AB7C54776D; Tue, 9 Jan 2024 19:38:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="claOXthq" Received: from pps.filterd (m0279862.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409Ik2LF020971; Tue, 9 Jan 2024 19:38:09 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=O0LTS8afF6938T/lQfplnM+8ELHnWjzwe6A6+1vfdHw =; b=claOXthqjhDnvl4cB5HE5bAjfN2ymsPboQB9NJGN3MQOcOErbzT1GIMudKB hNfHsz+SVvtgg3h2/7Eie/+9wleuSNBTiQE/pAebqiF1Hq4ZbB7udWqqI/tzg9gr CVcG5Z53O3tiA+E2k+2nK2KwKPP0C96pUkwlsCMMuzdfW9tCUP9E0AexLk5EKn5k jRWhYtIJhwmfmqA/BsWXkGJPbUJpqFYwaQqMW/3xvaaL8fSWabygVNSLRb46bDg/ TzbMjxBlVjsZL2aGHKmAJ0x7ar1iawUEVonUh4BtDLm/aeVEPbcIXfjTV8tOBXas k6+R1O4FGr3KNj7yKRblVaRYcZw== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh3me185q-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:09 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc9DV024695 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:09 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com 
(10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:08 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:08 -0800 Subject: [PATCH v16 30/34] virt: gunyah: Add hypercalls for sending doorbell Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-30-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: 6mB1HQ53TyGWybWApgoJAF8XcYPAkg4i X-Proofpoint-GUID: 6mB1HQ53TyGWybWApgoJAF8XcYPAkg4i X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 spamscore=0 priorityscore=1501 mlxlogscore=605 lowpriorityscore=0 mlxscore=0 suspectscore=0 impostorscore=0 clxscore=1015 adultscore=0 malwarescore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Gunyah doorbells allow a virtual machine to signal another using interrupts. Add the hypercalls needed to assert the interrupt. 
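For illustration (not part of this patch), a kernel-side caller of the new wrappers could assert bit 0 of a doorbell and inspect the previously pending flags as sketched here; 'capid' is assumed to come from the caller's Gunyah doorbell resource:

    /* Hypothetical caller sketch: ring a doorbell and warn on failure */
    u64 old_flags;
    enum gunyah_error gh_error;

    gh_error = gunyah_hypercall_bell_send(capid, BIT(0), &old_flags);
    if (gh_error != GUNYAH_ERROR_OK)
            pr_warn("Failed to send doorbell: %d\n", gh_error);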
Reviewed-by: Alex Elder Signed-off-by: Elliot Berman --- arch/arm64/gunyah/gunyah_hypercall.c | 38 ++++++++++++++++++++++++++++++++++++ include/linux/gunyah.h | 5 +++++ 2 files changed, 43 insertions(+) diff --git a/arch/arm64/gunyah/gunyah_hypercall.c b/arch/arm64/gunyah/gunyah_hypercall.c index 38403dc28c66..3c2672d683ae 100644 --- a/arch/arm64/gunyah/gunyah_hypercall.c +++ b/arch/arm64/gunyah/gunyah_hypercall.c @@ -37,6 +37,8 @@ EXPORT_SYMBOL_GPL(arch_is_gunyah_guest); /* clang-format off */ #define GUNYAH_HYPERCALL_HYP_IDENTIFY GUNYAH_HYPERCALL(0x8000) +#define GUNYAH_HYPERCALL_BELL_SEND GUNYAH_HYPERCALL(0x8012) +#define GUNYAH_HYPERCALL_BELL_SET_MASK GUNYAH_HYPERCALL(0x8015) #define GUNYAH_HYPERCALL_MSGQ_SEND GUNYAH_HYPERCALL(0x801B) #define GUNYAH_HYPERCALL_MSGQ_RECV GUNYAH_HYPERCALL(0x801C) #define GUNYAH_HYPERCALL_ADDRSPACE_MAP GUNYAH_HYPERCALL(0x802B) @@ -64,6 +66,42 @@ void gunyah_hypercall_hyp_identify( } EXPORT_SYMBOL_GPL(gunyah_hypercall_hyp_identify); +/** + * gunyah_hypercall_bell_send() - Assert a gunyah doorbell + * @capid: capability ID of the doorbell + * @new_flags: bits to set on the doorbell + * @old_flags: Filled with the bits set before the send call if return value is GUNYAH_ERROR_OK + */ +enum gunyah_error gunyah_hypercall_bell_send(u64 capid, u64 new_flags, u64 *old_flags) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GUNYAH_HYPERCALL_BELL_SEND, capid, new_flags, 0, &res); + + if (res.a0 == GUNYAH_ERROR_OK && old_flags) + *old_flags = res.a1; + + return res.a0; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_bell_send); + +/** + * gunyah_hypercall_bell_set_mask() - Set masks on a Gunyah doorbell + * @capid: capability ID of the doorbell + * @enable_mask: which bits trigger the receiver interrupt + * @ack_mask: which bits are automatically acknowledged when the receiver + * interrupt is ack'd + */ +enum gunyah_error gunyah_hypercall_bell_set_mask(u64 capid, u64 enable_mask, u64 ack_mask) +{ + struct arm_smccc_res res; + + arm_smccc_1_1_hvc(GUNYAH_HYPERCALL_BELL_SET_MASK, capid, enable_mask, ack_mask, 0, &res); + + return res.a0; +} +EXPORT_SYMBOL_GPL(gunyah_hypercall_bell_set_mask); + /** * gunyah_hypercall_msgq_send() - Send a buffer on a message queue * @capid: capability ID of the message queue to add message diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 32ce578220ca..67cb9350ab9e 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -346,6 +346,11 @@ gunyah_api_version(const struct gunyah_hypercall_hyp_identify_resp *gunyah_api) void gunyah_hypercall_hyp_identify( struct gunyah_hypercall_hyp_identify_resp *hyp_identity); +enum gunyah_error gunyah_hypercall_bell_send(u64 capid, u64 new_flags, + u64 *old_flags); +enum gunyah_error gunyah_hypercall_bell_set_mask(u64 capid, u64 enable_mask, + u64 ack_mask); + /* Immediately raise RX vIRQ on receiver VM */ #define GUNYAH_HYPERCALL_MSGQ_TX_FLAGS_PUSH BIT(0) From patchwork Tue Jan 9 19:38:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761561 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2F9E747F44; Tue, 9 Jan 2024 19:38:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: 
smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="pJjYbxEj" Received: from pps.filterd (m0279869.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409Ih58d022530; Tue, 9 Jan 2024 19:38:11 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=WfvIdhBoOkDh5cXvhhsJF6rmK9B8KJD4EXlYR0bezbs =; b=pJjYbxEjcG6UJhr/itNsxkPdNIbZVY7eldZ3Q6+BSDiIcxsFfbDYNXvGa11 t+BW0KCMlYoB1egudbNtNil63sMGvKfsJ8u0f3rHLVTbyNdRarULtIeUdvAinj/g Agya5hjsS/NkkcxbRVXIyymk+PJabBxx5iSSZQx6ktX6nJVp3H1sGsbSG9FcKWPV w9jP+Qv2jz0K3KQVXpxj99jmuodG0qQVcXEKYZq6ADvF7cUncOs6eeXTWHkxDQsR TQdz1MlUCXfD5FPLfLYIeWmRfhcdti4qedhwnl9XY++CZOmr27khTUgPHli64j3g bKbZq4yUfa8mQgVMPNz+z8hr+Rw== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9u9gdd8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:10 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409Jc9Oh003577 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:09 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:08 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:09 -0800 Subject: [PATCH v16 31/34] virt: gunyah: Add irqfd interface Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-31-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: iPea-pAiagz0RhwNS8UmsHlaL8pQmL_h X-Proofpoint-ORIG-GUID: iPea-pAiagz0RhwNS8UmsHlaL8pQmL_h X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 impostorscore=0 bulkscore=0 adultscore=0 suspectscore=0 phishscore=0 clxscore=1015 lowpriorityscore=0 malwarescore=0 spamscore=0 mlxlogscore=999 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Enable support for creating irqfds which can raise an interrupt on a Gunyah virtual machine. 
irqfds are exposed to userspace as a Gunyah VM function with the name "irqfd". If the VM devicetree is not configured to create a doorbell with the corresponding label, userspace will still be able to assert the eventfd but no interrupt will be raised on the guest. Acked-by: Alex Elder Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Kconfig | 9 ++ drivers/virt/gunyah/Makefile | 1 + drivers/virt/gunyah/gunyah_irqfd.c | 190 +++++++++++++++++++++++++++++++++++++ include/uapi/linux/gunyah.h | 35 +++++++ 4 files changed, 235 insertions(+) diff --git a/drivers/virt/gunyah/Kconfig b/drivers/virt/gunyah/Kconfig index fe2823dc48ba..1685b75fb77a 100644 --- a/drivers/virt/gunyah/Kconfig +++ b/drivers/virt/gunyah/Kconfig @@ -27,3 +27,12 @@ config GUNYAH_QCOM_PLATFORM extra platform-specific support. Say Y/M here to use Gunyah on Qualcomm platforms. + +config GUNYAH_IRQFD + tristate "Gunyah irqfd interface" + depends on GUNYAH + help + Enable kernel support for creating irqfds which can raise an interrupt + on Gunyah virtual machine. + + Say Y/M here if unsure and you want to support Gunyah VMMs. diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile index c4505fce177d..b41b02792921 100644 --- a/drivers/virt/gunyah/Makefile +++ b/drivers/virt/gunyah/Makefile @@ -5,3 +5,4 @@ gunyah_rsc_mgr-y += rsc_mgr.o rsc_mgr_rpc.o vm_mgr.o vm_mgr_mem.o guest_memfd.o obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o obj-$(CONFIG_GUNYAH_PLATFORM_HOOKS) += gunyah_platform_hooks.o obj-$(CONFIG_GUNYAH_QCOM_PLATFORM) += gunyah_qcom.o +obj-$(CONFIG_GUNYAH_IRQFD) += gunyah_irqfd.o diff --git a/drivers/virt/gunyah/gunyah_irqfd.c b/drivers/virt/gunyah/gunyah_irqfd.c new file mode 100644 index 000000000000..030af9069639 --- /dev/null +++ b/drivers/virt/gunyah/gunyah_irqfd.c @@ -0,0 +1,190 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +struct gunyah_irqfd { + struct gunyah_resource *ghrsc; + struct gunyah_vm_resource_ticket ticket; + struct gunyah_vm_function_instance *f; + + bool level; + + struct eventfd_ctx *ctx; + wait_queue_entry_t wait; + poll_table pt; +}; + +static int irqfd_wakeup(wait_queue_entry_t *wait, unsigned int mode, int sync, + void *key) +{ + struct gunyah_irqfd *irqfd = + container_of(wait, struct gunyah_irqfd, wait); + __poll_t flags = key_to_poll(key); + int ret = 0; + + if (flags & EPOLLIN) { + if (irqfd->ghrsc) { + ret = gunyah_hypercall_bell_send(irqfd->ghrsc->capid, 1, + NULL); + if (ret) + pr_err_ratelimited( + "Failed to inject interrupt %d: %d\n", + irqfd->ticket.label, ret); + } else + pr_err_ratelimited( + "Premature injection of interrupt\n"); + } + + return 0; +} + +static void irqfd_ptable_queue_proc(struct file *file, wait_queue_head_t *wqh, + poll_table *pt) +{ + struct gunyah_irqfd *irq_ctx = + container_of(pt, struct gunyah_irqfd, pt); + + add_wait_queue(wqh, &irq_ctx->wait); +} + +static bool gunyah_irqfd_populate(struct gunyah_vm_resource_ticket *ticket, + struct gunyah_resource *ghrsc) +{ + struct gunyah_irqfd *irqfd = + container_of(ticket, struct gunyah_irqfd, ticket); + int ret; + + if (irqfd->ghrsc) { + pr_warn("irqfd%d already got a Gunyah resource. 
Check if multiple resources with same label were configured.\n", + irqfd->ticket.label); + return false; + } + + irqfd->ghrsc = ghrsc; + if (irqfd->level) { + /* Configure the bell to trigger when bit 0 is asserted (see + * irq_wakeup) and for bell to automatically clear bit 0 once + * received by the VM (ack_mask). need to make sure bit 0 is cleared right away, + * otherwise the line will never be deasserted. Emulating edge + * trigger interrupt does not need to set either mask + * because irq is listed only once per gunyah_hypercall_bell_send + */ + ret = gunyah_hypercall_bell_set_mask(irqfd->ghrsc->capid, 1, 1); + if (ret) + pr_warn("irq %d couldn't be set as level triggered. Might cause IRQ storm if asserted\n", + irqfd->ticket.label); + } + + return true; +} + +static void gunyah_irqfd_unpopulate(struct gunyah_vm_resource_ticket *ticket, + struct gunyah_resource *ghrsc) +{ + struct gunyah_irqfd *irqfd = + container_of(ticket, struct gunyah_irqfd, ticket); + u64 cnt; + + eventfd_ctx_remove_wait_queue(irqfd->ctx, &irqfd->wait, &cnt); +} + +static long gunyah_irqfd_bind(struct gunyah_vm_function_instance *f) +{ + struct gunyah_fn_irqfd_arg *args = f->argp; + struct gunyah_irqfd *irqfd; + __poll_t events; + struct fd fd; + long r; + + if (f->arg_size != sizeof(*args)) + return -EINVAL; + + /* All other flag bits are reserved for future use */ + if (args->flags & ~GUNYAH_IRQFD_FLAGS_LEVEL) + return -EINVAL; + + irqfd = kzalloc(sizeof(*irqfd), GFP_KERNEL); + if (!irqfd) + return -ENOMEM; + + irqfd->f = f; + f->data = irqfd; + + fd = fdget(args->fd); + if (!fd.file) { + kfree(irqfd); + return -EBADF; + } + + irqfd->ctx = eventfd_ctx_fileget(fd.file); + if (IS_ERR(irqfd->ctx)) { + r = PTR_ERR(irqfd->ctx); + goto err_fdput; + } + + if (args->flags & GUNYAH_IRQFD_FLAGS_LEVEL) + irqfd->level = true; + + init_waitqueue_func_entry(&irqfd->wait, irqfd_wakeup); + init_poll_funcptr(&irqfd->pt, irqfd_ptable_queue_proc); + + irqfd->ticket.resource_type = GUNYAH_RESOURCE_TYPE_BELL_TX; + irqfd->ticket.label = args->label; + irqfd->ticket.owner = THIS_MODULE; + irqfd->ticket.populate = gunyah_irqfd_populate; + irqfd->ticket.unpopulate = gunyah_irqfd_unpopulate; + + r = gunyah_vm_add_resource_ticket(f->ghvm, &irqfd->ticket); + if (r) + goto err_ctx; + + events = vfs_poll(fd.file, &irqfd->pt); + if (events & EPOLLIN) + pr_warn("Premature injection of interrupt\n"); + fdput(fd); + + return 0; +err_ctx: + eventfd_ctx_put(irqfd->ctx); +err_fdput: + fdput(fd); + kfree(irqfd); + return r; +} + +static void gunyah_irqfd_unbind(struct gunyah_vm_function_instance *f) +{ + struct gunyah_irqfd *irqfd = f->data; + + gunyah_vm_remove_resource_ticket(irqfd->f->ghvm, &irqfd->ticket); + eventfd_ctx_put(irqfd->ctx); + kfree(irqfd); +} + +static bool gunyah_irqfd_compare(const struct gunyah_vm_function_instance *f, + const void *arg, size_t size) +{ + const struct gunyah_fn_irqfd_arg *instance = f->argp, *other = arg; + + if (sizeof(*other) != size) + return false; + + return instance->label == other->label; +} + +DECLARE_GUNYAH_VM_FUNCTION_INIT(irqfd, GUNYAH_FN_IRQFD, 2, gunyah_irqfd_bind, + gunyah_irqfd_unbind, gunyah_irqfd_compare); +MODULE_DESCRIPTION("Gunyah irqfd VM Function"); +MODULE_LICENSE("GPL"); diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h index 574116f54472..cb7b0bb9bef3 100644 --- a/include/uapi/linux/gunyah.h +++ b/include/uapi/linux/gunyah.h @@ -63,9 +63,12 @@ struct gunyah_vm_dtb_config { * @GUNYAH_FN_VCPU: create a vCPU instance to control a vCPU * &struct gunyah_fn_desc.arg is a 
pointer to &struct gunyah_fn_vcpu_arg * Return: file descriptor to manipulate the vcpu. + * @GUNYAH_FN_IRQFD: register eventfd to assert a Gunyah doorbell + * &struct gunyah_fn_desc.arg is a pointer to &struct gunyah_fn_irqfd_arg */ enum gunyah_fn_type { GUNYAH_FN_VCPU = 1, + GUNYAH_FN_IRQFD, }; #define GUNYAH_FN_MAX_ARG_SIZE 256 @@ -85,6 +88,38 @@ struct gunyah_fn_vcpu_arg { __u32 id; }; +/** + * enum gunyah_irqfd_flags - flags for use in gunyah_fn_irqfd_arg + * @GUNYAH_IRQFD_FLAGS_LEVEL: make the interrupt operate like a level triggered + * interrupt on guest side. Triggering IRQFD before + * guest handles the interrupt causes interrupt to + * stay asserted. + */ +enum gunyah_irqfd_flags { + GUNYAH_IRQFD_FLAGS_LEVEL = 1UL << 0, +}; + +/** + * struct gunyah_fn_irqfd_arg - Arguments to create an irqfd function. + * + * Create this function with &GUNYAH_VM_ADD_FUNCTION using type &GUNYAH_FN_IRQFD. + * + * Allows setting an eventfd to directly trigger a guest interrupt. + * irqfd.fd specifies the file descriptor to use as the eventfd. + * irqfd.label corresponds to the doorbell label used in the guest VM's devicetree. + * + * @fd: an eventfd which when written to will raise a doorbell + * @label: Label of the doorbell created on the guest VM + * @flags: see &enum gunyah_irqfd_flags + * @padding: padding bytes + */ +struct gunyah_fn_irqfd_arg { + __u32 fd; + __u32 label; + __u32 flags; + __u32 padding; +}; + /** * struct gunyah_fn_desc - Arguments to create a VM function * @type: Type of the function. See &enum gunyah_fn_type. From patchwork Tue Jan 9 19:38:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761559 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F01E5482E9; Tue, 9 Jan 2024 19:38:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="DRkqiuCE" Received: from pps.filterd (m0279867.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409EvQDL028241; Tue, 9 Jan 2024 19:38:11 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=4oPsPgVxByAWZqC4ZIbBPds+aMeiFl+WBc0pqTixgwc =; b=DRkqiuCEC/BZFuilTQeWSrLWAoET1QgjYlsAvw0Atuac4tyPpIf2LoPUwPt SEDoBIYEgH9R8Rn1uLAbXVoH2yT6kwHPWju99KYkefqLh8fQ6IQx0EUJUL+RcM0n Xmvz1KTmp/AtAiFT72rRAsau/FEuiW1+/HCNf6dIT91ENvNxr/lwRQ7p8LsxxDny kyHm2vtqOvr15cSvr6bcUg+i5jt3azMzfrlA5YBIVxhzniHezzfihqXlrYmm5bAh CjPmXpRJFDHMZE+7TB27xdeJOM1xy8AH2h1o6KYhhKYTlmkRLAtShSn18n40y13y rExu0eiGOf5b/uHi0FBdDMURD7g== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vgxxbhtep-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:11 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 
409JcAv7030651 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:10 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:09 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:10 -0800 Subject: [PATCH v16 32/34] virt: gunyah: Add IO handlers Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-32-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: xzdk2e9nhIM0BExctrrsg61U1fFmrGqg X-Proofpoint-GUID: xzdk2e9nhIM0BExctrrsg61U1fFmrGqg X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_02,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=863 lowpriorityscore=0 priorityscore=1501 bulkscore=0 phishscore=0 spamscore=0 clxscore=1015 mlxscore=0 adultscore=0 impostorscore=0 malwarescore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Add a framework for VM functions to handle stage-2 write faults from Gunyah guest virtual machines. IO handlers have a range of addresses which they apply to. Optionally, they may apply only when the value written matches the IO handler's value.
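As a sketch of how the framework is meant to be consumed (illustrative only; the address macro and all 'example_*' names are made up for this example), a VM function could register a write handler that fires only when the guest writes the value 1 to a 4-byte register:

    /* Hypothetical handler sketch; EXAMPLE_KICK_ADDR is a made-up guest address */
    static int example_kick_write(struct gunyah_vm_io_handler *io_dev,
                                  u64 addr, u32 len, u64 data)
    {
            /* react to the guest's "kick" write here */
            return 0;
    }

    static struct gunyah_vm_io_handler_ops example_kick_ops = {
            .write = example_kick_write,
    };

    static struct gunyah_vm_io_handler example_kick_handler = {
            .addr = EXAMPLE_KICK_ADDR,
            .len = 4,
            .datamatch = true,
            .data = 1,
            .ops = &example_kick_ops,
    };

    /* in the VM function's bind path: */
    ret = gunyah_vm_add_io_handler(ghvm, &example_kick_handler);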
Reviewed-by: Alex Elder Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/gunyah_vcpu.c | 4 ++ drivers/virt/gunyah/vm_mgr.c | 115 ++++++++++++++++++++++++++++++++++++++ drivers/virt/gunyah/vm_mgr.h | 8 +++ include/linux/gunyah.h | 29 ++++++++++ 4 files changed, 156 insertions(+) diff --git a/drivers/virt/gunyah/gunyah_vcpu.c b/drivers/virt/gunyah/gunyah_vcpu.c index f01e6d6163ba..edadb056cc18 100644 --- a/drivers/virt/gunyah/gunyah_vcpu.c +++ b/drivers/virt/gunyah/gunyah_vcpu.c @@ -133,6 +133,10 @@ gunyah_handle_mmio(struct gunyah_vcpu *vcpu, unsigned long resume_data[3], vcpu->state = GUNYAH_VCPU_RUN_STATE_MMIO_READ; vcpu->mmio_read_len = len; } else { /* GUNYAH_VCPU_ADDRSPACE_VMMIO_WRITE */ + if (!gunyah_vm_mmio_write(vcpu->ghvm, addr, len, data)) { + resume_data[0] = GUNYAH_ADDRSPACE_VMMIO_ACTION_EMULATE; + return true; + } vcpu->vcpu_run->mmio.is_write = 1; memcpy(vcpu->vcpu_run->mmio.data, &data, len); vcpu->state = GUNYAH_VCPU_RUN_STATE_MMIO_WRITE; diff --git a/drivers/virt/gunyah/vm_mgr.c b/drivers/virt/gunyah/vm_mgr.c index 1f3d29749174..cb63cb121846 100644 --- a/drivers/virt/gunyah/vm_mgr.c +++ b/drivers/virt/gunyah/vm_mgr.c @@ -295,6 +295,118 @@ static void gunyah_vm_clean_resources(struct gunyah_vm *ghvm) mutex_unlock(&ghvm->resources_lock); } +static int _gunyah_vm_io_handler_compare(const struct rb_node *node, + const struct rb_node *parent) +{ + struct gunyah_vm_io_handler *n = + container_of(node, struct gunyah_vm_io_handler, node); + struct gunyah_vm_io_handler *p = + container_of(parent, struct gunyah_vm_io_handler, node); + + if (n->addr < p->addr) + return -1; + if (n->addr > p->addr) + return 1; + if ((n->len && !p->len) || (!n->len && p->len)) + return 0; + if (n->len < p->len) + return -1; + if (n->len > p->len) + return 1; + /* one of the io handlers doesn't have datamatch and the other does. + * For purposes of comparison, that makes them identical since the + * one that doesn't have datamatch will cover the same handler that + * does. 
+ */ + if (n->datamatch != p->datamatch) + return 0; + if (n->data < p->data) + return -1; + if (n->data > p->data) + return 1; + return 0; +} + +static int gunyah_vm_io_handler_compare(struct rb_node *node, + const struct rb_node *parent) +{ + return _gunyah_vm_io_handler_compare(node, parent); +} + +static int gunyah_vm_io_handler_find(const void *key, + const struct rb_node *node) +{ + const struct gunyah_vm_io_handler *k = key; + + return _gunyah_vm_io_handler_compare(&k->node, node); +} + +static struct gunyah_vm_io_handler * +gunyah_vm_mgr_find_io_hdlr(struct gunyah_vm *ghvm, u64 addr, u64 len, u64 data) +{ + struct gunyah_vm_io_handler key = { + .addr = addr, + .len = len, + .datamatch = true, + .data = data, + }; + struct rb_node *node; + + node = rb_find(&key, &ghvm->mmio_handler_root, + gunyah_vm_io_handler_find); + if (!node) + return NULL; + + return container_of(node, struct gunyah_vm_io_handler, node); +} + +int gunyah_vm_mmio_write(struct gunyah_vm *ghvm, u64 addr, u32 len, u64 data) +{ + struct gunyah_vm_io_handler *io_hdlr = NULL; + int ret; + + down_read(&ghvm->mmio_handler_lock); + io_hdlr = gunyah_vm_mgr_find_io_hdlr(ghvm, addr, len, data); + if (!io_hdlr || !io_hdlr->ops || !io_hdlr->ops->write) { + ret = -ENOENT; + goto out; + } + + ret = io_hdlr->ops->write(io_hdlr, addr, len, data); + +out: + up_read(&ghvm->mmio_handler_lock); + return ret; +} +EXPORT_SYMBOL_GPL(gunyah_vm_mmio_write); + +int gunyah_vm_add_io_handler(struct gunyah_vm *ghvm, + struct gunyah_vm_io_handler *io_hdlr) +{ + struct rb_node *found; + + if (io_hdlr->datamatch && + (!io_hdlr->len || io_hdlr->len > sizeof(io_hdlr->data))) + return -EINVAL; + + down_write(&ghvm->mmio_handler_lock); + found = rb_find_add(&io_hdlr->node, &ghvm->mmio_handler_root, + gunyah_vm_io_handler_compare); + up_write(&ghvm->mmio_handler_lock); + + return found ? -EEXIST : 0; +} +EXPORT_SYMBOL_GPL(gunyah_vm_add_io_handler); + +void gunyah_vm_remove_io_handler(struct gunyah_vm *ghvm, + struct gunyah_vm_io_handler *io_hdlr) +{ + down_write(&ghvm->mmio_handler_lock); + rb_erase(&io_hdlr->node, &ghvm->mmio_handler_root); + up_write(&ghvm->mmio_handler_lock); +} +EXPORT_SYMBOL_GPL(gunyah_vm_remove_io_handler); + static int gunyah_vm_rm_notification_status(struct gunyah_vm *ghvm, void *data) { struct gunyah_rm_vm_status_payload *payload = data; @@ -397,6 +509,9 @@ static __must_check struct gunyah_vm *gunyah_vm_alloc(struct gunyah_rm *rm) INIT_LIST_HEAD(&ghvm->resource_tickets); xa_init(&ghvm->boot_context); + init_rwsem(&ghvm->mmio_handler_lock); + ghvm->mmio_handler_root = RB_ROOT; + mt_init(&ghvm->mm); mt_init(&ghvm->bindings); init_rwsem(&ghvm->bindings_lock); diff --git a/drivers/virt/gunyah/vm_mgr.h b/drivers/virt/gunyah/vm_mgr.h index 4a436c3e435c..b956989fa5e6 100644 --- a/drivers/virt/gunyah/vm_mgr.h +++ b/drivers/virt/gunyah/vm_mgr.h @@ -10,6 +10,7 @@ #include #include #include +#include #include #include @@ -56,6 +57,9 @@ long gunyah_dev_vm_mgr_ioctl(struct gunyah_rm *rm, unsigned int cmd, * @guest_shared_extent_ticket: Resource ticket to the capability for * the memory extent that represents * memory shared with the guest. + * @mmio_handler_root: RB tree of MMIO handlers. 
+ * Entries are &struct gunyah_vm_io_handler + * @mmio_handler_lock: Serialization of traversing @mmio_handler_root * @rm: Pointer to the resource manager struct to make RM calls * @parent: For logging * @nb: Notifier block for RM notifications @@ -91,6 +95,8 @@ struct gunyah_vm { struct gunyah_vm_resource_ticket addrspace_ticket, host_private_extent_ticket, host_shared_extent_ticket, guest_private_extent_ticket, guest_shared_extent_ticket; + struct rb_root mmio_handler_root; + struct rw_semaphore mmio_handler_lock; struct gunyah_rm *rm; @@ -117,6 +123,8 @@ struct gunyah_vm { struct xarray boot_context; }; +int gunyah_vm_mmio_write(struct gunyah_vm *ghvm, u64 addr, u32 len, u64 data); + int gunyah_vm_parcel_to_paged(struct gunyah_vm *ghvm, struct gunyah_rm_mem_parcel *parcel, u64 gfn, u64 nr); diff --git a/include/linux/gunyah.h b/include/linux/gunyah.h index 67cb9350ab9e..4638c358869a 100644 --- a/include/linux/gunyah.h +++ b/include/linux/gunyah.h @@ -156,6 +156,35 @@ int gunyah_vm_add_resource_ticket(struct gunyah_vm *ghvm, void gunyah_vm_remove_resource_ticket(struct gunyah_vm *ghvm, struct gunyah_vm_resource_ticket *ticket); +/* + * gunyah_vm_io_handler contains the info about an io device and its associated + * addr and the ops associated with the io device. + */ +struct gunyah_vm_io_handler { + struct rb_node node; + u64 addr; + + bool datamatch; + u8 len; + u64 data; + struct gunyah_vm_io_handler_ops *ops; +}; + +/* + * gunyah_vm_io_handler_ops contains function pointers associated with an iodevice. + */ +struct gunyah_vm_io_handler_ops { + int (*read)(struct gunyah_vm_io_handler *io_dev, u64 addr, u32 len, + u64 data); + int (*write)(struct gunyah_vm_io_handler *io_dev, u64 addr, u32 len, + u64 data); +}; + +int gunyah_vm_add_io_handler(struct gunyah_vm *ghvm, + struct gunyah_vm_io_handler *io_dev); +void gunyah_vm_remove_io_handler(struct gunyah_vm *ghvm, + struct gunyah_vm_io_handler *io_dev); + #define GUNYAH_RM_ACL_X BIT(0) #define GUNYAH_RM_ACL_W BIT(1) #define GUNYAH_RM_ACL_R BIT(2) From patchwork Tue Jan 9 19:38:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Elliot Berman X-Patchwork-Id: 761562 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 49B2347A45; Tue, 9 Jan 2024 19:38:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="M+oc6rkS" Received: from pps.filterd (m0279863.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 409G4aDP020396; Tue, 9 Jan 2024 19:38:11 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= from:date:subject:mime-version:content-type :content-transfer-encoding:message-id:references:in-reply-to:to :cc; s=qcppdkim1; bh=CFZQM9HHgwsq85wbA4HaJDXM+Ti//VjdbRwqaGcKBmo =; b=M+oc6rkSFe+jMKMYTEkPOSWJug2g+MbEY+O8KJwHpVmZh7f+2oV2ISiRl0T qjXb+/r2csBU6vfw6lLHFVNhxHcqmMncste+FXqfTcAFEG6lRO4suvajndNqACxZ pQApBfS4U+27uvDwRmQDiNW+CgZvcUzh3xpDS54jdKA2Vvh/GfzSlLFNsuwpZ4zt zdZwPf5aolzF2jJyqNrRDRKg/d2OO0mpECaGnVjU/JcPiVvQ8Kg6XV7KghxBwhi6 
zF+pH65r5zACqta+UuRK9/8R3yMSRHO74AJWTmVvHupDAdHkKos4CRWAYNAxw1OQ tKyfAs5OD9zgcvnQRuzNwnmz8mg== Received: from nasanppmta03.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3vh9bmggdp-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 09 Jan 2024 19:38:11 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 409JcA3i011573 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 9 Jan 2024 19:38:10 GMT Received: from hu-eberman-lv.qualcomm.com (10.49.16.6) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Tue, 9 Jan 2024 11:38:10 -0800 From: Elliot Berman Date: Tue, 9 Jan 2024 11:38:11 -0800 Subject: [PATCH v16 33/34] virt: gunyah: Add ioeventfd Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20240109-gunyah-v16-33-634904bf4ce9@quicinc.com> References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com> To: Alex Elder , Srinivas Kandagatla , Murali Nalajal , Trilok Soni , Srivatsa Vaddagiri , Carl van Schaik , Philip Derrin , Prakruthi Deepak Heragu , Jonathan Corbet , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Catalin Marinas , Will Deacon , Konrad Dybcio , Bjorn Andersson , Dmitry Baryshkov , "Fuad Tabba" , Sean Christopherson , "Andrew Morton" CC: , , , , , , Elliot Berman X-Mailer: b4 0.13-dev X-ClientProxiedBy: nalasex01c.na.qualcomm.com (10.47.97.35) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: TqyjYIVU4KJsNF6XyeHikBnwWRuxaqt5 X-Proofpoint-GUID: TqyjYIVU4KJsNF6XyeHikBnwWRuxaqt5 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-12-09_01,2023-12-07_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 mlxscore=0 clxscore=1015 spamscore=0 priorityscore=1501 malwarescore=0 mlxlogscore=999 adultscore=0 bulkscore=0 suspectscore=0 lowpriorityscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2311290000 definitions=main-2401090158 Allow userspace to attach an ioeventfd to an mmio address within the guest. Userspace provides a description of the type of write to "subscribe" to and eventfd to trigger when that type of write is performed by the guest. This mechanism allows userspace to respond asynchronously to a guest manipulating a virtualized device and is similar to KVM's ioeventfd. Reviewed-by: Alex Elder Co-developed-by: Prakruthi Deepak Heragu Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman --- drivers/virt/gunyah/Kconfig | 9 +++ drivers/virt/gunyah/Makefile | 1 + drivers/virt/gunyah/gunyah_ioeventfd.c | 139 +++++++++++++++++++++++++++++++++ include/uapi/linux/gunyah.h | 37 +++++++++ 4 files changed, 186 insertions(+) diff --git a/drivers/virt/gunyah/Kconfig b/drivers/virt/gunyah/Kconfig index 1685b75fb77a..855d41a88b16 100644 --- a/drivers/virt/gunyah/Kconfig +++ b/drivers/virt/gunyah/Kconfig @@ -36,3 +36,12 @@ config GUNYAH_IRQFD on Gunyah virtual machine. 
 
 	  Say Y/M here if unsure and you want to support Gunyah VMMs.
+
+config GUNYAH_IOEVENTFD
+	tristate "Gunyah ioeventfd interface"
+	depends on GUNYAH
+	help
+	  Enable kernel support for creating ioeventfds which can alert userspace
+	  when a Gunyah virtual machine writes to a registered memory address.
+
+	  Say Y/M here if unsure and you want to support Gunyah VMMs.
diff --git a/drivers/virt/gunyah/Makefile b/drivers/virt/gunyah/Makefile
index b41b02792921..2aec5989402b 100644
--- a/drivers/virt/gunyah/Makefile
+++ b/drivers/virt/gunyah/Makefile
@@ -6,3 +6,4 @@ obj-$(CONFIG_GUNYAH) += gunyah.o gunyah_rsc_mgr.o gunyah_vcpu.o
 obj-$(CONFIG_GUNYAH_PLATFORM_HOOKS) += gunyah_platform_hooks.o
 obj-$(CONFIG_GUNYAH_QCOM_PLATFORM) += gunyah_qcom.o
 obj-$(CONFIG_GUNYAH_IRQFD) += gunyah_irqfd.o
+obj-$(CONFIG_GUNYAH_IOEVENTFD) += gunyah_ioeventfd.o
diff --git a/drivers/virt/gunyah/gunyah_ioeventfd.c b/drivers/virt/gunyah/gunyah_ioeventfd.c
new file mode 100644
index 000000000000..e33924d19be4
--- /dev/null
+++ b/drivers/virt/gunyah/gunyah_ioeventfd.c
@@ -0,0 +1,139 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+struct gunyah_ioeventfd {
+	struct gunyah_vm_function_instance *f;
+	struct gunyah_vm_io_handler io_handler;
+
+	struct eventfd_ctx *ctx;
+};
+
+static int gunyah_write_ioeventfd(struct gunyah_vm_io_handler *io_dev, u64 addr,
+				  u32 len, u64 data)
+{
+	struct gunyah_ioeventfd *iofd =
+		container_of(io_dev, struct gunyah_ioeventfd, io_handler);
+
+	eventfd_signal(iofd->ctx);
+	return 0;
+}
+
+static struct gunyah_vm_io_handler_ops io_ops = {
+	.write = gunyah_write_ioeventfd,
+};
+
+static long gunyah_ioeventfd_bind(struct gunyah_vm_function_instance *f)
+{
+	const struct gunyah_fn_ioeventfd_arg *args = f->argp;
+	struct gunyah_ioeventfd *iofd;
+	struct eventfd_ctx *ctx;
+	int ret;
+
+	if (f->arg_size != sizeof(*args))
+		return -EINVAL;
+
+	/* All other flag bits are reserved for future use */
+	if (args->flags & ~GUNYAH_IOEVENTFD_FLAGS_DATAMATCH)
+		return -EINVAL;
+
+	/* must be natural-word sized, or 0 to ignore length */
+	switch (args->len) {
+	case 0:
+	case 1:
+	case 2:
+	case 4:
+	case 8:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* check for range overflow */
+	if (overflows_type(args->addr + args->len, u64))
+		return -EINVAL;
+
+	/* ioeventfd with no length can't be combined with DATAMATCH */
+	if (!args->len && (args->flags & GUNYAH_IOEVENTFD_FLAGS_DATAMATCH))
+		return -EINVAL;
+
+	ctx = eventfd_ctx_fdget(args->fd);
+	if (IS_ERR(ctx))
+		return PTR_ERR(ctx);
+
+	iofd = kzalloc(sizeof(*iofd), GFP_KERNEL);
+	if (!iofd) {
+		ret = -ENOMEM;
+		goto err_eventfd;
+	}
+
+	f->data = iofd;
+	iofd->f = f;
+
+	iofd->ctx = ctx;
+
+	if (args->flags & GUNYAH_IOEVENTFD_FLAGS_DATAMATCH) {
+		iofd->io_handler.datamatch = true;
+		iofd->io_handler.len = args->len;
+		iofd->io_handler.data = args->datamatch;
+	}
+	iofd->io_handler.addr = args->addr;
+	iofd->io_handler.ops = &io_ops;
+
+	ret = gunyah_vm_add_io_handler(f->ghvm, &iofd->io_handler);
+	if (ret)
+		goto err_io_dev_add;
+
+	return 0;
+
+err_io_dev_add:
+	kfree(iofd);
+err_eventfd:
+	eventfd_ctx_put(ctx);
+	return ret;
+}
+
+static void gunyah_ioevent_unbind(struct gunyah_vm_function_instance *f)
+{
+	struct gunyah_ioeventfd *iofd = f->data;
+
+	gunyah_vm_remove_io_handler(iofd->f->ghvm, &iofd->io_handler);
+	eventfd_ctx_put(iofd->ctx);
+	kfree(iofd);
+}
+
+static bool gunyah_ioevent_compare(const struct gunyah_vm_function_instance *f,
+				   const void *arg, size_t size)
+{
+	const struct gunyah_fn_ioeventfd_arg *instance = f->argp, *other = arg;
+
+	if (sizeof(*other) != size)
+		return false;
+
+	if (instance->addr != other->addr || instance->len != other->len ||
+	    instance->flags != other->flags)
+		return false;
+
+	if ((instance->flags & GUNYAH_IOEVENTFD_FLAGS_DATAMATCH) &&
+	    instance->datamatch != other->datamatch)
+		return false;
+
+	return true;
+}
+
+DECLARE_GUNYAH_VM_FUNCTION_INIT(ioeventfd, GUNYAH_FN_IOEVENTFD, 3,
+				gunyah_ioeventfd_bind, gunyah_ioevent_unbind,
+				gunyah_ioevent_compare);
+MODULE_DESCRIPTION("Gunyah ioeventfd VM Function");
+MODULE_LICENSE("GPL");
diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h
index cb7b0bb9bef3..fd461e2fe8b5 100644
--- a/include/uapi/linux/gunyah.h
+++ b/include/uapi/linux/gunyah.h
@@ -65,10 +65,13 @@ struct gunyah_vm_dtb_config {
  *                  Return: file descriptor to manipulate the vcpu.
  * @GUNYAH_FN_IRQFD: register eventfd to assert a Gunyah doorbell
  *                   &struct gunyah_fn_desc.arg is a pointer to &struct gunyah_fn_irqfd_arg
+ * @GUNYAH_FN_IOEVENTFD: register ioeventfd to signal when the VM writes to the
+ *                       registered address
+ *                       &struct gunyah_fn_desc.arg is a pointer to &struct gunyah_fn_ioeventfd_arg
 */
 enum gunyah_fn_type {
 	GUNYAH_FN_VCPU = 1,
 	GUNYAH_FN_IRQFD,
+	GUNYAH_FN_IOEVENTFD,
 };
 
 #define GUNYAH_FN_MAX_ARG_SIZE 256
@@ -120,6 +123,40 @@ struct gunyah_fn_irqfd_arg {
 	__u32 padding;
 };
 
+/**
+ * enum gunyah_ioeventfd_flags - flags for use in gunyah_fn_ioeventfd_arg
+ * @GUNYAH_IOEVENTFD_FLAGS_DATAMATCH: the event will be signaled only if the
+ *                                    value written to the registered address
+ *                                    equals &struct gunyah_fn_ioeventfd_arg.datamatch
+ */
+enum gunyah_ioeventfd_flags {
+	GUNYAH_IOEVENTFD_FLAGS_DATAMATCH	= 1UL << 0,
+};
+
+/**
+ * struct gunyah_fn_ioeventfd_arg - Arguments to create an ioeventfd function
+ * @datamatch: data used when GUNYAH_IOEVENTFD_FLAGS_DATAMATCH is set
+ * @addr: Address in guest memory
+ * @len: Length of access
+ * @fd: When the ioeventfd is matched, this eventfd is signaled
+ * @flags: See &enum gunyah_ioeventfd_flags
+ * @padding: padding bytes
+ *
+ * Create this function with &GUNYAH_VM_ADD_FUNCTION using type &GUNYAH_FN_IOEVENTFD.
+ *
+ * Attaches an ioeventfd to a legal mmio address within the guest. A guest write
+ * to the registered address will signal the provided event instead of triggering
+ * an exit on the GUNYAH_VCPU_RUN ioctl.
+ */
+struct gunyah_fn_ioeventfd_arg {
+	__u64 datamatch;
+	__u64 addr; /* legal mmio address */
+	__u32 len; /* 1, 2, 4, or 8 bytes; or 0 to ignore length */
+	__s32 fd;
+	__u32 flags;
+	__u32 padding;
+};
+
 /**
  * struct gunyah_fn_desc - Arguments to create a VM function
  * @type: Type of the function. See &enum gunyah_fn_type.
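To show how a VMM might use the new function type end to end, here is a minimal
userspace sketch: it registers an eventfd for 4-byte writes of the value 1 to a
hypothetical doorbell register, then blocks on the eventfd instead of taking
vCPU exits. The VM file descriptor is assumed to come from the usual Gunyah VM
creation path, the doorbell address and matched value are invented, and the
gunyah_fn_desc field names (arg_size, arg) are inferred from how the binding
code consumes them (f->arg_size, f->argp) rather than quoted from this excerpt.

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/gunyah.h>

#define DOORBELL_ADDR	0x1c0f0040	/* hypothetical guest MMIO doorbell */

static int add_doorbell_ioeventfd(int vm_fd)
{
	struct gunyah_fn_ioeventfd_arg args = {
		.addr = DOORBELL_ADDR,
		.len = 4,
		.flags = GUNYAH_IOEVENTFD_FLAGS_DATAMATCH,
		.datamatch = 1,		/* only signal when the guest writes 1 */
	};
	struct gunyah_fn_desc desc = { 0 };
	int kick;

	kick = eventfd(0, EFD_CLOEXEC);
	if (kick < 0)
		return -1;

	args.fd = kick;
	desc.type = GUNYAH_FN_IOEVENTFD;
	desc.arg_size = sizeof(args);			/* assumed field name */
	desc.arg = (uint64_t)(uintptr_t)&args;		/* assumed field name */

	if (ioctl(vm_fd, GUNYAH_VM_ADD_FUNCTION, &desc) < 0) {
		close(kick);
		return -1;
	}
	return kick;
}

/* A VMM worker thread can now sleep on the eventfd; the guest's doorbell
 * write no longer causes an exit from the GUNYAH_VCPU_RUN loop. */
static void doorbell_worker(int kick)
{
	uint64_t hits;

	while (read(kick, &hits, sizeof(hits)) == sizeof(hits)) {
		/* service the hypothetical device here */
		printf("doorbell rang %llu time(s)\n", (unsigned long long)hits);
	}
}
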
From patchwork Tue Jan 9 19:38:12 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 761083
From: Elliot Berman
Date: Tue, 9 Jan 2024 11:38:12 -0800
Subject: [PATCH v16 34/34] MAINTAINERS: Add Gunyah hypervisor drivers section
Message-ID: <20240109-gunyah-v16-34-634904bf4ce9@quicinc.com>
References: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
In-Reply-To: <20240109-gunyah-v16-0-634904bf4ce9@quicinc.com>
Add myself and Prakruthi as maintainers of Gunyah hypervisor drivers.

Reviewed-by: Alex Elder
Signed-off-by: Prakruthi Deepak Heragu
Signed-off-by: Elliot Berman
---
 MAINTAINERS | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index fa67e2624723..64f70ef1ef91 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9306,6 +9306,18 @@ L:	linux-efi@vger.kernel.org
 S:	Maintained
 F:	block/partitions/efi.*
 
+GUNYAH HYPERVISOR DRIVER
+M:	Elliot Berman
+M:	Prakruthi Deepak Heragu
+L:	linux-arm-msm@vger.kernel.org
+S:	Supported
+F:	Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml
+F:	Documentation/virt/gunyah/
+F:	arch/arm64/gunyah/
+F:	drivers/virt/gunyah/
+F:	include/linux/gunyah*.h
+K:	gunyah
+
 HABANALABS PCI DRIVER
 M:	Oded Gabbay
 L:	dri-devel@lists.freedesktop.org