From patchwork Wed Apr 30 16:56:43 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 886501
Date: Wed, 30 Apr 2025 17:56:43 +0100
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
References: <20250430165655.605595-1-tabba@google.com>
Message-ID: <20250430165655.605595-2-tabba@google.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org
Subject: [PATCH v8 01/13] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
    vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
    david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
    liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com,
    roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org,
    jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com,
    fvdl@google.com, hughd@google.com, jthoughton@google.com,
    peterx@redhat.com, pankaj.gupta@amd.com, tabba@google.com

The option KVM_PRIVATE_MEM enables guest_memfd in general. Subsequent
patches add shared memory support to guest_memfd.
Therefore, rename it to KVM_GMEM to make its purpose clearer.

Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 include/linux/kvm_host.h        | 10 +++++-----
 virt/kvm/Kconfig                |  8 ++++----
 virt/kvm/Makefile.kvm           |  2 +-
 virt/kvm/kvm_main.c             |  4 ++--
 virt/kvm/kvm_mm.h               |  4 ++--
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7bc174a1f1cb..52f6f6d08558 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2253,7 +2253,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 			     int tdp_max_root_level, int tdp_huge_page_level);

-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 #define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
 #else
 #define kvm_arch_has_private_mem(kvm) false
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 291d49b9bf05..d6900995725d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -601,7 +601,7 @@ struct kvm_memory_slot {
 	short id;
 	u16 as_id;

-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 	struct {
 		/*
 		 * Writes protected by kvm->slots_lock. Acquiring a
@@ -722,7 +722,7 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
  * Arch code must define kvm_arch_has_private_mem if support for private memory
  * is enabled.
  */
-#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_PRIVATE_MEM)
+#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
 static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
 {
 	return false;
@@ -2504,7 +2504,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,

 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 {
-	return IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) &&
+	return IS_ENABLED(CONFIG_KVM_GMEM) &&
 	       kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
 }
 #else
@@ -2514,7 +2514,7 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 }
 #endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */

-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
 		     int *max_order);
@@ -2527,7 +2527,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 	KVM_BUG_ON(1, kvm);
 	return -EIO;
 }
-#endif /* CONFIG_KVM_PRIVATE_MEM */
+#endif /* CONFIG_KVM_GMEM */

 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
 int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 727b542074e7..49df4e32bff7 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -112,19 +112,19 @@ config KVM_GENERIC_MEMORY_ATTRIBUTES
 	depends on KVM_GENERIC_MMU_NOTIFIER
 	bool

-config KVM_PRIVATE_MEM
+config KVM_GMEM
 	select XARRAY_MULTI
 	bool

 config KVM_GENERIC_PRIVATE_MEM
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
-	select KVM_PRIVATE_MEM
+	select KVM_GMEM
 	bool

 config HAVE_KVM_ARCH_GMEM_PREPARE
 	bool
-	depends on KVM_PRIVATE_MEM
+	depends on KVM_GMEM

 config HAVE_KVM_ARCH_GMEM_INVALIDATE
 	bool
-	depends on KVM_PRIVATE_MEM
+	depends on KVM_GMEM
diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index 724c89af78af..8d00918d4c8b 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -12,4 +12,4 @@ kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
 kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
 kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
-kvm-$(CONFIG_KVM_PRIVATE_MEM) += $(KVM)/guest_memfd.o
+kvm-$(CONFIG_KVM_GMEM) += $(KVM)/guest_memfd.o
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e85b33a92624..4996cac41a8f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4842,7 +4842,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_MEMORY_ATTRIBUTES:
 		return kvm_supported_mem_attributes(kvm);
 #endif
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 	case KVM_CAP_GUEST_MEMFD:
 		return !kvm || kvm_arch_has_private_mem(kvm);
 #endif
@@ -5276,7 +5276,7 @@ static long kvm_vm_ioctl(struct file *filp,
 	case KVM_GET_STATS_FD:
 		r = kvm_vm_ioctl_get_stats_fd(kvm);
 		break;
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 	case KVM_CREATE_GUEST_MEMFD: {
 		struct kvm_create_guest_memfd guest_memfd;

diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index acef3f5c582a..ec311c0d6718 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -67,7 +67,7 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
 }
 #endif /* HAVE_KVM_PFNCACHE */

-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 void kvm_gmem_init(struct module *module);
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
@@ -91,6 +91,6 @@ static inline void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 {
 	WARN_ON_ONCE(1);
 }
-#endif /* CONFIG_KVM_PRIVATE_MEM */
+#endif /* CONFIG_KVM_GMEM */

 #endif /* __KVM_MM_H__ */

From patchwork Wed Apr 30 16:56:44 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 886098
Date: Wed, 30 Apr 2025 17:56:44 +0100
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
References: <20250430165655.605595-1-tabba@google.com>
Message-ID: <20250430165655.605595-3-tabba@google.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org
Subject: [PATCH v8 02/13] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

The option KVM_GENERIC_PRIVATE_MEM enables populating a GPA range with
guest data. Rename it to KVM_GENERIC_GMEM_POPULATE to make its purpose
clearer.
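Renames like the ones in this series are mechanical: substitute the new symbol for the old one, then confirm that no references to the old name survive. The sketch below illustrates that workflow on a scratch Kconfig fragment; it is hypothetical and not the actual commands used to prepare this series, and it assumes GNU sed (for `-i` and `\b` word boundaries).

```shell
# Hypothetical sketch of the mechanical part of a config-symbol rename.
# Work on a scratch copy, substitute whole-word matches only, then
# grep for stragglers of the old name.
f=$(mktemp)
printf 'config KVM_GENERIC_PRIVATE_MEM\n\tselect KVM_GMEM\n\tbool\n' > "$f"

# Whole-word substitution, so e.g. a hypothetical FOO_KVM_GENERIC_PRIVATE_MEM_BAR
# would be left alone.
sed -i 's/\bKVM_GENERIC_PRIVATE_MEM\b/KVM_GENERIC_GMEM_POPULATE/g' "$f"

# Audit: any remaining hit on the old name means the rename is incomplete.
grep -c 'KVM_GENERIC_PRIVATE_MEM' "$f" || true   # prints 0
cat "$f"
rm -f "$f"
```

In a real tree one would run the substitution over the files reported by `git grep -l`, then review the resulting diff by hand, since comments and documentation sometimes need rewording rather than a bare symbol swap.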
Co-developed-by: David Hildenbrand Signed-off-by: David Hildenbrand Signed-off-by: Fuad Tabba --- arch/x86/kvm/Kconfig | 4 ++-- include/linux/kvm_host.h | 2 +- virt/kvm/Kconfig | 2 +- virt/kvm/guest_memfd.c | 2 +- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index fe8ea8c097de..b37258253543 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -46,7 +46,7 @@ config KVM_X86 select HAVE_KVM_PM_NOTIFIER if PM select KVM_GENERIC_HARDWARE_ENABLING select KVM_GENERIC_PRE_FAULT_MEMORY - select KVM_GENERIC_PRIVATE_MEM if KVM_SW_PROTECTED_VM + select KVM_GENERIC_GMEM_POPULATE if KVM_SW_PROTECTED_VM select KVM_WERROR if WERROR config KVM @@ -145,7 +145,7 @@ config KVM_AMD_SEV depends on KVM_AMD && X86_64 depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m) select ARCH_HAS_CC_PLATFORM - select KVM_GENERIC_PRIVATE_MEM + select KVM_GENERIC_GMEM_POPULATE select HAVE_KVM_ARCH_GMEM_PREPARE select HAVE_KVM_ARCH_GMEM_INVALIDATE help diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index d6900995725d..7ca23837fa52 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2533,7 +2533,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm, int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order); #endif -#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM +#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE /** * kvm_gmem_populate() - Populate/prepare a GPA range with guest data * diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index 49df4e32bff7..559c93ad90be 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -116,7 +116,7 @@ config KVM_GMEM select XARRAY_MULTI bool -config KVM_GENERIC_PRIVATE_MEM +config KVM_GENERIC_GMEM_POPULATE select KVM_GENERIC_MEMORY_ATTRIBUTES select KVM_GMEM bool diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index b2aa6bf24d3a..befea51bbc75 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -638,7 +638,7 
@@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, } EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn); -#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM +#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages, kvm_gmem_populate_cb post_populate, void *opaque) { From patchwork Wed Apr 30 16:56:45 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 886500 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CC2D9265608 for ; Wed, 30 Apr 2025 16:57:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746032226; cv=none; b=GJn1GWwfwtTAP1CWy8O1tDeZlCZg+vBnD8FnTMttkh34zqFFRFZFTsbK3TiuJSnjyuRTEmOIixqZYnDy3MBNPvxcgreZtJLqlae4HG60VRYRhh4BP1ImyxdS2VZrkptekMkAZsdACCkKa9NnW8PKcF0sj4NjOiY/iwK1CemLYXo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746032226; c=relaxed/simple; bh=vOcLPiQFSqvLB/c5dYxcSswHiowmUfVhjapx+aOzRyQ=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=MMLvTbFHk+Ge7aVRjv87MjP1RXHpYD5Ak2V9Auwm22dnAnAdAYnryFba52ZCRHbcODaUlwKAnoyF9tMcz1KNydlXH36rkD1uTZxm+mTB8olfnTDt50cVYnieOEPblHYv+eCChqGP9hKIcOGwRDtcODUYt8OpQZPA1Cpa/6ew+q8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=cEQ1RlI2; arc=none smtp.client-ip=209.85.128.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com 
Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="cEQ1RlI2" Received: by mail-wm1-f74.google.com with SMTP id 5b1f17b1804b1-43ce8f82e66so60685e9.3 for ; Wed, 30 Apr 2025 09:57:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1746032223; x=1746637023; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=ono0W+Y+/vEWsoVmJ4OjCULk01Pp7fxFrrjUubfzUH4=; b=cEQ1RlI2sr62vquiO8APSFwBGue22dcS3qVepEQ5JuFqDZcIEr+9URFpcnxWYp+hCi PSvRJj1RtJrZ9OrjhLbVeIPtCLs5XM8qbkYSuK6ZUloYf4EVG56CbA4+NazgOdzIL8Ns XmpJ5wifWoZCXqDXheHPUBXGHkZnCavugkCPMNwG8eU22xZl4YouBa4IJQOS3O24edTZ QgJN4SM9/c8hBDrcPI0xp9DYK21FGI3k0rDwyUT97oaoIcyLGMJqdFUO6olXLic/jkq5 Efq36kKResL/rKWlJnfJWV+iMN0LkoteRhYxiT7uYWIoI1PTz13Sbdyo+3DrBh53W11Y jvtg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1746032223; x=1746637023; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=ono0W+Y+/vEWsoVmJ4OjCULk01Pp7fxFrrjUubfzUH4=; b=EU9+o/cFI/iTsUa3dw4s3SQbZlyqArZNdmdk0ye9XBAcEnv69z/gVjv9Jo+s/5Yt/3 rkKdBkY5y1CNKVLDUyMjMvhsM39xD9V4X44VyknkxpjCKX9ueX8TAhUnD+LrA3bV/0Tf r9n9mWiNo/fv0b0GkWdYm9XAFilvuJimE/JtTPzqnJY/Lkk3Ap/nVe61Ft+pPBdUwus8 nwUxNr8p6jPxkmBlNKFUzPu/kSVV2NWSescyLNrKaKS42wE7D6jsrSiHZ/E2ANOW7Uqg 3tiVTQfwDEhvcXW+Yk3TaelsmUmOLN+iVfeqo8u0h7D5lalyh2C7ccpemKSViY24kj0S Bgiw== X-Forwarded-Encrypted: i=1; AJvYcCXAXyewQf/as3jxwVIye902waGnL4D5p2O24BN+Jmezls8LtZQm3MNTwGQAAtcG1AvnCdkm6MP+vzxHuQDi@vger.kernel.org X-Gm-Message-State: AOJu0YxfCfIK+0fPtvmN/MJNy4QW2TYxv71e6Zw++bQj7U/zzQygRJ0j W0d/Hznp2sym/Is4DksVZl1EB4EO4XqJUYtQ3yB8r4HTLrh91By30cRkyfbZcYehsRYAc5J2LQ= = X-Google-Smtp-Source: 
AGHT+IFiay9vVO5GKWifOcJ6D5kD8ctjc9P4X8VLK3L+uvOVYO2aSjWiPigUW5MFB91/q4pM36xBJgSB+w== X-Received: from wmbbe3.prod.google.com ([2002:a05:600c:1e83:b0:440:602a:960f]) (user=tabba job=prod-delivery.src-stubby-dispatcher) by 2002:a05:600c:3b0c:b0:43d:683:8caa with SMTP id 5b1f17b1804b1-441b1f3958emr43326145e9.15.1746032223003; Wed, 30 Apr 2025 09:57:03 -0700 (PDT) Date: Wed, 30 Apr 2025 17:56:45 +0100 In-Reply-To: <20250430165655.605595-1-tabba@google.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250430165655.605595-1-tabba@google.com> X-Mailer: git-send-email 2.49.0.967.g6a0df3ecc3-goog Message-ID: <20250430165655.605595-4-tabba@google.com> Subject: [PATCH v8 03/13] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem() From: Fuad Tabba To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, 
keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, tabba@google.com The function kvm_arch_has_private_mem() is used to indicate whether guest_memfd is supported by the architecture, which until now implies that its private. To decouple guest_memfd support from whether the memory is private, rename this function to kvm_arch_supports_gmem(). Co-developed-by: David Hildenbrand Signed-off-by: David Hildenbrand Signed-off-by: Fuad Tabba --- arch/x86/include/asm/kvm_host.h | 8 ++++---- arch/x86/kvm/mmu/mmu.c | 8 ++++---- include/linux/kvm_host.h | 6 +++--- virt/kvm/kvm_main.c | 6 +++--- 4 files changed, 14 insertions(+), 14 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 52f6f6d08558..4a83fbae7056 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2254,9 +2254,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level, #ifdef CONFIG_KVM_GMEM -#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem) +#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem) #else -#define kvm_arch_has_private_mem(kvm) false +#define kvm_arch_supports_gmem(kvm) false #endif #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state) @@ -2309,8 +2309,8 @@ enum { #define HF_SMM_INSIDE_NMI_MASK (1 << 2) # define KVM_MAX_NR_ADDRESS_SPACES 2 -/* SMM is currently unsupported for guests with private memory. */ -# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_has_private_mem(kvm) ? 1 : 2) +/* SMM is currently unsupported for guests with guest_memfd (esp private) memory. */ +# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_supports_gmem(kvm) ? 1 : 2) # define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 
1 : 0) # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm) #else diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 63bb77ee1bb1..7d654506d800 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4917,7 +4917,7 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu, if (r) return r; - if (kvm_arch_has_private_mem(vcpu->kvm) && + if (kvm_arch_supports_gmem(vcpu->kvm) && kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa))) error_code |= PFERR_PRIVATE_ACCESS; @@ -7683,7 +7683,7 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm, * Zapping SPTEs in this case ensures KVM will reassess whether or not * a hugepage can be used for affected ranges. */ - if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm))) + if (WARN_ON_ONCE(!kvm_arch_supports_gmem(kvm))) return false; /* Unmap the old attribute page. */ @@ -7746,7 +7746,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm, * a range that has PRIVATE GFNs, and conversely converting a range to * SHARED may now allow hugepages. */ - if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm))) + if (WARN_ON_ONCE(!kvm_arch_supports_gmem(kvm))) return false; /* @@ -7802,7 +7802,7 @@ void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm, { int level; - if (!kvm_arch_has_private_mem(kvm)) + if (!kvm_arch_supports_gmem(kvm)) return; for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) { diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 7ca23837fa52..6ca7279520cf 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -719,11 +719,11 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu) #endif /* - * Arch code must define kvm_arch_has_private_mem if support for private memory + * Arch code must define kvm_arch_supports_gmem if support for guest_memfd * is enabled. 
*/ -#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_GMEM) -static inline bool kvm_arch_has_private_mem(struct kvm *kvm) +#if !defined(kvm_arch_supports_gmem) && !IS_ENABLED(CONFIG_KVM_GMEM) +static inline bool kvm_arch_supports_gmem(struct kvm *kvm) { return false; } diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 4996cac41a8f..2468d50a9ed4 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1531,7 +1531,7 @@ static int check_memory_region_flags(struct kvm *kvm, { u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES; - if (kvm_arch_has_private_mem(kvm)) + if (kvm_arch_supports_gmem(kvm)) valid_flags |= KVM_MEM_GUEST_MEMFD; /* Dirty logging private memory is not currently supported. */ @@ -2362,7 +2362,7 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES static u64 kvm_supported_mem_attributes(struct kvm *kvm) { - if (!kvm || kvm_arch_has_private_mem(kvm)) + if (!kvm || kvm_arch_supports_gmem(kvm)) return KVM_MEMORY_ATTRIBUTE_PRIVATE; return 0; @@ -4844,7 +4844,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg) #endif #ifdef CONFIG_KVM_GMEM case KVM_CAP_GUEST_MEMFD: - return !kvm || kvm_arch_has_private_mem(kvm); + return !kvm || kvm_arch_supports_gmem(kvm); #endif default: break; From patchwork Wed Apr 30 16:56:46 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 886097 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AD19C14AD2D for ; Wed, 30 Apr 2025 16:57:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746032228; cv=none; 
b=pGz1Tv9mVg1XtjW9PSx12O7PFWRKvmQEZFT/P8/Fwp0lmnllVQAEqxuIByyoGj3VGa9cUaKwCFHJFcKlB2kSeRbN9QYGpxMeoenr2/KWNH4I8AntAQt/b6ElVN5TSJGQnVCvGWX6H9yW/mRxV9gp0Do7euoGNcOccDCTDNpQB9Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746032228; c=relaxed/simple; bh=0TQvTx5CXZPmFiUOoGx3wMGS7zheBLdOgtENc3r3J0g=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=AHGzYogYhRQVpLRqvW8Vglc/VG1ZOMSNu8z/VI5Kop67IRWhU25a3RzfQGqbFwIJmYkrefwMYtlJznoq6DR7UbiTfm3qYiDW1TIsuOvm6sumrU15IDu3Q+RbKr/E1YCbYnXIexD2hXpXXkO8QLy/M1iJSh53s+dxL+qJDjuMx7s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=rKeOxJ8Y; arc=none smtp.client-ip=209.85.128.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="rKeOxJ8Y" Received: by mail-wm1-f73.google.com with SMTP id 5b1f17b1804b1-44059976a1fso102045e9.1 for ; Wed, 30 Apr 2025 09:57:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1746032225; x=1746637025; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=d+iNXVDAdj/+L0buigDqwZwszhQH/OtOVJlCHKS7HfQ=; b=rKeOxJ8Ykydsysr11XEtuJrMc2JkTivZMIPHy16Mg6T+kLJc6505G1E6B2GXprUgOn 823v8IuRVmKb6N+wedsRb/BrOVWUTjpnnKjUDK37QLPyadpAfeKGeHastNsRa++SPTKG AB2YnlCDjILDv1PP9QnFrftBROcrakDIwZaoTGba2xEPJhoDozfC4PIkIs/REk2QhBca h8iNBWK0Fsz/xrjlr6CKI6yOjhR+0JIZNhaoel2Nt0zJg9TBZMSGBOL4Ac/UmsKlHqzK 
Date: Wed, 30 Apr 2025 17:56:46 +0100
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
References: <20250430165655.605595-1-tabba@google.com>
Message-ID: <20250430165655.605595-5-tabba@google.com>
Subject: [PATCH v8 04/13] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com,
    chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, tabba@google.com

The bool has_private_mem is used to indicate whether guest_memfd is
supported. Rename it to supports_gmem to make its meaning clearer and to
decouple memory being private from guest_memfd.
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h | 4 ++--
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/svm/svm.c          | 4 ++--
 arch/x86/kvm/x86.c              | 3 +--
 4 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4a83fbae7056..709cc2a7ba66 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1331,7 +1331,7 @@ struct kvm_arch {
 	unsigned int indirect_shadow_pages;
 	u8 mmu_valid_gen;
 	u8 vm_type;
-	bool has_private_mem;
+	bool supports_gmem;
 	bool has_protected_state;
 	bool pre_fault_allowed;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
@@ -2254,7 +2254,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 #ifdef CONFIG_KVM_GMEM
-#define kvm_arch_supports_gmem(kvm)  ((kvm)->arch.has_private_mem)
+#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
 #else
 #define kvm_arch_supports_gmem(kvm) false
 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7d654506d800..734d71ec97ef 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3486,7 +3486,7 @@ static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault
 	 * on RET_PF_SPURIOUS until the update completes, or an actual spurious
 	 * case might go down the slow path. Either case will resolve itself.
	 */
-	if (kvm->arch.has_private_mem &&
+	if (kvm->arch.supports_gmem &&
 	    fault->is_private != kvm_mem_is_private(kvm, fault->gfn))
 		return false;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d5d0c5c3300b..b391dd6208cf 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5048,8 +5048,8 @@ static int svm_vm_init(struct kvm *kvm)
 			(type == KVM_X86_SEV_ES_VM || type == KVM_X86_SNP_VM);
 		to_kvm_sev_info(kvm)->need_init = true;
 
-		kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
-		kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
+		kvm->arch.supports_gmem = (type == KVM_X86_SNP_VM);
+		kvm->arch.pre_fault_allowed = !kvm->arch.supports_gmem;
 	}
 
 	if (!pause_filter_count || !pause_filter_thresh)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index df5b99ea1f18..5b11ef131d5c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12716,8 +12716,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 		return -EINVAL;
 
 	kvm->arch.vm_type = type;
-	kvm->arch.has_private_mem =
-		(type == KVM_X86_SW_PROTECTED_VM);
+	kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
 	/* Decided by the vendor code for other VM types.
	 */
 	kvm->arch.pre_fault_allowed = type == KVM_X86_DEFAULT_VM ||
 				      type == KVM_X86_SW_PROTECTED_VM;

From patchwork Wed Apr 30 16:56:47 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 886499
Date: Wed, 30 Apr 2025 17:56:47 +0100
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
References: <20250430165655.605595-1-tabba@google.com>
Message-ID: <20250430165655.605595-6-tabba@google.com>
Subject: [PATCH v8 05/13] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem()
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
The function kvm_slot_can_be_private() is used to check whether a memory
slot is backed by guest_memfd. Rename it to kvm_slot_has_gmem() to make
that clearer and to decouple memory being private from guest_memfd.

Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/kvm/mmu/mmu.c   | 4 ++--
 arch/x86/kvm/svm/sev.c   | 4 ++--
 include/linux/kvm_host.h | 2 +-
 virt/kvm/guest_memfd.c   | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 734d71ec97ef..6d5dd869c890 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3283,7 +3283,7 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	bool is_private = kvm_slot_can_be_private(slot) &&
+	bool is_private = kvm_slot_has_gmem(slot) &&
 			  kvm_mem_is_private(kvm, gfn);
 
 	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
@@ -4496,7 +4496,7 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 {
 	int max_order, r;
 
-	if (!kvm_slot_can_be_private(fault->slot)) {
+	if (!kvm_slot_has_gmem(fault->slot)) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return -EFAULT;
 	}
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 0bc708ee2788..fbf55821d62e 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2378,7 +2378,7 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	mutex_lock(&kvm->slots_lock);
 
 	memslot = gfn_to_memslot(kvm, params.gfn_start);
-	if (!kvm_slot_can_be_private(memslot)) {
+	if (!kvm_slot_has_gmem(memslot)) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -4682,7 +4682,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code)
 	}
 
 	slot = gfn_to_memslot(kvm, gfn);
-	if (!kvm_slot_can_be_private(slot)) {
+	if (!kvm_slot_has_gmem(slot)) {
 		pr_warn_ratelimited("SEV: Unexpected RMP fault,
non-private slot for GPA 0x%llx\n", gpa);
 		return;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6ca7279520cf..d9616ee6acc7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -614,7 +614,7 @@ struct kvm_memory_slot {
 #endif
 };
 
-static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot)
+static inline bool kvm_slot_has_gmem(const struct kvm_memory_slot *slot)
 {
 	return slot && (slot->flags & KVM_MEM_GUEST_MEMFD);
 }
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index befea51bbc75..6db515833f61 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -654,7 +654,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 		return -EINVAL;
 
 	slot = gfn_to_memslot(kvm, start_gfn);
-	if (!kvm_slot_can_be_private(slot))
+	if (!kvm_slot_has_gmem(slot))
 		return -EINVAL;
 
 	file = kvm_gmem_get_file(slot);

From patchwork Wed Apr 30 16:56:48 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 886096
Date: Wed, 30 Apr 2025 17:56:48 +0100
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
References: <20250430165655.605595-1-tabba@google.com>
Message-ID: <20250430165655.605595-7-tabba@google.com>
Subject: [PATCH v8 06/13] KVM: x86: Generalize private fault lookups to guest_memfd fault lookups
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Until now, faults to private memory backed by guest_memfd have always
been served from guest_memfd, whereas faults to shared memory are served
from anonymous memory. Subsequent patches will allow sharing
guest_memfd-backed memory in place and will let the host map it; faults
to in-place shared memory should then be served from guest_memfd as
well. To facilitate that, generalize the fault lookups. Since only
private memory is currently served from guest_memfd, this patch does not
change behavior.
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/kvm/mmu/mmu.c   | 19 +++++++++----------
 include/linux/kvm_host.h |  6 ++++++
 2 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6d5dd869c890..08eebd24a0e1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3258,7 +3258,7 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 				       const struct kvm_memory_slot *slot,
-				       gfn_t gfn, int max_level, bool is_private)
+				       gfn_t gfn, int max_level, bool is_gmem)
 {
 	struct kvm_lpage_info *linfo;
 	int host_level;
@@ -3270,7 +3270,7 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 			break;
 	}
 
-	if (is_private)
+	if (is_gmem)
 		return max_level;
 
 	if (max_level == PG_LEVEL_4K)
@@ -3283,10 +3283,9 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	bool is_private = kvm_slot_has_gmem(slot) &&
-			  kvm_mem_is_private(kvm, gfn);
+	bool is_gmem = kvm_slot_has_gmem(slot) && kvm_mem_from_gmem(kvm, gfn);
 
-	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
+	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_gmem);
 }
 
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -4465,7 +4464,7 @@ static inline u8 kvm_max_level_for_order(int order)
 	return PG_LEVEL_4K;
 }
 
-static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
+static u8 kvm_max_gmem_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
 					u8 max_level, int gmem_order)
 {
 	u8 req_max_level;
@@ -4491,7 +4490,7 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 			      r == RET_PF_RETRY, fault->map_writable);
 }
 
-static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
+static int kvm_mmu_faultin_pfn_gmem(struct kvm_vcpu *vcpu,
 				       struct kvm_page_fault
 				       *fault)
 {
 	int max_order, r;
@@ -4509,8 +4508,8 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	}
 
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
-	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn,
-							 fault->max_level, max_order);
+	fault->max_level = kvm_max_gmem_mapping_level(vcpu->kvm, fault->pfn,
+						      fault->max_level, max_order);
 
 	return RET_PF_CONTINUE;
 }
@@ -4521,7 +4520,7 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 	unsigned int foll = fault->write ? FOLL_WRITE : 0;
 
 	if (fault->is_private)
-		return kvm_mmu_faultin_pfn_private(vcpu, fault);
+		return kvm_mmu_faultin_pfn_gmem(vcpu, fault);
 
 	foll |= FOLL_NOWAIT;
 	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d9616ee6acc7..cdcd7ac091b5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2514,6 +2514,12 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 }
 #endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */
 
+static inline bool kvm_mem_from_gmem(struct kvm *kvm, gfn_t gfn)
+{
+	/* For now, only private memory gets consumed from guest_memfd.
+	 */
+	return kvm_mem_is_private(kvm, gfn);
+}
+
 #ifdef CONFIG_KVM_GMEM
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,

From patchwork Wed Apr 30 16:56:49 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 886498
Date: Wed, 30 Apr 2025 17:56:49 +0100
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
References: <20250430165655.605595-1-tabba@google.com>
Message-ID: <20250430165655.605595-8-tabba@google.com>
Subject: [PATCH v8 07/13] KVM: Fix comments that refer to slots_lock
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Fix comments so that they refer to slots_lock instead of slots_locks
(remove the trailing s).

Signed-off-by: Fuad Tabba
---
 include/linux/kvm_host.h | 2 +-
 virt/kvm/kvm_main.c      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index cdcd7ac091b5..9419fb99f7c2 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -859,7 +859,7 @@ struct kvm {
 	struct notifier_block pm_notifier;
 #endif
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
-	/* Protected by slots_locks (for writes) and RCU (for reads) */
+	/* Protected by slots_lock (for writes) and RCU (for reads) */
 	struct xarray mem_attr_array;
 #endif
 	char stats_id[KVM_STATS_NAME_SIZE];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2468d50a9ed4..6289ea1685dd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -333,7 +333,7 @@ void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
 	 * All current use cases for flushing the TLBs for a specific memslot
 	 * are related to dirty logging, and many do the TLB flush out of
 	 * mmu_lock. The interaction between the various operations on memslot
-	 * must be serialized by slots_locks to ensure the TLB flush from one
+	 * must be serialized by slots_lock to ensure the TLB flush from one
 	 * operation is observed by any other operation on the same memslot.
	 */
 	lockdep_assert_held(&kvm->slots_lock);

From patchwork Wed Apr 30 16:56:50 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 886095
From patchwork Wed Apr 30 16:56:50 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 886095
Date: Wed, 30 Apr 2025 17:56:50 +0100
Message-ID: <20250430165655.605595-9-tabba@google.com>
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
Subject: [PATCH v8 08/13] KVM: guest_memfd: Allow host to map guest_memfd() pages
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, tabba@google.com

Add support for mmap() and fault() for
guest_memfd backed memory in the host
for VMs that support in-place conversion between shared and private.
To that end, this patch adds the ability to check whether the VM type
supports in-place conversion, and only allows mapping its memory if
that's the case.

This patch introduces the configuration option KVM_GMEM_SHARED_MEM,
which enables support for in-place shared memory. It also introduces
the KVM capability KVM_CAP_GMEM_SHARED_MEM, which indicates that the
host can create VMs that support shared memory. Supporting shared
memory implies that memory can be mapped when shared with the host.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/linux/kvm_host.h | 15 ++++++-
 include/uapi/linux/kvm.h |  1 +
 virt/kvm/Kconfig         |  5 +++
 virt/kvm/guest_memfd.c   | 92 ++++++++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c      |  4 ++
 5 files changed, 116 insertions(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9419fb99f7c2..f3af6bff3232 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -729,6 +729,17 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
 }
 #endif
 
+/*
+ * Arch code must define kvm_arch_gmem_supports_shared_mem if support for
+ * private memory is enabled and it supports in-place shared/private conversion.
+ */
+#if !defined(kvm_arch_gmem_supports_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM)
+static inline bool kvm_arch_gmem_supports_shared_mem(struct kvm *kvm)
+{
+	return false;
+}
+#endif
+
 #ifndef kvm_arch_has_readonly_mem
 static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
 {
@@ -2516,7 +2527,9 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 
 static inline bool kvm_mem_from_gmem(struct kvm *kvm, gfn_t gfn)
 {
-	/* For now, only private memory gets consumed from guest_memfd. */
+	if (kvm_arch_gmem_supports_shared_mem(kvm))
+		return true;
+
 	return kvm_mem_is_private(kvm, gfn);
 }
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index b6ae8ad8934b..8bc8046c7f3a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -930,6 +930,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
 #define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239
+#define KVM_CAP_GMEM_SHARED_MEM 240
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 559c93ad90be..f4e469a62a60 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
 config HAVE_KVM_ARCH_GMEM_INVALIDATE
 	bool
 	depends on KVM_GMEM
+
+config KVM_GMEM_SHARED_MEM
+	select KVM_GMEM
+	bool
+	prompt "Enables in-place shared memory for guest_memfd"
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 6db515833f61..8bc8fc991d58 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -312,7 +312,99 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
 	return gfn - slot->base_gfn + slot->gmem.pgoff;
 }
 
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+/*
+ * Returns true if the folio is shared with the host and the guest.
+ */
+static bool kvm_gmem_offset_is_shared(struct file *file, pgoff_t index)
+{
+	struct kvm_gmem *gmem = file->private_data;
+
+	/* For now, VMs that support shared memory share all their memory. */
+	return kvm_arch_gmem_supports_shared_mem(gmem->kvm);
+}
+
+static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
+{
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	struct folio *folio;
+	vm_fault_t ret = VM_FAULT_LOCKED;
+
+	filemap_invalidate_lock_shared(inode->i_mapping);
+
+	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
+	if (IS_ERR(folio)) {
+		int err = PTR_ERR(folio);
+
+		if (err == -EAGAIN)
+			ret = VM_FAULT_RETRY;
+		else
+			ret = vmf_error(err);
+
+		goto out_filemap;
+	}
+
+	if (folio_test_hwpoison(folio)) {
+		ret = VM_FAULT_HWPOISON;
+		goto out_folio;
+	}
+
+	if (!kvm_gmem_offset_is_shared(vmf->vma->vm_file, vmf->pgoff)) {
+		ret = VM_FAULT_SIGBUS;
+		goto out_folio;
+	}
+
+	if (WARN_ON_ONCE(folio_test_large(folio))) {
+		ret = VM_FAULT_SIGBUS;
+		goto out_folio;
+	}
+
+	if (!folio_test_uptodate(folio)) {
+		clear_highpage(folio_page(folio, 0));
+		kvm_gmem_mark_prepared(folio);
+	}
+
+	vmf->page = folio_file_page(folio, vmf->pgoff);
+
+out_folio:
+	if (ret != VM_FAULT_LOCKED) {
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+out_filemap:
+	filemap_invalidate_unlock_shared(inode->i_mapping);
+
+	return ret;
+}
+
+static const struct vm_operations_struct kvm_gmem_vm_ops = {
+	.fault = kvm_gmem_fault_shared,
+};
+
+static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct kvm_gmem *gmem = file->private_data;
+
+	if (!kvm_arch_gmem_supports_shared_mem(gmem->kvm))
+		return -ENODEV;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
+	    (VM_SHARED | VM_MAYSHARE)) {
+		return -EINVAL;
+	}
+
+	vm_flags_set(vma, VM_DONTDUMP);
+	vma->vm_ops = &kvm_gmem_vm_ops;
+
+	return 0;
+}
+#else
+#define kvm_gmem_mmap NULL
+#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
+
 static struct file_operations kvm_gmem_fops = {
+	.mmap = kvm_gmem_mmap,
 	.open = generic_file_open,
 	.release = kvm_gmem_release,
 	.fallocate = kvm_gmem_fallocate,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6289ea1685dd..c75d8e188eb7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4845,6 +4845,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #ifdef CONFIG_KVM_GMEM
 	case KVM_CAP_GUEST_MEMFD:
 		return !kvm || kvm_arch_supports_gmem(kvm);
+#endif
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+	case KVM_CAP_GMEM_SHARED_MEM:
+		return !kvm || kvm_arch_gmem_supports_shared_mem(kvm);
 #endif
 	default:
 		break;
From patchwork Wed Apr 30 16:56:51 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 886497
Date: Wed, 30 Apr 2025 17:56:51 +0100
Message-ID: <20250430165655.605595-10-tabba@google.com>
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
Subject: [PATCH v8 09/13] KVM: arm64: Refactor user_mem_abort() calculation of force_pte
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
To simplify the code and to make the assumptions clearer, refactor
user_mem_abort() by immediately setting force_pte to true if the
conditions are met. Also, remove the comment about logging_active being
guaranteed to never be true for VM_PFNMAP memslots, since it's not
actually correct.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/mmu.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 754f2fe0cc67..148a97c129de 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1472,7 +1472,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  bool fault_is_perm)
 {
 	int ret = 0;
-	bool write_fault, writable, force_pte = false;
+	bool write_fault, writable;
 	bool exec_fault, mte_allowed;
 	bool device = false, vfio_allow_any_uc = false;
 	unsigned long mmu_seq;
@@ -1484,6 +1484,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
+	bool force_pte = logging_active || is_protected_kvm_enabled();
 	long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
@@ -1533,16 +1534,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	/*
-	 * logging_active is guaranteed to never be true for VM_PFNMAP
-	 * memslots.
-	 */
-	if (logging_active || is_protected_kvm_enabled()) {
-		force_pte = true;
+	if (force_pte)
 		vma_shift = PAGE_SHIFT;
-	} else {
+	else
 		vma_shift = get_vma_page_shift(vma, hva);
-	}
 
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
From patchwork Wed Apr 30 16:56:52 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 886094
Date: Wed, 30 Apr 2025 17:56:52 +0100
Message-ID: <20250430165655.605595-11-tabba@google.com>
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
Subject: [PATCH v8 10/13] KVM: arm64: Handle guest_memfd()-backed guest page faults
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Add arm64 support for handling guest page faults on guest_memfd backed
memslots. For now, the fault granule is restricted to PAGE_SIZE.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/mmu.c     | 65 +++++++++++++++++++++++++++-------------
 include/linux/kvm_host.h |  5 ++++
 virt/kvm/kvm_main.c      |  5 ----
 3 files changed, 50 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 148a97c129de..d1044c7f78bb 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1466,6 +1466,30 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }
 
+static kvm_pfn_t faultin_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+			     gfn_t gfn, bool write_fault, bool *writable,
+			     struct page **page, bool is_gmem)
+{
+	kvm_pfn_t pfn;
+	int ret;
+
+	if (!is_gmem)
+		return __kvm_faultin_pfn(slot, gfn, write_fault ? FOLL_WRITE : 0,
+					 writable, page);
+
+	*writable = false;
+
+	ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, page, NULL);
+	if (!ret) {
+		*writable = !memslot_is_readonly(slot);
+		return pfn;
+	}
+
+	if (ret == -EHWPOISON)
+		return KVM_PFN_ERR_HWPOISON;
+
+	return KVM_PFN_ERR_NOSLOT_MASK;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_s2_trans *nested,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
@@ -1473,19 +1497,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 {
 	int ret = 0;
 	bool write_fault, writable;
-	bool exec_fault, mte_allowed;
+	bool exec_fault, mte_allowed = false;
 	bool device = false, vfio_allow_any_uc = false;
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct vm_area_struct *vma;
+	struct vm_area_struct *vma = NULL;
 	short vma_shift;
 	void *memcache;
-	gfn_t gfn;
+	gfn_t gfn = ipa >> PAGE_SHIFT;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
-	bool force_pte = logging_active || is_protected_kvm_enabled();
-	long vma_pagesize, fault_granule;
+	bool is_gmem = kvm_slot_has_gmem(memslot) && kvm_mem_from_gmem(kvm, gfn);
+	bool force_pte = logging_active || is_gmem || is_protected_kvm_enabled();
+	long vma_pagesize, fault_granule = PAGE_SIZE;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
@@ -1522,16 +1547,22 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return ret;
 	}
 
+	mmap_read_lock(current->mm);
+
 	/*
 	 * Let's check if we will get back a huge page backed by hugetlbfs, or
 	 * get block mapping for device MMIO region.
 	 */
-	mmap_read_lock(current->mm);
-	vma = vma_lookup(current->mm, hva);
-	if (unlikely(!vma)) {
-		kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
-		mmap_read_unlock(current->mm);
-		return -EFAULT;
+	if (!is_gmem) {
+		vma = vma_lookup(current->mm, hva);
+		if (unlikely(!vma)) {
+			kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
+			mmap_read_unlock(current->mm);
+			return -EFAULT;
+		}
+
+		vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
+		mte_allowed = kvm_vma_mte_allowed(vma);
 	}
 
 	if (force_pte)
@@ -1602,18 +1633,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		ipa &= ~(vma_pagesize - 1);
 	}
 
-	gfn = ipa >> PAGE_SHIFT;
-	mte_allowed = kvm_vma_mte_allowed(vma);
-
-	vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
-
 	/* Don't use the VMA after the unlock -- it may have vanished */
 	vma = NULL;
 
 	/*
 	 * Read mmu_invalidate_seq so that KVM can detect if the results of
-	 * vma_lookup() or __kvm_faultin_pfn() become stale prior to
-	 * acquiring kvm->mmu_lock.
+	 * vma_lookup() or faultin_pfn() become stale prior to acquiring
+	 * kvm->mmu_lock.
 	 *
 	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
 	 * with the smp_wmb() in kvm_mmu_invalidate_end().
@@ -1621,8 +1647,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);
 
-	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
-				&writable, &page);
+	pfn = faultin_pfn(kvm, memslot, gfn, write_fault, &writable, &page, is_gmem);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, vma_shift);
 		return 0;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f3af6bff3232..1b2e4e9a7802 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1882,6 +1882,11 @@ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
 	return gfn_to_memslot(kvm, gfn)->id;
 }
 
+static inline bool memslot_is_readonly(const struct kvm_memory_slot *slot)
+{
+	return slot->flags & KVM_MEM_READONLY;
+}
+
 static inline gfn_t hva_to_gfn_memslot(unsigned long hva,
 				       struct kvm_memory_slot *slot)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c75d8e188eb7..d9bca5ba19dc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2640,11 +2640,6 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
 	return size;
 }
 
-static bool memslot_is_readonly(const struct kvm_memory_slot *slot)
-{
-	return slot->flags & KVM_MEM_READONLY;
-}
-
 static unsigned long __gfn_to_hva_many(const struct kvm_memory_slot *slot, gfn_t gfn,
 				       gfn_t *nr_pages, bool write)
 {
From patchwork Wed Apr 30 16:56:53 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 886496
Date: Wed, 30 Apr 2025 17:56:53 +0100
Message-ID: <20250430165655.605595-12-tabba@google.com>
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
Subject: [PATCH v8 11/13] KVM: arm64: Enable mapping guest_memfd in arm64
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, tabba@google.com Enable mapping guest_memfd in arm64. For now, it applies to all VMs in arm64 that use guest_memfd. In the future, new VM types can restrict this via kvm_arch_gmem_supports_shared_mem(). 
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 12 ++++++++++++
 arch/arm64/kvm/Kconfig            |  1 +
 2 files changed, 13 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 08ba91e6fb03..1b1753e8021a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1593,4 +1593,16 @@ static inline bool kvm_arch_has_irq_bypass(void)
 	return true;
 }
 
+#ifdef CONFIG_KVM_GMEM
+static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
+{
+	return IS_ENABLED(CONFIG_KVM_GMEM);
+}
+
+static inline bool kvm_arch_gmem_supports_shared_mem(struct kvm *kvm)
+{
+	return IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM);
+}
+#endif /* CONFIG_KVM_GMEM */
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 096e45acadb2..8c1e1964b46a 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -38,6 +38,7 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
+	select KVM_GMEM_SHARED_MEM
 	help
 	  Support hosting virtualized guest machines.
From patchwork Wed Apr 30 16:56:54 2025
Date: Wed, 30 Apr 2025 17:56:54 +0100
From: Fuad Tabba <tabba@google.com>
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
References: <20250430165655.605595-1-tabba@google.com>
Message-ID: <20250430165655.605595-13-tabba@google.com>
Subject: [PATCH v8 12/13] KVM: x86: KVM_X86_SW_PROTECTED_VM to support guest_memfd shared memory

The KVM_X86_SW_PROTECTED_VM type is meant for experimentation and does
not have underlying support for protected guests. This makes it a good
candidate for testing shared-memory mapping. Therefore, mark this VM
type as supporting shared memory only when the Kconfig option for
in-place shared memory (KVM_GMEM_SHARED_MEM) is enabled. guest_memfd
then considers this memory shared with the host, which is able to map
and fault in guest_memfd memory belonging to this VM type.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/x86/include/asm/kvm_host.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 709cc2a7ba66..1858dde449c3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2255,8 +2255,13 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 
 #ifdef CONFIG_KVM_GMEM
 #define kvm_arch_supports_gmem(kvm)  ((kvm)->arch.supports_gmem)
+
+#define kvm_arch_gmem_supports_shared_mem(kvm)			\
+	(IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) &&		\
+	 ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM))
 #else
 #define kvm_arch_supports_gmem(kvm) false
+#define kvm_arch_gmem_supports_shared_mem(kvm) false
 #endif
 
 #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)

From patchwork Wed Apr 30 16:56:55 2025
Date: Wed, 30 Apr 2025 17:56:55 +0100
From: Fuad Tabba <tabba@google.com>
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
References: <20250430165655.605595-1-tabba@google.com>
Message-ID: <20250430165655.605595-14-tabba@google.com>
Subject: [PATCH v8 13/13] KVM: guest_memfd: selftests: guest_memfd mmap() test when mapping is allowed

Expand the guest_memfd selftests to cover mapping guest memory for VM
types that support it. Also, build the guest_memfd selftest for arm64.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 tools/testing/selftests/kvm/Makefile.kvm      |  1 +
 .../testing/selftests/kvm/guest_memfd_test.c  | 75 +++++++++++++++++--
 2 files changed, 70 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index f62b0a5aba35..ccf95ed037c3 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
 TEST_GEN_PROGS_arm64 += arch_timer
 TEST_GEN_PROGS_arm64 += coalesced_io_test
 TEST_GEN_PROGS_arm64 += dirty_log_perf_test
+TEST_GEN_PROGS_arm64 += guest_memfd_test
 TEST_GEN_PROGS_arm64 += get-reg-list
 TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
 TEST_GEN_PROGS_arm64 += memslot_perf_test
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ce687f8d248f..bd35b56c90dc 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -34,12 +34,48 @@ static void test_file_read_write(int fd)
 		    "pwrite on a guest_mem fd should fail");
 }
 
-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_allowed(int fd, size_t total_size)
 {
+	size_t page_size = getpagesize();
+	const char val = 0xaa;
+	char *mem;
+	size_t i;
+	int ret;
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+			page_size);
+	TEST_ASSERT(!ret, "fallocate the first page should succeed");
+
+	for (i = 0; i < page_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0x00);
+	for (; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = munmap(mem, total_size);
+	TEST_ASSERT(!ret, "munmap should succeed");
+}
+
+static void test_mmap_denied(int fd, size_t total_size)
+{
+	size_t page_size = getpagesize();
 	char *mem;
 
 	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 	TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT_EQ(mem, MAP_FAILED);
 }
 
 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -170,19 +206,27 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }
 
-int main(int argc, char *argv[])
+unsigned long get_shared_type(void)
 {
-	size_t page_size;
+#ifdef __x86_64__
+	return KVM_X86_SW_PROTECTED_VM;
+#endif
+	return 0;
+}
+
+void test_vm_type(unsigned long type, bool is_shared)
+{
+	struct kvm_vm *vm;
 	size_t total_size;
+	size_t page_size;
 	int fd;
-	struct kvm_vm *vm;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
 
 	page_size = getpagesize();
 	total_size = page_size * 4;
 
-	vm = vm_create_barebones();
+	vm = vm_create_barebones_type(type);
 
 	test_create_guest_memfd_invalid(vm);
 	test_create_guest_memfd_multiple(vm);
@@ -190,10 +234,29 @@ int main(int argc, char *argv[])
 	fd = vm_create_guest_memfd(vm, total_size, 0);
 
 	test_file_read_write(fd);
-	test_mmap(fd, page_size);
+
+	if (is_shared)
+		test_mmap_allowed(fd, total_size);
+	else
+		test_mmap_denied(fd, total_size);
+
 	test_file_size(fd, page_size, total_size);
 	test_fallocate(fd, page_size, total_size);
 	test_invalid_punch_hole(fd, page_size, total_size);
 
 	close(fd);
+	kvm_vm_release(vm);
+}
+
+int main(int argc, char *argv[])
+{
+#ifndef __aarch64__
+	/* For now, arm64 only supports shared guest memory. */
+	test_vm_type(VM_TYPE_DEFAULT, false);
+#endif
+
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
+		test_vm_type(get_shared_type(), true);
+
+	return 0;
+}