From patchwork Tue May 13 16:34:23 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889652
Date: Tue, 13 May 2025 17:34:23 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-3-tabba@google.com>
Subject: [PATCH v9 02/17] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
    vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
    david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
    liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
    shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
    jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
    jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
    ira.weiny@intel.com, tabba@google.com

The option KVM_GENERIC_PRIVATE_MEM enables populating a GPA range with
guest data. Rename it to KVM_GENERIC_GMEM_POPULATE to make its purpose
clearer.
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/kvm/Kconfig     | 4 ++--
 include/linux/kvm_host.h | 2 +-
 virt/kvm/Kconfig         | 2 +-
 virt/kvm/guest_memfd.c   | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fe8ea8c097de..b37258253543 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -46,7 +46,7 @@ config KVM_X86
 	select HAVE_KVM_PM_NOTIFIER if PM
 	select KVM_GENERIC_HARDWARE_ENABLING
 	select KVM_GENERIC_PRE_FAULT_MEMORY
-	select KVM_GENERIC_PRIVATE_MEM if KVM_SW_PROTECTED_VM
+	select KVM_GENERIC_GMEM_POPULATE if KVM_SW_PROTECTED_VM
 	select KVM_WERROR if WERROR
 
 config KVM
@@ -145,7 +145,7 @@ config KVM_AMD_SEV
 	depends on KVM_AMD && X86_64
 	depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
 	select ARCH_HAS_CC_PLATFORM
-	select KVM_GENERIC_PRIVATE_MEM
+	select KVM_GENERIC_GMEM_POPULATE
 	select HAVE_KVM_ARCH_GMEM_PREPARE
 	select HAVE_KVM_ARCH_GMEM_INVALIDATE
 	help
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d6900995725d..7ca23837fa52 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2533,7 +2533,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
 #endif
 
-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 /**
  * kvm_gmem_populate() - Populate/prepare a GPA range with guest data
  *
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 49df4e32bff7..559c93ad90be 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -116,7 +116,7 @@ config KVM_GMEM
 	select XARRAY_MULTI
 	bool
 
-config KVM_GENERIC_PRIVATE_MEM
+config KVM_GENERIC_GMEM_POPULATE
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
 	select KVM_GMEM
 	bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b2aa6bf24d3a..befea51bbc75 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -638,7 +638,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque)
 {
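For context on what the renamed option gates: below is a minimal sketch of
an arch-side consumer of kvm_gmem_populate(), whose signature is visible in
the hunk above. The my_arch_* names are hypothetical, the per-page callback
signature is assumed to follow the kvm_gmem_populate_cb typedef, and the
real consumers are vendor flows such as SEV-SNP launch update.

/* Hypothetical arch code, built only when KVM_GENERIC_GMEM_POPULATE is set. */
static int my_arch_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
				 void __user *src, int order, void *opaque)
{
	/* Initialize (e.g. measure/encrypt) the freshly allocated gmem page. */
	return my_arch_init_guest_page(kvm, gfn, pfn, order, opaque);
}

static long my_arch_populate_range(struct kvm *kvm, gfn_t start_gfn,
				   void __user *src, long npages)
{
	/* Returns the number of pages populated, or a negative errno. */
	return kvm_gmem_populate(kvm, start_gfn, src, npages,
				 my_arch_post_populate, NULL);
}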
From patchwork Tue May 13 16:34:25 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889651
Date: Tue, 13 May 2025 17:34:25 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-5-tabba@google.com>
Subject: [PATCH v9 04/17] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org,
    mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com,
    palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com,
    viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org,
    akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com,
    chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com,
    dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net,
    vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
    mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
    wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
    shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
    jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
    jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
    ira.weiny@intel.com, tabba@google.com

The bool has_private_mem is used to indicate whether guest_memfd is
supported. Rename it to supports_gmem to make its meaning clearer and to
decouple memory being private from guest_memfd.

Reviewed-by: Ira Weiny
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h | 4 ++--
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/svm/svm.c          | 4 ++--
 arch/x86/kvm/x86.c              | 3 +--
 4 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4a83fbae7056..709cc2a7ba66 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1331,7 +1331,7 @@ struct kvm_arch {
 	unsigned int indirect_shadow_pages;
 	u8 mmu_valid_gen;
 	u8 vm_type;
-	bool has_private_mem;
+	bool supports_gmem;
 	bool has_protected_state;
 	bool pre_fault_allowed;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
@@ -2254,7 +2254,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 
 #ifdef CONFIG_KVM_GMEM
-#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
+#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
 #else
 #define kvm_arch_supports_gmem(kvm) false
 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b66f1bf24e06..69bf2ef22ed0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3486,7 +3486,7 @@ static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault
 	 * on RET_PF_SPURIOUS until the update completes, or an actual spurious
 	 * case might go down the slow path. Either case will resolve itself.
 	 */
-	if (kvm->arch.has_private_mem &&
+	if (kvm->arch.supports_gmem &&
 	    fault->is_private != kvm_mem_is_private(kvm, fault->gfn))
 		return false;
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a89c271a1951..a05b7dc7b717 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5110,8 +5110,8 @@ static int svm_vm_init(struct kvm *kvm)
 			(type == KVM_X86_SEV_ES_VM || type == KVM_X86_SNP_VM);
 		to_kvm_sev_info(kvm)->need_init = true;
 
-		kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
-		kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
+		kvm->arch.supports_gmem = (type == KVM_X86_SNP_VM);
+		kvm->arch.pre_fault_allowed = !kvm->arch.supports_gmem;
 	}
 
 	if (!pause_filter_count || !pause_filter_thresh)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9896fd574bfc..12433b1e755b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12716,8 +12716,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 		return -EINVAL;
 
 	kvm->arch.vm_type = type;
-	kvm->arch.has_private_mem =
-		(type == KVM_X86_SW_PROTECTED_VM);
+	kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
 	/* Decided by the vendor code for other VM types. */
 	kvm->arch.pre_fault_allowed =
 		type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;

From patchwork Tue May 13 16:34:27 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889650
Date: Tue, 13 May 2025 17:34:27 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-7-tabba@google.com>
Subject: [PATCH v9 06/17] KVM: Fix comments that refer to slots_lock
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
    vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
    david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
    liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
    shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
    jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
    jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
    ira.weiny@intel.com, tabba@google.com

Fix comments so that they refer to slots_lock instead of slots_locks
(remove trailing s).

Reviewed-by: David Hildenbrand
Reviewed-by: Ira Weiny
Signed-off-by: Fuad Tabba
---
 include/linux/kvm_host.h | 2 +-
 virt/kvm/kvm_main.c      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d9616ee6acc7..ae70e4e19700 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -859,7 +859,7 @@ struct kvm {
 	struct notifier_block pm_notifier;
 #endif
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
-	/* Protected by slots_locks (for writes) and RCU (for reads) */
+	/* Protected by slots_lock (for writes) and RCU (for reads) */
 	struct xarray mem_attr_array;
 #endif
 	char stats_id[KVM_STATS_NAME_SIZE];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2468d50a9ed4..6289ea1685dd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -333,7 +333,7 @@ void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
 	 * All current use cases for flushing the TLBs for a specific memslot
 	 * are related to dirty logging, and many do the TLB flush out of
 	 * mmu_lock. The interaction between the various operations on memslot
-	 * must be serialized by slots_locks to ensure the TLB flush from one
+	 * must be serialized by slots_lock to ensure the TLB flush from one
 	 * operation is observed by any other operation on the same memslot.
 	 */
 	lockdep_assert_held(&kvm->slots_lock);

From patchwork Tue May 13 16:34:29 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889649
Date: Tue, 13 May 2025 17:34:29 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-9-tabba@google.com>
Subject: [PATCH v9 08/17] KVM: guest_memfd: Check that userspace_addr and fd+offset refer to same range
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
    vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
    david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
    liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
    shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
    jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
    jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
    ira.weiny@intel.com, tabba@google.com

From: Ackerley Tng

On binding of a guest_memfd with a memslot, check that the slot's
userspace_addr and the requested fd and offset refer to the same memory
range.

This check is best-effort: nothing prevents userspace from later mapping
other memory at the address provided in slot->userspace_addr and breaking
guest operation.

Suggested-by: David Hildenbrand
Suggested-by: Sean Christopherson
Suggested-by: Yan Zhao
Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba
---
 virt/kvm/guest_memfd.c | 37 ++++++++++++++++++++++++++++++++++---
 1 file changed, 34 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8e6d1866b55e..2f499021df66 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -556,6 +556,32 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 	return __kvm_gmem_create(kvm, size, flags);
 }
 
+static bool kvm_gmem_is_same_range(struct kvm *kvm,
+				   struct kvm_memory_slot *slot,
+				   struct file *file, loff_t offset)
+{
+	struct mm_struct *mm = kvm->mm;
+	loff_t userspace_addr_offset;
+	struct vm_area_struct *vma;
+	bool ret = false;
+
+	mmap_read_lock(mm);
+
+	vma = vma_lookup(mm, slot->userspace_addr);
+	if (!vma)
+		goto out;
+
+	if (vma->vm_file != file)
+		goto out;
+
+	userspace_addr_offset = slot->userspace_addr - vma->vm_start;
+	ret = userspace_addr_offset + (vma->vm_pgoff << PAGE_SHIFT) == offset;
+out:
+	mmap_read_unlock(mm);
+
+	return ret;
+}
+
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		  unsigned int fd, loff_t offset)
 {
@@ -585,9 +611,14 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 	    offset + size > i_size_read(inode))
 		goto err;
 
-	if (kvm_gmem_supports_shared(inode) &&
-	    !kvm_arch_vm_supports_gmem_shared_mem(kvm))
-		goto err;
+	if (kvm_gmem_supports_shared(inode)) {
+		if (!kvm_arch_vm_supports_gmem_shared_mem(kvm))
+			goto err;
+
+		if (slot->userspace_addr &&
+		    !kvm_gmem_is_same_range(kvm, slot, file, offset))
+			goto err;
+	}
 
 	filemap_invalidate_lock(inode->i_mapping);
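To see the new check from the userspace side: a minimal sketch of binding a
guest_memfd to a memslot such that kvm_gmem_is_same_range() succeeds, i.e.
the memslot's userspace_addr comes from an mmap() of the same fd at the same
offset. KVM_CREATE_GUEST_MEMFD and KVM_SET_USER_MEMORY_REGION2 are existing
uapi; the GUEST_MEMFD_FLAG_SUPPORT_SHARED name and the ability to mmap() a
guest_memfd are assumptions based on this series, and bind_gmem_slot() is an
illustrative helper.

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

static int bind_gmem_slot(int vm_fd, uint64_t gpa, uint64_t size, uint64_t offset)
{
	struct kvm_create_guest_memfd gmem = {
		.size  = offset + size,
		.flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,	/* assumed flag name */
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
	if (gmem_fd < 0)
		return -1;

	/*
	 * Map the same fd+offset range that the memslot will bind, so the
	 * kernel's best-effort kvm_gmem_is_same_range() check passes.
	 */
	void *uaddr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			   gmem_fd, offset);
	if (uaddr == MAP_FAILED)
		return -1;

	struct kvm_userspace_memory_region2 region = {
		.slot			= 0,
		.flags			= KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr	= gpa,
		.memory_size		= size,
		.userspace_addr		= (uint64_t)(unsigned long)uaddr,
		.guest_memfd		= (uint32_t)gmem_fd,
		.guest_memfd_offset	= offset,
	};
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
}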
From patchwork Tue May 13 16:34:31 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889648
Date: Tue, 13 May 2025 17:34:31 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-11-tabba@google.com>
Subject: [PATCH v9 10/17] KVM: x86: Compute max_mapping_level with input from guest_memfd
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org,
    mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com,
    palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com,
    viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org,
    akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com,
    chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com,
    dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net,
    vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
    mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
    wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
    shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
    jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
    jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
    ira.weiny@intel.com, tabba@google.com

From: Ackerley Tng

This patch adds kvm_gmem_max_mapping_level(), which always returns
PG_LEVEL_4K since guest_memfd only supports 4K pages for now.

When guest_memfd supports shared memory, max_mapping_level (especially when
recovering huge pages - see the call to __kvm_mmu_max_mapping_level() from
recover_huge_pages_range()) should take input from guest_memfd.

Input from guest_memfd should be taken in these cases:

+ if the memslot supports shared memory (guest_memfd is used for shared
  memory, or in future both shared and private memory) or
+ if the memslot is only used for private memory and that gfn is private.

If the memslot doesn't use guest_memfd, figure out the max_mapping_level
using the host page tables like before.

This patch also refactors and inlines the other call to
__kvm_mmu_max_mapping_level(). In kvm_mmu_hugepage_adjust(), guest_memfd's
input is already provided (if applicable) in fault->max_level. Hence, there
is no need to query guest_memfd.

lpage_info is queried like before, and then if the fault is not from
guest_memfd, fault->req_level is adjusted based on input from the host page
tables.
Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba
---
 arch/x86/kvm/mmu/mmu.c   | 92 ++++++++++++++++++++++++++--------------
 include/linux/kvm_host.h |  7 +++
 virt/kvm/guest_memfd.c   | 12 ++++++
 3 files changed, 79 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cfbb471f7c70..9e0bc8114859 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3256,12 +3256,11 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 	return level;
 }
 
-static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
-				       const struct kvm_memory_slot *slot,
-				       gfn_t gfn, int max_level, bool is_private)
+static int kvm_lpage_info_max_mapping_level(struct kvm *kvm,
+					    const struct kvm_memory_slot *slot,
+					    gfn_t gfn, int max_level)
 {
 	struct kvm_lpage_info *linfo;
-	int host_level;
 
 	max_level = min(max_level, max_huge_page_level);
 	for ( ; max_level > PG_LEVEL_4K; max_level--) {
@@ -3270,23 +3269,61 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 			break;
 	}
 
-	if (is_private)
-		return max_level;
+	return max_level;
+}
+
+static inline u8 kvm_max_level_for_order(int order)
+{
+	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
+
+	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
+		return PG_LEVEL_1G;
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
+		return PG_LEVEL_2M;
+
+	return PG_LEVEL_4K;
+}
+
+static inline int kvm_gmem_max_mapping_level(const struct kvm_memory_slot *slot,
+					     gfn_t gfn, int max_level)
+{
+	int max_order;
 
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	host_level = host_pfn_mapping_level(kvm, gfn, slot);
-	return min(host_level, max_level);
+	max_order = kvm_gmem_mapping_order(slot, gfn);
+	return min(max_level, kvm_max_level_for_order(max_order));
 }
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	bool is_private = kvm_slot_has_gmem(slot) &&
-			  kvm_mem_is_private(kvm, gfn);
+	int max_level;
+
+	max_level = kvm_lpage_info_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM);
+	if (max_level == PG_LEVEL_4K)
+		return PG_LEVEL_4K;
 
-	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
+	if (kvm_slot_has_gmem(slot) &&
+	    (kvm_gmem_memslot_supports_shared(slot) ||
+	     kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
+		return kvm_gmem_max_mapping_level(slot, gfn, max_level);
+	}
+
+	return min(max_level, host_pfn_mapping_level(kvm, gfn, slot));
+}
+
+static inline bool fault_from_gmem(struct kvm_page_fault *fault)
+{
+	return fault->is_private ||
+	       (kvm_slot_has_gmem(fault->slot) &&
+		kvm_gmem_memslot_supports_shared(fault->slot));
 }
 
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -3309,12 +3346,20 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	 * Enforce the iTLB multihit workaround after capturing the requested
 	 * level, which will be used to do precise, accurate accounting.
 	 */
-	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
-						       fault->gfn, fault->max_level,
-						       fault->is_private);
+	fault->req_level = kvm_lpage_info_max_mapping_level(vcpu->kvm, slot,
+							    fault->gfn, fault->max_level);
 	if (fault->req_level == PG_LEVEL_4K ||
 	    fault->huge_page_disallowed)
 		return;
 
+	if (!fault_from_gmem(fault)) {
+		int host_level;
+
+		host_level = host_pfn_mapping_level(vcpu->kvm, fault->gfn, slot);
+		fault->req_level = min(fault->req_level, host_level);
+		if (fault->req_level == PG_LEVEL_4K)
+			return;
+	}
+
 	/*
 	 * mmu_invalidate_retry() was successful and mmu_lock is held, so
 	 * the pmd can't be split from under us.
@@ -4448,23 +4493,6 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	vcpu->stat.pf_fixed++;
 }
 
-static inline u8 kvm_max_level_for_order(int order)
-{
-	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
-
-	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
-			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
-			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
-
-	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
-		return PG_LEVEL_1G;
-
-	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
-		return PG_LEVEL_2M;
-
-	return PG_LEVEL_4K;
-}
-
 static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm,
 					    struct kvm_page_fault *fault,
 					    int order)
@@ -4523,7 +4551,7 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 {
 	unsigned int foll = fault->write ? FOLL_WRITE : 0;
 
-	if (fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot))
+	if (fault_from_gmem(fault))
 		return kvm_mmu_faultin_pfn_gmem(vcpu, fault);
 
 	foll |= FOLL_NOWAIT;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index de7b46ee1762..f9bb025327c3 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2560,6 +2560,7 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
 		     int *max_order);
+int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot, gfn_t gfn);
 #else
 static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
@@ -2569,6 +2570,12 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 	KVM_BUG_ON(1, kvm);
 	return -EIO;
 }
+static inline int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot,
+					 gfn_t gfn)
+{
+	BUG();
+	return 0;
+}
 #endif /* CONFIG_KVM_GMEM */
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index fe0245335c96..b8e247063b20 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -774,6 +774,18 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
+/**
+ * Returns the mapping order for this @gfn in @slot.
+ *
+ * This is equal to max_order that would be returned if kvm_gmem_get_pfn() were
+ * called now.
+ */
+int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_mapping_order);
+
 #ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque)
 {
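To make the order-to-level conversion above concrete: a standalone model of
kvm_max_level_for_order() under x86's 4 KiB base pages, where a 2M mapping
covers 2^9 base pages and a 1G mapping 2^18. The enum and helper names here
are illustrative, not the kernel's, and the kernel version additionally
warns on orders that match no level.

/* Standalone sketch; LVL_* stand in for PG_LEVEL_*. */
enum level { LVL_4K = 1, LVL_2M = 2, LVL_1G = 3 };

static enum level max_level_for_order(int order)
{
	if (order >= 18)	/* KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) on x86 */
		return LVL_1G;
	if (order >= 9)		/* KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) on x86 */
		return LVL_2M;
	return LVL_4K;
}

/*
 * While kvm_gmem_mapping_order() still returns 0, every gmem-backed fault
 * computes max_level_for_order(0) == LVL_4K, i.e. PG_LEVEL_4K, matching the
 * commit message's statement that guest_memfd only supports 4K pages for now.
 */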
From patchwork Tue May 13 16:34:33 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889647
Date: Tue, 13 May 2025 17:34:33 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-13-tabba@google.com>
Subject: [PATCH v9 12/17] KVM: arm64: Rename variables in user_mem_abort()
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
    vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
    david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
    liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
    shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
    jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
    jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
    ira.weiny@intel.com, tabba@google.com

Guest memory can be backed by guest_memfd or by anonymous memory. Rename
vma_shift to page_shift and vma_pagesize to page_size to ease readability
in subsequent patches.
Suggested-by: James Houghton
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/mmu.c | 54 ++++++++++++++++++++++----------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9865ada04a81..d756c2b5913f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1479,13 +1479,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
 	struct vm_area_struct *vma;
-	short vma_shift;
+	short page_shift;
 	void *memcache;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
 	bool force_pte = logging_active || is_protected_kvm_enabled();
-	long vma_pagesize, fault_granule;
+	long page_size, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
@@ -1538,11 +1538,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	if (force_pte)
-		vma_shift = PAGE_SHIFT;
+		page_shift = PAGE_SHIFT;
 	else
-		vma_shift = get_vma_page_shift(vma, hva);
+		page_shift = get_vma_page_shift(vma, hva);
 
-	switch (vma_shift) {
+	switch (page_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
 		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
@@ -1550,23 +1550,23 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		fallthrough;
 #endif
 	case CONT_PMD_SHIFT:
-		vma_shift = PMD_SHIFT;
+		page_shift = PMD_SHIFT;
 		fallthrough;
 	case PMD_SHIFT:
 		if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE))
 			break;
 		fallthrough;
 	case CONT_PTE_SHIFT:
-		vma_shift = PAGE_SHIFT;
+		page_shift = PAGE_SHIFT;
 		force_pte = true;
 		fallthrough;
 	case PAGE_SHIFT:
 		break;
 	default:
-		WARN_ONCE(1, "Unknown vma_shift %d", vma_shift);
+		WARN_ONCE(1, "Unknown page_shift %d", page_shift);
 	}
 
-	vma_pagesize = 1UL << vma_shift;
+	page_size = 1UL << page_shift;
 
 	if (nested) {
 		unsigned long max_map_size;
@@ -1592,7 +1592,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			max_map_size = PAGE_SIZE;
 
 		force_pte = (max_map_size == PAGE_SIZE);
-		vma_pagesize = min(vma_pagesize, (long)max_map_size);
+		page_size = min_t(long, page_size, max_map_size);
 	}
 
 	/*
@@ -1600,9 +1600,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * ensure we find the right PFN and lay down the mapping in the right
 	 * place.
 	 */
-	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) {
-		fault_ipa &= ~(vma_pagesize - 1);
-		ipa &= ~(vma_pagesize - 1);
+	if (page_size == PMD_SIZE || page_size == PUD_SIZE) {
+		fault_ipa &= ~(page_size - 1);
+		ipa &= ~(page_size - 1);
 	}
 
 	gfn = ipa >> PAGE_SHIFT;
@@ -1627,7 +1627,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
 				&writable, &page);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
-		kvm_send_hwpoison_signal(hva, vma_shift);
+		kvm_send_hwpoison_signal(hva, page_shift);
 		return 0;
 	}
 	if (is_error_noslot_pfn(pfn))
@@ -1636,9 +1636,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (kvm_is_device_pfn(pfn)) {
 		/*
 		 * If the page was identified as device early by looking at
-		 * the VMA flags, vma_pagesize is already representing the
+		 * the VMA flags, page_size is already representing the
 		 * largest quantity we can map.  If instead it was mapped
-		 * via __kvm_faultin_pfn(), vma_pagesize is set to PAGE_SIZE
+		 * via __kvm_faultin_pfn(), page_size is set to PAGE_SIZE
 		 * and must not be upgraded.
 		 *
 		 * In both cases, we don't let transparent_hugepage_adjust()
@@ -1686,16 +1686,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * If we are not forced to use page mapping, check if we are
 	 * backed by a THP and thus use block mapping if possible.
 	 */
-	if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {
+	if (page_size == PAGE_SIZE && !(force_pte || device)) {
 		if (fault_is_perm && fault_granule > PAGE_SIZE)
-			vma_pagesize = fault_granule;
+			page_size = fault_granule;
 		else
-			vma_pagesize = transparent_hugepage_adjust(kvm, memslot,
-								   hva, &pfn,
-								   &fault_ipa);
+			page_size = transparent_hugepage_adjust(kvm, memslot,
+								hva, &pfn,
+								&fault_ipa);
 
-		if (vma_pagesize < 0) {
-			ret = vma_pagesize;
+		if (page_size < 0) {
+			ret = page_size;
 			goto out_unlock;
 		}
 	}
@@ -1703,7 +1703,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new disallowed VMA */
 		if (mte_allowed) {
-			sanitise_mte_tags(kvm, pfn, vma_pagesize);
+			sanitise_mte_tags(kvm, pfn, page_size);
 		} else {
 			ret = -EFAULT;
 			goto out_unlock;
@@ -1728,10 +1728,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	/*
 	 * Under the premise of getting a FSC_PERM fault, we just need to relax
-	 * permissions only if vma_pagesize equals fault_granule. Otherwise,
+	 * permissions only if page_size equals fault_granule. Otherwise,
 	 * kvm_pgtable_stage2_map() should be called to change block size.
 	 */
-	if (fault_is_perm && vma_pagesize == fault_granule) {
+	if (fault_is_perm && page_size == fault_granule) {
 		/*
 		 * Drop the SW bits in favour of those stored in the
 		 * PTE, which will be preserved.
@@ -1739,7 +1739,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
 		ret = KVM_PGT_FN(kvm_pgtable_stage2_relax_perms)(pgt, fault_ipa, prot, flags);
 	} else {
-		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, vma_pagesize,
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, page_size,
 					     __pfn_to_phys(pfn), prot,
 					     memcache, flags);
 	}

From patchwork Tue May 13 16:34:35 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889646
From patchwork Tue May 13 16:34:35 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889646
Date: Tue, 13 May 2025 17:34:35 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
References: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-15-tabba@google.com>
Subject: [PATCH v9 14/17] KVM: arm64: Enable mapping guest_memfd in arm64
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Enable mapping guest_memfd in arm64. For now, this applies to all VMs on
arm64 that use guest_memfd. In the future, new VM types can restrict this
via kvm_arch_vm_supports_gmem_shared_mem().

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_host.h | 10 ++++++++++
 arch/arm64/kvm/Kconfig            |  1 +
 2 files changed, 11 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 08ba91e6fb03..2514779f5131 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1593,4 +1593,14 @@ static inline bool kvm_arch_has_irq_bypass(void)
 	return true;
 }

+static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
+{
+	return IS_ENABLED(CONFIG_KVM_GMEM);
+}
+
+static inline bool kvm_arch_vm_supports_gmem_shared_mem(struct kvm *kvm)
+{
+	return IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM);
+}
+
 #endif /* __ARM64_KVM_HOST_H__ */

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 096e45acadb2..8c1e1964b46a 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -38,6 +38,7 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
+	select KVM_GMEM_SHARED_MEM
 	help
 	  Support hosting virtualized guest machines.
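For context, the userspace flow this enables looks roughly as follows; the
sketch mirrors the selftest added later in this series. KVM_CREATE_GUEST_MEMFD
and struct kvm_create_guest_memfd are existing KVM uapi (Linux 6.8+), but
GUEST_MEMFD_FLAG_SUPPORT_SHARED is introduced by this series, so its value is
assumed below and error handling is abbreviated:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/kvm.h>

#ifndef GUEST_MEMFD_FLAG_SUPPORT_SHARED
#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1UL << 0)	/* assumed value; defined by this series */
#endif

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);		/* default VM type */
	struct kvm_create_guest_memfd gmem = {
		.size  = 4ul * 4096,
		.flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
	};
	int gfd = ioctl(vm, KVM_CREATE_GUEST_MEMFD, &gmem);

	if (kvm < 0 || vm < 0 || gfd < 0) {
		perror("setup");
		return 1;
	}

	/* With this patch, arm64 VMs can mmap() guest_memfd memory. */
	void *mem = mmap(NULL, (size_t)gmem.size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, gfd, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");	/* expected on kernels without this series */
		return 1;
	}
	memset(mem, 0xaa, (size_t)gmem.size);
	munmap(mem, (size_t)gmem.size);
	return 0;
}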
From patchwork Tue May 13 16:34:37 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889645
Date: Tue, 13 May 2025 17:34:37 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
References: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-17-tabba@google.com>
Subject: [PATCH v9 16/17] KVM: selftests: guest_memfd mmap() test when mapping is allowed
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Expand the guest_memfd selftests to cover mapping guest memory for VM
types that support it. Also, build the guest_memfd selftest for arm64.
Co-developed-by: Ackerley Tng
Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba
---
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 .../testing/selftests/kvm/guest_memfd_test.c  | 145 +++++++++++++++---
 2 files changed, 126 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index f62b0a5aba35..ccf95ed037c3 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
 TEST_GEN_PROGS_arm64 += arch_timer
 TEST_GEN_PROGS_arm64 += coalesced_io_test
 TEST_GEN_PROGS_arm64 += dirty_log_perf_test
+TEST_GEN_PROGS_arm64 += guest_memfd_test
 TEST_GEN_PROGS_arm64 += get-reg-list
 TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
 TEST_GEN_PROGS_arm64 += memslot_perf_test

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ce687f8d248f..443c49185543 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
 		    "pwrite on a guest_mem fd should fail");
 }

-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
+{
+	const char val = 0xaa;
+	char *mem;
+	size_t i;
+	int ret;
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap() guest memory should pass.");
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+			page_size);
+	TEST_ASSERT(!ret, "fallocate the first page should succeed");
+
+	for (i = 0; i < page_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0x00);
+	for (; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = munmap(mem, total_size);
+	TEST_ASSERT(!ret, "munmap should succeed");
+}
+
+static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
 {
 	char *mem;

 	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 	TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT_EQ(mem, MAP_FAILED);
 }

 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
 	}
 }

-static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
+static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
+						  uint64_t guest_memfd_flags,
+						  size_t page_size)
 {
-	size_t page_size = getpagesize();
-	uint64_t flag;
 	size_t size;
 	int fd;

 	for (size = 1; size < page_size; size++) {
-		fd = __vm_create_guest_memfd(vm, size, 0);
+		fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
 		TEST_ASSERT(fd == -1 && errno == EINVAL,
 			    "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
 			    size);
 	}
-
-	for (flag = BIT(0); flag; flag <<= 1) {
-		fd = __vm_create_guest_memfd(vm, page_size, flag);
-		TEST_ASSERT(fd == -1 && errno == EINVAL,
-			    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
-			    flag);
-	}
 }

 static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
@@ -170,30 +197,108 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }

-int main(int argc, char *argv[])
+static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
+			   bool expect_mmap_allowed)
 {
-	size_t page_size;
+	struct kvm_vm *vm;
 	size_t total_size;
+	size_t page_size;
 	int fd;
-	struct kvm_vm *vm;

-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+	if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type)))
+		return;

 	page_size = getpagesize();
 	total_size = page_size * 4;

-	vm = vm_create_barebones();
+	vm = vm_create_barebones_type(vm_type);

-	test_create_guest_memfd_invalid(vm);
 	test_create_guest_memfd_multiple(vm);
+	test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);

-	fd = vm_create_guest_memfd(vm, total_size, 0);
+	fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);

 	test_file_read_write(fd);
-	test_mmap(fd, page_size);
+
+	if (expect_mmap_allowed)
+		test_mmap_allowed(fd, page_size, total_size);
+	else
+		test_mmap_denied(fd, page_size, total_size);
+
 	test_file_size(fd, page_size, total_size);
 	test_fallocate(fd, page_size, total_size);
 	test_invalid_punch_hole(fd, page_size, total_size);

 	close(fd);
+	kvm_vm_release(vm);
+}
+
+static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
+					    uint64_t expected_valid_flags)
+{
+	size_t page_size = getpagesize();
+	struct kvm_vm *vm;
+	uint64_t flag = 0;
+	int fd;
+
+	if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type)))
+		return;
+
+	vm = vm_create_barebones_type(vm_type);
+
+	for (flag = BIT(0); flag; flag <<= 1) {
+		fd = __vm_create_guest_memfd(vm, page_size, flag);
+
+		if (flag & expected_valid_flags) {
+			TEST_ASSERT(fd > 0,
+				    "guest_memfd() with flag '0x%lx' should be valid",
+				    flag);
+			close(fd);
+		} else {
+			TEST_ASSERT(fd == -1 && errno == EINVAL,
+				    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
+				    flag);
+		}
+	}
+
+	kvm_vm_release(vm);
+}
+
+static void test_gmem_flag_validity(void)
+{
+	uint64_t non_coco_vm_valid_flags = 0;
+
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
+		non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+
+	test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
+
+#ifdef __x86_64__
+	test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
+#endif
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+
+	test_gmem_flag_validity();
+
+	test_with_type(VM_TYPE_DEFAULT, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+		test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
+			       true);
+	}
+
+#ifdef __x86_64__
+	test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+		test_with_type(KVM_X86_SW_PROTECTED_VM,
+			       GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
+	}
+#endif
 }
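One typical way to exercise the new coverage on a host carrying this series
(the exact make invocation can vary by tree):

  $ make -C tools/testing/selftests/kvm
  $ ./tools/testing/selftests/kvm/guest_memfd_test

On kernels without KVM_CAP_GUEST_MEMFD the test skips via TEST_REQUIRE(); the
mmap()-allowed cases run only when KVM_CAP_GMEM_SHARED_MEM is reported, so the
same binary degrades to the mmap()-denied expectations on older hosts.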