From patchwork Fri May 16 19:19:22 2025
X-Patchwork-Submitter: Ryan Afranji
X-Patchwork-Id: 890759
Date: Fri, 16 May 2025 19:19:22 +0000
Message-ID: <754b4898c3362050071f6dd09deb24f3c92a41c3.1747368092.git.afranji@google.com>
Subject: [RFC PATCH v2 02/13] KVM: guest_memfd: Make guest mem use guest mem inodes instead of anonymous inodes
From: Ryan Afranji
To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com
Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com

From: Ackerley Tng

Using guest mem inodes allows us to store metadata for the backing memory
on the inode. Metadata will be added in a later patch to support HugeTLB
pages.

Metadata about backing memory should not be stored on the file, since the
file represents a guest_memfd's binding with a struct kvm, and metadata
about backing memory is not unique to a specific binding and struct kvm.
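As an aside (a sketch only, not part of the diff below; kvm_gmem_inode_flags()
is a made-up accessor used purely for illustration): per-binding state already
lives on the file via file->private_data, while properties of the backing
memory hang off the shared inode, e.g. the creation flags stashed in
i_private.

    /* One kvm_gmem per guest_memfd file, i.e. per binding to a struct kvm. */
    struct kvm_gmem {
            struct kvm *kvm;          /* the VM this binding belongs to */
            struct xarray bindings;   /* gfn ranges bound through this file */
            struct list_head entry;   /* link in i_mapping->i_private_list */
    };

    /* Backing-memory properties are read back from the inode, not the file. */
    static u64 kvm_gmem_inode_flags(const struct inode *inode)
    {
            return (u64)(unsigned long)inode->i_private;
    }

Several files can then share one inode, so metadata that describes the memory
itself (such as the HugeTLB metadata mentioned above) has a single home.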
Signed-off-by: Fuad Tabba
Signed-off-by: Ackerley Tng
---
 include/uapi/linux/magic.h |   1 +
 virt/kvm/guest_memfd.c     | 132 +++++++++++++++++++++++++++++++------
 virt/kvm/kvm_main.c        |   7 +-
 virt/kvm/kvm_mm.h          |   9 ++-
 4 files changed, 124 insertions(+), 25 deletions(-)

diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index bb575f3ab45e..169dba2a6920 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -103,5 +103,6 @@
 #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
 #define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
 #define PID_FS_MAGIC		0x50494446	/* "PIDF" */
+#define GUEST_MEMORY_MAGIC	0x474d454d	/* "GMEM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b2aa6bf24d3a..2ee26695dc31 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1,12 +1,16 @@
 // SPDX-License-Identifier: GPL-2.0
+#include
 #include
 #include
 #include
+#include
 #include
 #include
 
 #include "kvm_mm.h"
 
+static struct vfsmount *kvm_gmem_mnt;
+
 struct kvm_gmem {
 	struct kvm *kvm;
 	struct xarray bindings;
@@ -318,9 +322,51 @@ static struct file_operations kvm_gmem_fops = {
 	.fallocate = kvm_gmem_fallocate,
 };
 
-void kvm_gmem_init(struct module *module)
+static const struct super_operations kvm_gmem_super_operations = {
+	.statfs = simple_statfs,
+};
+
+static int kvm_gmem_init_fs_context(struct fs_context *fc)
+{
+	struct pseudo_fs_context *ctx;
+
+	if (!init_pseudo(fc, GUEST_MEMORY_MAGIC))
+		return -ENOMEM;
+
+	ctx = fc->fs_private;
+	ctx->ops = &kvm_gmem_super_operations;
+
+	return 0;
+}
+
+static struct file_system_type kvm_gmem_fs = {
+	.name = "kvm_guest_memory",
+	.init_fs_context = kvm_gmem_init_fs_context,
+	.kill_sb = kill_anon_super,
+};
+
+static int kvm_gmem_init_mount(void)
+{
+	kvm_gmem_mnt = kern_mount(&kvm_gmem_fs);
+
+	if (WARN_ON_ONCE(IS_ERR(kvm_gmem_mnt)))
+		return PTR_ERR(kvm_gmem_mnt);
+
+	kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC;
+	return 0;
+}
+
+int kvm_gmem_init(struct module *module)
 {
 	kvm_gmem_fops.owner = module;
+
+	return kvm_gmem_init_mount();
+}
+
+void kvm_gmem_exit(void)
+{
+	kern_unmount(kvm_gmem_mnt);
+	kvm_gmem_mnt = NULL;
 }
 
 static int kvm_gmem_migrate_folio(struct address_space *mapping,
@@ -402,11 +448,71 @@ static const struct inode_operations kvm_gmem_iops = {
 	.setattr = kvm_gmem_setattr,
 };
 
+static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
+						      loff_t size, u64 flags)
+{
+	struct inode *inode;
+
+	inode = alloc_anon_secure_inode(kvm_gmem_mnt->mnt_sb, name);
+	if (IS_ERR(inode))
+		return inode;
+
+	inode->i_private = (void *)(unsigned long)flags;
+	inode->i_op = &kvm_gmem_iops;
+	inode->i_mapping->a_ops = &kvm_gmem_aops;
+	inode->i_mode |= S_IFREG;
+	inode->i_size = size;
+	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+	mapping_set_inaccessible(inode->i_mapping);
+	/* Unmovable mappings are supposed to be marked unevictable as well.
+	 */
+	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
+
+	return inode;
+}
+
+static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
+						  u64 flags)
+{
+	static const char *name = "[kvm-gmem]";
+	struct inode *inode;
+	struct file *file;
+	int err;
+
+	err = -ENOENT;
+	if (!try_module_get(kvm_gmem_fops.owner))
+		goto err;
+
+	inode = kvm_gmem_inode_make_secure_inode(name, size, flags);
+	if (IS_ERR(inode)) {
+		err = PTR_ERR(inode);
+		goto err_put_module;
+	}
+
+	file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR,
+				 &kvm_gmem_fops);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_put_inode;
+	}
+
+	file->f_flags |= O_LARGEFILE;
+	file->private_data = priv;
+
+out:
+	return file;
+
+err_put_inode:
+	iput(inode);
+err_put_module:
+	module_put(kvm_gmem_fops.owner);
+err:
+	file = ERR_PTR(err);
+	goto out;
+}
+
 static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 {
-	const char *anon_name = "[kvm-gmem]";
 	struct kvm_gmem *gmem;
-	struct inode *inode;
 	struct file *file;
 	int fd, err;
 
@@ -420,32 +526,16 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 		goto err_fd;
 	}
 
-	file = anon_inode_create_getfile(anon_name, &kvm_gmem_fops, gmem,
-					 O_RDWR, NULL);
+	file = kvm_gmem_inode_create_getfile(gmem, size, flags);
 	if (IS_ERR(file)) {
 		err = PTR_ERR(file);
 		goto err_gmem;
 	}
 
-	file->f_flags |= O_LARGEFILE;
-
-	inode = file->f_inode;
-	WARN_ON(file->f_mapping != inode->i_mapping);
-
-	inode->i_private = (void *)(unsigned long)flags;
-	inode->i_op = &kvm_gmem_iops;
-	inode->i_mapping->a_ops = &kvm_gmem_aops;
-	inode->i_mode |= S_IFREG;
-	inode->i_size = size;
-	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
-	mapping_set_inaccessible(inode->i_mapping);
-	/* Unmovable mappings are supposed to be marked unevictable as well.
-	 */
-	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
-
 	kvm_get_kvm(kvm);
 	gmem->kvm = kvm;
 	xa_init(&gmem->bindings);
-	list_add(&gmem->entry, &inode->i_mapping->i_private_list);
+	list_add(&gmem->entry, &file_inode(file)->i_mapping->i_private_list);
 
 	fd_install(fd, file);
 	return fd;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 69782df3617f..1e3fd81868bc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -6412,7 +6412,9 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
 	if (WARN_ON_ONCE(r))
 		goto err_vfio;
 
-	kvm_gmem_init(module);
+	r = kvm_gmem_init(module);
+	if (r)
+		goto err_gmem;
 
 	r = kvm_init_virtualization();
 	if (r)
@@ -6433,6 +6435,8 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
 err_register:
 	kvm_uninit_virtualization();
 err_virt:
+	kvm_gmem_exit();
+err_gmem:
 	kvm_vfio_ops_exit();
 err_vfio:
 	kvm_async_pf_deinit();
@@ -6464,6 +6468,7 @@ void kvm_exit(void)
 	for_each_possible_cpu(cpu)
 		free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
 	kmem_cache_destroy(kvm_vcpu_cache);
+	kvm_gmem_exit();
 	kvm_vfio_ops_exit();
 	kvm_async_pf_deinit();
 	kvm_irqfd_exit();
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index acef3f5c582a..dcacb76b8f00 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -68,17 +68,20 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
 #endif /* HAVE_KVM_PFNCACHE */
 
 #ifdef CONFIG_KVM_PRIVATE_MEM
-void kvm_gmem_init(struct module *module);
+int kvm_gmem_init(struct module *module);
+void kvm_gmem_exit(void);
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		  unsigned int fd, loff_t offset);
 void kvm_gmem_unbind(struct kvm_memory_slot *slot);
 #else
-static inline void kvm_gmem_init(struct module *module)
+static inline int kvm_gmem_init(struct module *module)
 {
-
+	return 0;
 }
 
+static inline void kvm_gmem_exit(void) {};
+
 static inline int kvm_gmem_bind(struct kvm *kvm,
 				struct kvm_memory_slot *slot,
 				unsigned int fd, loff_t offset)

From patchwork Fri May 16 19:19:24 2025
X-Patchwork-Submitter: Ryan Afranji
X-Patchwork-Id: 890758
Date: Fri, 16 May 2025 19:19:24 +0000
Subject: [RFC PATCH v2 04/13] KVM: guest_mem: Add ioctl KVM_LINK_GUEST_MEMFD
From: Ryan Afranji
To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com
Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com,
 liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com

From: Ackerley Tng

KVM_LINK_GUEST_MEMFD will link a gmem fd's underlying inode to a new file
(and fd).

Signed-off-by: Ackerley Tng
Co-developed-by: Ryan Afranji
Signed-off-by: Ryan Afranji
---
 include/uapi/linux/kvm.h |  8 ++++++
 virt/kvm/guest_memfd.c   | 57 ++++++++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c      | 10 +++++++
 virt/kvm/kvm_mm.h        |  7 +++++
 4 files changed, 82 insertions(+)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index c6988e2c68d5..8f17f0b462aa 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1583,4 +1583,12 @@ struct kvm_pre_fault_memory {
 	__u64 padding[5];
 };
 
+#define KVM_LINK_GUEST_MEMFD	_IOWR(KVMIO, 0xd6, struct kvm_link_guest_memfd)
+
+struct kvm_link_guest_memfd {
+	__u64 fd;
+	__u64 flags;
+	__u64 reserved[6];
+};
+
 #endif /* __LINUX_KVM_H */
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index a3918d1695b9..d76bd1119198 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -555,6 +555,63 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 	return __kvm_gmem_create(kvm, size, flags);
 }
 
+int kvm_gmem_link(struct kvm *kvm, struct kvm_link_guest_memfd *args)
+{
+	static const char *name = "[kvm-gmem]";
+	u64 flags = args->flags;
+	u64 valid_flags = 0;
+	struct file *dst_file, *src_file;
+	struct kvm_gmem *gmem;
+	struct timespec64 ts;
+	struct inode *inode;
+	struct fd f;
+	int ret, fd;
+
+	if (flags & ~valid_flags)
+		return -EINVAL;
+
+	f = fdget(args->fd);
+	src_file = fd_file(f);
+	if (!src_file)
+		return -EINVAL;
+
+	ret = -EINVAL;
+	if (src_file->f_op != &kvm_gmem_fops)
+		goto out;
+
+	/* Cannot link a gmem file with the same vm again */
+	gmem = src_file->private_data;
+	if (gmem->kvm == kvm)
+		goto out;
+
+	ret = fd = get_unused_fd_flags(0);
+	if (ret < 0)
+		goto out;
+
+	inode = file_inode(src_file);
+	dst_file = kvm_gmem_alloc_view(kvm, inode, name);
+	if (IS_ERR(dst_file)) {
+		ret = PTR_ERR(dst_file);
+		goto out_fd;
+	}
+
+	ts = inode_set_ctime_current(inode);
+	inode_set_atime_to_ts(inode, ts);
+
+	inc_nlink(inode);
+	ihold(inode);
+
+	fd_install(fd, dst_file);
+	fdput(f);
+	return fd;
+
+out_fd:
+	put_unused_fd(fd);
+out:
+	fdput(f);
+	return ret;
+}
+
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		  unsigned int fd, loff_t offset)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1e3fd81868bc..a9b01841a243 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -5285,6 +5285,16 @@ static long kvm_vm_ioctl(struct file *filp,
 		r = kvm_gmem_create(kvm, &guest_memfd);
 		break;
 	}
+	case KVM_LINK_GUEST_MEMFD: {
+		struct kvm_link_guest_memfd params;
+
+		r = -EFAULT;
+		if (copy_from_user(&params, argp, sizeof(params)))
+			goto out;
+
+		r = kvm_gmem_link(kvm, &params);
+		break;
+	}
 #endif
 	default:
 		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index dcacb76b8f00..85baf8a7e0de 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -71,6 +71,7 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
 int kvm_gmem_init(struct module *module);
 void kvm_gmem_exit(void);
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
+int kvm_gmem_link(struct kvm *kvm, struct kvm_link_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		  unsigned int fd, loff_t offset);
 void kvm_gmem_unbind(struct kvm_memory_slot *slot);
@@ -82,6 +83,12 @@ static inline int kvm_gmem_init(struct module *module)
 
 static inline void kvm_gmem_exit(void) {};
 
+static inline int kvm_gmem_link(struct kvm *kvm,
+				struct kvm_link_guest_memfd *args)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 				unsigned int fd, loff_t offset)
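For a sense of how the new ioctl might be driven from userspace (a sketch
only, not part of this series; it assumes a kernel carrying the uapi
additions above, and error handling is elided):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* src_gmem_fd came from KVM_CREATE_GUEST_MEMFD on the source VM;
     * dst_vm_fd is the destination VM's fd. On success, this returns a new
     * guest_memfd fd that shares the source fd's inode, and therefore its
     * backing memory.
     */
    static int link_guest_memfd(int dst_vm_fd, int src_gmem_fd)
    {
            struct kvm_link_guest_memfd args;

            memset(&args, 0, sizeof(args));
            args.fd = src_gmem_fd;
            args.flags = 0;           /* no flags are defined yet */

            return ioctl(dst_vm_fd, KVM_LINK_GUEST_MEMFD, &args);
    }

The returned fd can then be passed to memslot setup on the destination VM,
which is exactly what the selftest in patch 06 below does via its
vm_link_guest_memfd() helper.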
From patchwork Fri May 16 19:19:26 2025
X-Patchwork-Submitter: Ryan Afranji
X-Patchwork-Id: 890757
Date: Fri, 16 May 2025 19:19:26 +0000
Subject: [RFC PATCH v2 06/13] KVM: selftests: Test transferring private memory to another VM
From: Ryan Afranji
To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com
Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com

From: Ackerley Tng

Signed-off-by: Ackerley Tng
Signed-off-by: Ryan Afranji
---
 .../kvm/x86/private_mem_migrate_tests.c       | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c

diff --git a/tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c b/tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c
new file mode 100644
index 000000000000..4226de3ebd41
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "kvm_util_base.h"
+#include "test_util.h"
+#include "ucall_common.h"
+#include
+#include
+
+#define TRANSFER_PRIVATE_MEM_TEST_SLOT	10
+#define TRANSFER_PRIVATE_MEM_GPA	((uint64_t)(1ull << 32))
+#define TRANSFER_PRIVATE_MEM_GVA	TRANSFER_PRIVATE_MEM_GPA
+#define TRANSFER_PRIVATE_MEM_VALUE	0xdeadbeef
+
+static void transfer_private_mem_guest_code_src(void)
+{
+	uint64_t volatile *const ptr = (uint64_t *)TRANSFER_PRIVATE_MEM_GVA;
+
+	*ptr = TRANSFER_PRIVATE_MEM_VALUE;
+
+	GUEST_SYNC1(*ptr);
+}
+
+static void transfer_private_mem_guest_code_dst(void)
+{
+	uint64_t volatile *const ptr = (uint64_t *)TRANSFER_PRIVATE_MEM_GVA;
+
+	GUEST_SYNC1(*ptr);
+}
+
+static void test_transfer_private_mem(void)
+{
+	struct kvm_vm *src_vm, *dst_vm;
+	struct kvm_vcpu *src_vcpu, *dst_vcpu;
+	int src_memfd, dst_memfd;
+	struct ucall uc;
+
+	const struct vm_shape shape = {
+		.mode = VM_MODE_DEFAULT,
+		.type = KVM_X86_SW_PROTECTED_VM,
+	};
+
+	/* Build the source VM, use it to write to private memory */
+	src_vm = __vm_create_shape_with_one_vcpu(
+		shape, &src_vcpu, 0, transfer_private_mem_guest_code_src);
+	src_memfd = vm_create_guest_memfd(src_vm, SZ_4K, 0);
+
+	vm_mem_add(src_vm, DEFAULT_VM_MEM_SRC, TRANSFER_PRIVATE_MEM_GPA,
+		   TRANSFER_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_PRIVATE,
+		   src_memfd, 0);
+
+	virt_map(src_vm, TRANSFER_PRIVATE_MEM_GVA, TRANSFER_PRIVATE_MEM_GPA, 1);
+	vm_set_memory_attributes(src_vm, TRANSFER_PRIVATE_MEM_GPA, SZ_4K,
+				 KVM_MEMORY_ATTRIBUTE_PRIVATE);
+
+	vcpu_run(src_vcpu);
+	TEST_ASSERT_KVM_EXIT_REASON(src_vcpu, KVM_EXIT_IO);
+	get_ucall(src_vcpu, &uc);
+	TEST_ASSERT(uc.args[0] == TRANSFER_PRIVATE_MEM_VALUE,
+		    "Source VM should be able to write to private memory");
+
+	/* Build the destination VM with linked fd */
+	dst_vm = __vm_create_shape_with_one_vcpu(
+		shape, &dst_vcpu, 0, transfer_private_mem_guest_code_dst);
+	dst_memfd = vm_link_guest_memfd(dst_vm, src_memfd, 0);
+
+	vm_mem_add(dst_vm, DEFAULT_VM_MEM_SRC, TRANSFER_PRIVATE_MEM_GPA,
+		   TRANSFER_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_PRIVATE,
+		   dst_memfd, 0);
+
+	virt_map(dst_vm, TRANSFER_PRIVATE_MEM_GVA, TRANSFER_PRIVATE_MEM_GPA, 1);
+	vm_set_memory_attributes(dst_vm, TRANSFER_PRIVATE_MEM_GPA, SZ_4K,
+				 KVM_MEMORY_ATTRIBUTE_PRIVATE);
+
+	vcpu_run(dst_vcpu);
+	TEST_ASSERT_KVM_EXIT_REASON(dst_vcpu, KVM_EXIT_IO);
+	get_ucall(dst_vcpu, &uc);
+	TEST_ASSERT(uc.args[0] == TRANSFER_PRIVATE_MEM_VALUE,
+		    "Destination VM should be able to read value transferred");
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));
+
+	test_transfer_private_mem();
+
+	return 0;
+}

From patchwork Fri May 16 19:19:28 2025
X-Patchwork-Submitter: Ryan Afranji
X-Patchwork-Id: 890756
Date: Fri, 16 May 2025 19:19:28 +0000
Subject: [RFC PATCH v2 08/13] KVM: x86: Refactor common code out of sev.c
From: Ryan Afranji
To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com
Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com

From: Ackerley Tng

Split sev_lock_two_vms() into kvm_mark_migration_in_progress() and
kvm_lock_two_vms() and refactor sev.c to use these two new functions.
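The resulting calling convention, sketched from the sev.c hunks below
(example_move_enc_context() is a made-up name used only for illustration):

    /* Hypothetical caller, mirroring sev_vm_move_enc_context_from(). */
    static int example_move_enc_context(struct kvm *dst_kvm, struct kvm *src_kvm)
    {
            int ret;

            ret = kvm_mark_migration_in_progress(dst_kvm, src_kvm);
            if (ret)
                    return ret;

            ret = kvm_lock_two_vms(dst_kvm, src_kvm);
            if (ret)
                    goto out_mark_migration_done;

            /* ... move state from src_kvm to dst_kvm under both locks ... */

            kvm_unlock_two_vms(dst_kvm, src_kvm);
    out_mark_migration_done:
            kvm_mark_migration_done(dst_kvm, src_kvm);
            return ret;
    }

Splitting the marker from the lock lets callers that only need the
"migration in progress" guard take it without holding both VM locks.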
Co-developed-by: Sagi Shahar
Signed-off-by: Sagi Shahar
Co-developed-by: Vishal Annapurve
Signed-off-by: Vishal Annapurve
Signed-off-by: Ackerley Tng
Signed-off-by: Ryan Afranji
---
 arch/x86/kvm/svm/sev.c | 60 ++++++++++------------------------------
 arch/x86/kvm/x86.c     | 62 ++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.h     |  6 ++++
 3 files changed, 82 insertions(+), 46 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 89c06cfcc200..b3048ec411e2 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1836,47 +1836,6 @@ static bool is_cmd_allowed_from_mirror(u32 cmd_id)
 	return false;
 }
 
-static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
-{
-	int r = -EBUSY;
-
-	if (dst_kvm == src_kvm)
-		return -EINVAL;
-
-	/*
-	 * Bail if these VMs are already involved in a migration to avoid
-	 * deadlock between two VMs trying to migrate to/from each other.
-	 */
-	if (atomic_cmpxchg_acquire(&dst_kvm->migration_in_progress, 0, 1))
-		return -EBUSY;
-
-	if (atomic_cmpxchg_acquire(&src_kvm->migration_in_progress, 0, 1))
-		goto release_dst;
-
-	r = -EINTR;
-	if (mutex_lock_killable(&dst_kvm->lock))
-		goto release_src;
-	if (mutex_lock_killable_nested(&src_kvm->lock, SINGLE_DEPTH_NESTING))
-		goto unlock_dst;
-	return 0;
-
-unlock_dst:
-	mutex_unlock(&dst_kvm->lock);
-release_src:
-	atomic_set_release(&src_kvm->migration_in_progress, 0);
-release_dst:
-	atomic_set_release(&dst_kvm->migration_in_progress, 0);
-	return r;
-}
-
-static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
-{
-	mutex_unlock(&dst_kvm->lock);
-	mutex_unlock(&src_kvm->lock);
-	atomic_set_release(&dst_kvm->migration_in_progress, 0);
-	atomic_set_release(&src_kvm->migration_in_progress, 0);
-}
-
 /* vCPU mutex subclasses. */
 enum sev_migration_role {
 	SEV_MIGRATION_SOURCE = 0,
@@ -2057,9 +2016,12 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 		return -EBADF;
 
 	source_kvm = fd_file(f)->private_data;
-	ret = sev_lock_two_vms(kvm, source_kvm);
+	ret = kvm_mark_migration_in_progress(kvm, source_kvm);
 	if (ret)
 		return ret;
+	ret = kvm_lock_two_vms(kvm, source_kvm);
+	if (ret)
+		goto out_mark_migration_done;
 
 	if (kvm->arch.vm_type != source_kvm->arch.vm_type ||
 	    sev_guest(kvm) || !sev_guest(source_kvm)) {
@@ -2105,7 +2067,9 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 	put_misc_cg(cg_cleanup_sev->misc_cg);
 	cg_cleanup_sev->misc_cg = NULL;
 out_unlock:
-	sev_unlock_two_vms(kvm, source_kvm);
+	kvm_unlock_two_vms(kvm, source_kvm);
+out_mark_migration_done:
+	kvm_mark_migration_done(kvm, source_kvm);
 	return ret;
 }
 
@@ -2779,9 +2743,12 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 		return -EBADF;
 
 	source_kvm = fd_file(f)->private_data;
-	ret = sev_lock_two_vms(kvm, source_kvm);
+	ret = kvm_mark_migration_in_progress(kvm, source_kvm);
 	if (ret)
 		return ret;
+	ret = kvm_lock_two_vms(kvm, source_kvm);
+	if (ret)
+		goto e_mark_migration_done;
 
 	/*
 	 * Mirrors of mirrors should work, but let's not get silly.  Also
@@ -2821,9 +2788,10 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 	 * KVM contexts as the original, and they may have different
 	 * memory-views.
 	 */
-e_unlock:
-	sev_unlock_two_vms(kvm, source_kvm);
+	kvm_unlock_two_vms(kvm, source_kvm);
+e_mark_migration_done:
+	kvm_mark_migration_done(kvm, source_kvm);
 	return ret;
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f6ce044b090a..422c66a033d2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4502,6 +4502,68 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
+int kvm_mark_migration_in_progress(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+	int r;
+
+	if (dst_kvm == src_kvm)
+		return -EINVAL;
+
+	/*
+	 * Bail if these VMs are already involved in a migration to avoid
+	 * deadlock between two VMs trying to migrate to/from each other.
+	 */
+	r = -EBUSY;
+	if (atomic_cmpxchg_acquire(&dst_kvm->migration_in_progress, 0, 1))
+		return r;
+
+	if (atomic_cmpxchg_acquire(&src_kvm->migration_in_progress, 0, 1))
+		goto release_dst;
+
+	return 0;
+
+release_dst:
+	atomic_set_release(&dst_kvm->migration_in_progress, 0);
+	return r;
+}
+EXPORT_SYMBOL_GPL(kvm_mark_migration_in_progress);
+
+void kvm_mark_migration_done(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+	atomic_set_release(&dst_kvm->migration_in_progress, 0);
+	atomic_set_release(&src_kvm->migration_in_progress, 0);
+}
+EXPORT_SYMBOL_GPL(kvm_mark_migration_done);
+
+int kvm_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+	int r;
+
+	if (dst_kvm == src_kvm)
+		return -EINVAL;
+
+	r = -EINTR;
+	if (mutex_lock_killable(&dst_kvm->lock))
+		return r;
+
+	if (mutex_lock_killable_nested(&src_kvm->lock, SINGLE_DEPTH_NESTING))
+		goto unlock_dst;
+
+	return 0;
+
+unlock_dst:
+	mutex_unlock(&dst_kvm->lock);
+	return r;
+}
+EXPORT_SYMBOL_GPL(kvm_lock_two_vms);
+
+void kvm_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+	mutex_unlock(&dst_kvm->lock);
+	mutex_unlock(&src_kvm->lock);
+}
+EXPORT_SYMBOL_GPL(kvm_unlock_two_vms);
+
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
  *
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 88a9475899c8..508f9509546c 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -649,4 +649,10 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl,
 
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
 
+int kvm_mark_migration_in_progress(struct kvm *dst_kvm, struct kvm *src_kvm);
+void kvm_mark_migration_done(struct kvm *dst_kvm, struct kvm *src_kvm);
+
+int kvm_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm);
+void kvm_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm);
+
 #endif

From patchwork Fri May 16 19:19:30 2025
X-Patchwork-Submitter: Ryan Afranji
X-Patchwork-Id: 890755
Date: Fri, 16 May 2025 19:19:30 +0000
Message-ID: <7c51d4ae251323ce8c224aa362a4be616b4cfeba.1747368093.git.afranji@google.com>
Subject: [RFC PATCH v2 10/13] KVM: x86: Let moving encryption context be configurable
From: Ryan Afranji
To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com
Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com

From: Ackerley Tng

SEV-capable VMs may also use the KVM_X86_SW_PROTECTED_VM type, but they
will still need architecture-specific handling to move encryption context.
Hence, we let moving of encryption context be configurable and store that
configuration in a flag.
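For context, a sketch of the userspace side (illustration only, not part of
this patch; error handling elided): intrahost migration is still requested by
enabling KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM on the destination VM with the
source VM's fd, and with this change the arch hook only runs when the VM type
opted in by setting use_vm_enc_ctxt_op.

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* dst_vm_fd and src_vm_fd are KVM VM fds; returns 0 on success. */
    static int move_enc_context_from(int dst_vm_fd, int src_vm_fd)
    {
            struct kvm_enable_cap cap;

            memset(&cap, 0, sizeof(cap));
            cap.cap = KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM;
            cap.args[0] = src_vm_fd;    /* VM whose context is moved */

            return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
    }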
Co-developed-by: Vishal Annapurve
Signed-off-by: Vishal Annapurve
Signed-off-by: Ackerley Tng
Signed-off-by: Ryan Afranji
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/svm/sev.c          | 2 ++
 arch/x86/kvm/x86.c              | 9 ++++++++-
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 179618300270..db37ce814611 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1576,6 +1576,7 @@ struct kvm_arch {
 #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1)
 	struct kvm_mmu_memory_cache split_desc_cache;
 
+	bool use_vm_enc_ctxt_op;
 	gfn_t gfn_direct_bits;
 
 	/*
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 689521d9e26f..95083556d321 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -442,6 +442,8 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	if (ret)
 		goto e_no_asid;
 
+	kvm->arch.use_vm_enc_ctxt_op = true;
+
 	init_args.probe = false;
 	ret = sev_platform_init(&init_args);
 	if (ret)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 637540309456..3a7e05c47aa8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6624,7 +6624,14 @@ static int kvm_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 	if (r)
 		goto out_mark_migration_done;
 
-	r = kvm_x86_call(vm_move_enc_context_from)(kvm, source_kvm);
+	/*
+	 * Different types of VMs will allow userspace to define if moving
+	 * encryption context should be required.
+	 */
+	if (kvm->arch.use_vm_enc_ctxt_op &&
+	    kvm_x86_ops.vm_move_enc_context_from) {
+		r = kvm_x86_call(vm_move_enc_context_from)(kvm, source_kvm);
+	}
 
 	kvm_unlock_two_vms(kvm, source_kvm);
 out_mark_migration_done:

From patchwork Fri May 16 19:19:32 2025
X-Patchwork-Submitter: Ryan Afranji
X-Patchwork-Id: 890754
Date: Fri, 16 May 2025 19:19:32 +0000
Message-ID: <8a479dcaa271976e784d8b592e75d883a2c7721a.1747368093.git.afranji@google.com>
Subject: [RFC PATCH v2 12/13] KVM: selftests: Generalize migration functions from sev_migrate_tests.c
From: Ryan Afranji
To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com
Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com

From: Ackerley Tng

These functions will be used in private (guest mem) migration tests.
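A sketch of how a test might use the generalized helpers (hypothetical usage;
src_vm and dst_vm are assumed to have been created elsewhere):

    /* Migrate src_vm into dst_vm; vm_migrate_from() asserts success. */
    vm_migrate_from(dst_vm, src_vm);

    /* The double-underscore variant returns the raw result instead: */
    ret = __vm_migrate_from(src_vm, dst_vm);
    TEST_ASSERT(ret == -1 && errno == EIO,
                "VM that was migrated from should be dead");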
Signed-off-by: Ackerley Tng
Signed-off-by: Ryan Afranji
---
 .../testing/selftests/kvm/include/kvm_util.h  | 13 +++++
 .../selftests/kvm/x86/sev_migrate_tests.c     | 48 +++++++------------
 2 files changed, 30 insertions(+), 31 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 68faa658b69e..80375d6456a5 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -378,6 +378,19 @@ static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
 	vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
 }
 
+static inline int __vm_migrate_from(struct kvm_vm *dst, struct kvm_vm *src)
+{
+	return __vm_enable_cap(dst, KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM, src->fd);
+}
+
+static inline void vm_migrate_from(struct kvm_vm *dst, struct kvm_vm *src)
+{
+	int ret;
+
+	ret = __vm_migrate_from(dst, src);
+	TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
+}
+
 static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
 					    uint64_t size, uint64_t attributes)
 {
diff --git a/tools/testing/selftests/kvm/x86/sev_migrate_tests.c b/tools/testing/selftests/kvm/x86/sev_migrate_tests.c
index 0a6dfba3905b..905cdf9b39b1 100644
--- a/tools/testing/selftests/kvm/x86/sev_migrate_tests.c
+++ b/tools/testing/selftests/kvm/x86/sev_migrate_tests.c
@@ -56,20 +56,6 @@ static struct kvm_vm *aux_vm_create(bool with_vcpus)
 	return vm;
 }
 
-static int __sev_migrate_from(struct kvm_vm *dst, struct kvm_vm *src)
-{
-	return __vm_enable_cap(dst, KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM, src->fd);
-}
-
-
-static void sev_migrate_from(struct kvm_vm *dst, struct kvm_vm *src)
-{
-	int ret;
-
-	ret = __sev_migrate_from(dst, src);
-	TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d", ret, errno);
-}
-
 static void test_sev_migrate_from(bool es)
 {
 	struct kvm_vm *src_vm;
@@ -81,13 +67,13 @@ static void test_sev_migrate_from(bool es)
 		dst_vms[i] = aux_vm_create(true);
 
 	/* Initial migration from the src to the first dst. */
-	sev_migrate_from(dst_vms[0], src_vm);
+	vm_migrate_from(dst_vms[0], src_vm);
 
 	for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
-		sev_migrate_from(dst_vms[i], dst_vms[i - 1]);
+		vm_migrate_from(dst_vms[i], dst_vms[i - 1]);
 
 	/* Migrate the guest back to the original VM. */
-	ret = __sev_migrate_from(src_vm, dst_vms[NR_MIGRATE_TEST_VMS - 1]);
+	ret = __vm_migrate_from(src_vm, dst_vms[NR_MIGRATE_TEST_VMS - 1]);
 	TEST_ASSERT(ret == -1 && errno == EIO,
 		    "VM that was migrated from should be dead. ret %d, errno: %d",
 		    ret, errno);
@@ -109,7 +95,7 @@ static void *locking_test_thread(void *arg)
 	for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
 		j = i % NR_LOCK_TESTING_THREADS;
-		__sev_migrate_from(input->vm, input->source_vms[j]);
+		__vm_migrate_from(input->vm, input->source_vms[j]);
 	}
 
 	return NULL;
@@ -146,7 +132,7 @@ static void test_sev_migrate_parameters(void)
 	vm_no_vcpu = vm_create_barebones();
 	vm_no_sev = aux_vm_create(true);
-	ret = __sev_migrate_from(vm_no_vcpu, vm_no_sev);
+	ret = __vm_migrate_from(vm_no_vcpu, vm_no_sev);
 	TEST_ASSERT(ret == -1 && errno == EINVAL,
 		    "Migrations require SEV enabled. ret %d, errno: %d", ret,
 		    errno);
@@ -160,25 +146,25 @@ static void test_sev_migrate_parameters(void)
 	sev_es_vm_init(sev_es_vm_no_vmsa);
 	__vm_vcpu_add(sev_es_vm_no_vmsa, 1);
 
-	ret = __sev_migrate_from(sev_vm, sev_es_vm);
+	ret = __vm_migrate_from(sev_vm, sev_es_vm);
 	TEST_ASSERT(
 		ret == -1 && errno == EINVAL,
ret: %d, errno: %d", ret, errno); - ret = __sev_migrate_from(sev_es_vm, sev_vm); + ret = __vm_migrate_from(sev_es_vm, sev_vm); TEST_ASSERT( ret == -1 && errno == EINVAL, "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d", ret, errno); - ret = __sev_migrate_from(vm_no_vcpu, sev_es_vm); + ret = __vm_migrate_from(vm_no_vcpu, sev_es_vm); TEST_ASSERT( ret == -1 && errno == EINVAL, "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d", ret, errno); - ret = __sev_migrate_from(vm_no_vcpu, sev_es_vm_no_vmsa); + ret = __vm_migrate_from(vm_no_vcpu, sev_es_vm_no_vmsa); TEST_ASSERT( ret == -1 && errno == EINVAL, "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d", @@ -331,14 +317,14 @@ static void test_sev_move_copy(void) sev_mirror_create(mirror_vm, sev_vm); - sev_migrate_from(dst_mirror_vm, mirror_vm); - sev_migrate_from(dst_vm, sev_vm); + vm_migrate_from(dst_mirror_vm, mirror_vm); + vm_migrate_from(dst_vm, sev_vm); - sev_migrate_from(dst2_vm, dst_vm); - sev_migrate_from(dst2_mirror_vm, dst_mirror_vm); + vm_migrate_from(dst2_vm, dst_vm); + vm_migrate_from(dst2_mirror_vm, dst_mirror_vm); - sev_migrate_from(dst3_mirror_vm, dst2_mirror_vm); - sev_migrate_from(dst3_vm, dst2_vm); + vm_migrate_from(dst3_mirror_vm, dst2_mirror_vm); + vm_migrate_from(dst3_vm, dst2_vm); kvm_vm_free(dst_vm); kvm_vm_free(sev_vm); @@ -360,8 +346,8 @@ static void test_sev_move_copy(void) sev_mirror_create(mirror_vm, sev_vm); - sev_migrate_from(dst_mirror_vm, mirror_vm); - sev_migrate_from(dst_vm, sev_vm); + vm_migrate_from(dst_mirror_vm, mirror_vm); + vm_migrate_from(dst_vm, sev_vm); kvm_vm_free(mirror_vm); kvm_vm_free(dst_mirror_vm);