
[1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called

Message ID 20220322172449.235575-2-mlevitsk@redhat.com
State Accepted
Commit 6fcee03df6a1a3101a77344be37bb85c6142d56c
Series [1/8] KVM: x86: avoid loading a vCPU after .vm_destroy was called

Commit Message

Maxim Levitsky March 22, 2022, 5:24 p.m. UTC
This can cause various unexpected issues, since the VM is partially
destroyed at that point.

For example, when AVIC is enabled, this causes avic_vcpu_load() to
access an entry in the physical ID page that was already freed by
.vm_destroy.
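
A minimal stand-alone sketch of the ordering bug, assuming nothing
beyond standard C; the struct and function names are illustrative
analogues of the kernel's svm_vm_destroy() and avic_vcpu_load(), not
kernel code:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative analogue only: "vm_destroy" frees a table, yet the
 * pre-patch teardown order still "loads" a vCPU afterwards. */
struct vm {
	unsigned long *avic_physical_id_table;
};

static void vm_destroy(struct vm *vm)
{
	free(vm->avic_physical_id_table);	/* the page is gone here... */
	vm->avic_physical_id_table = NULL;
}

static void vcpu_load(struct vm *vm, unsigned int vcpu_id)
{
	/* ...but the old ordering reached this point afterwards; without
	 * the NULL check this write would be a use-after-free. */
	if (vm->avic_physical_id_table)
		vm->avic_physical_id_table[vcpu_id] = 1;
	else
		printf("vcpu_load() after vm_destroy(): table already freed\n");
}

int main(void)
{
	struct vm vm = {
		.avic_physical_id_table = calloc(8, sizeof(unsigned long)),
	};

	vm_destroy(&vm);	/* pre-patch order: .vm_destroy first...     */
	vcpu_load(&vm, 0);	/* ...then a vCPU load touching freed memory */
	return 0;
}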

Fixes: 8221c1370056 ("svm: Manage vcpu load/unload when enable AVIC")
Cc: stable@vger.kernel.org

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/x86.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

Comments

Maxim Levitsky April 28, 2022, 5:47 a.m. UTC | #1
On Wed, 2022-03-30 at 14:07 +0200, Paolo Bonzini wrote:
> On 3/30/22 02:27, Sean Christopherson wrote:
> > Rather than split kvm_free_vcpus(), can we instead move the call to svm_vm_destroy()
> > by adding a second hook, .vm_teardown(), which is needed for TDX?  I.e. keep VMX
> > where it is by using vm_teardown, but effectively move SVM?
> > 
> > https://lore.kernel.org/all/1fa2d0db387a99352d44247728c5b8ae5f5cab4d.1637799475.git.isaku.yamahata@intel.com
> 
> I'd rather do that only for the TDX patches.
> 
> Paolo
> 
Any update on this patch? Looks like it is neither upstream nor in kvm/queue.

Best regards,
	Maxim Levitsky
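
For context, a hypothetical stand-alone mock of the two-hook split Sean
suggests above; kvm_x86_ops is stubbed, and the .vm_teardown name and
call sites are assumptions drawn from the quoted proposal, not merged
code:

#include <stdio.h>

struct kvm { int id; };

/* Stub of the x86 vendor-ops table with the proposed second hook. */
struct kvm_x86_ops {
	void (*vm_teardown)(struct kvm *kvm);	/* before vCPUs are freed */
	void (*vm_destroy)(struct kvm *kvm);	/* after vCPUs are freed  */
};

static void svm_vm_teardown(struct kvm *kvm)
{
	printf("SVM: free AVIC state before any final vCPU load\n");
}

static void vmx_vm_destroy(struct kvm *kvm)
{
	printf("VMX: teardown after all vCPUs are destroyed\n");
}

static void kvm_destroy_vcpus(struct kvm *kvm)
{
	printf("core: destroy vCPUs (each one is loaded a last time)\n");
}

/* Condensed teardown flow: SVM would move to the early hook, while
 * VMX would stay on the late one. */
static void kvm_arch_destroy_vm(struct kvm *kvm, const struct kvm_x86_ops *ops)
{
	if (ops->vm_teardown)
		ops->vm_teardown(kvm);
	kvm_destroy_vcpus(kvm);
	if (ops->vm_destroy)
		ops->vm_destroy(kvm);
}

int main(void)
{
	struct kvm kvm = { .id = 0 };
	const struct kvm_x86_ops svm_ops = { .vm_teardown = svm_vm_teardown };
	const struct kvm_x86_ops vmx_ops = { .vm_destroy = vmx_vm_destroy };

	kvm_arch_destroy_vm(&kvm, &svm_ops);
	kvm_arch_destroy_vm(&kvm, &vmx_ops);
	return 0;
}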

Patch

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d3a9ce07a565..ba920e537ddf 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11759,20 +11759,15 @@  static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
 	vcpu_put(vcpu);
 }
 
-static void kvm_free_vcpus(struct kvm *kvm)
+static void kvm_unload_vcpu_mmus(struct kvm *kvm)
 {
 	unsigned long i;
 	struct kvm_vcpu *vcpu;
 
-	/*
-	 * Unpin any mmu pages first.
-	 */
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		kvm_clear_async_pf_completion_queue(vcpu);
 		kvm_unload_vcpu_mmu(vcpu);
 	}
-
-	kvm_destroy_vcpus(kvm);
 }
 
 void kvm_arch_sync_events(struct kvm *kvm)
@@ -11878,11 +11873,12 @@  void kvm_arch_destroy_vm(struct kvm *kvm)
 		__x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
 		mutex_unlock(&kvm->slots_lock);
 	}
+	kvm_unload_vcpu_mmus(kvm);
 	static_call_cond(kvm_x86_vm_destroy)(kvm);
 	kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
 	kvm_pic_destroy(kvm);
 	kvm_ioapic_destroy(kvm);
-	kvm_free_vcpus(kvm);
+	kvm_destroy_vcpus(kvm);
 	kvfree(rcu_dereference_check(kvm->arch.apic_map, 1));
 	kfree(srcu_dereference_check(kvm->arch.pmu_event_filter, &kvm->srcu, 1));
 	kvm_mmu_uninit_vm(kvm);