| Message ID | 20210915171755.3773766-1-pgonda@google.com |
|---|---|
| State | Accepted |
| Commit | bb18a677746543e7f5eeb478129c92cedb0f9658 |
| Series | [V2] KVM: SEV: Acquire vcpu mutex when updating VMSA |
On 16/09/21 00:40, Marc Orr wrote:
> On Wed, Sep 15, 2021 at 10:18 AM Peter Gonda <pgonda@google.com> wrote:
>>
>> Adds vcpu mutex guard to the VMSA updating code. Refactors out
>> __sev_launch_update_vmsa() function to deal with per vCPU parts
>> of sev_launch_update_vmsa().
>
> Can you expand the changelog, and perhaps add a comment into the
> source code as well, to explain what grabbing the mutex protects us
> from? I assume that it's a poorly behaved user-space, rather than a
> race condition in a well-behaved user-space VMM, but I'm not certain.
>
> Other than that, the patch itself seems fine to me.

I added this:

    The update-VMSA ioctl touches data stored in struct kvm_vcpu, and
    therefore should not be performed concurrently with any VCPU ioctl
    that might cause KVM or the processor to use the same data.

Paolo
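To make the hazard concrete: SEV launch commands are issued on the VM file descriptor via KVM_MEMORY_ENCRYPT_OP, while KVM_RUN is issued on a vCPU file descriptor, so nothing on the userspace side keeps the two apart. Below is a minimal sketch of the racy pattern the mutex now guards against. The ioctls and struct kvm_sev_cmd are real UAPI; the thread layout, helper names, and fd setup are hypothetical and heavily elided.

```c
/*
 * Hypothetical two-thread VMM sketch: one thread runs a vCPU while
 * another issues KVM_SEV_LAUNCH_UPDATE_VMSA on the VM fd. Before this
 * patch, the VMSA update path read per-vCPU state without holding
 * vcpu->mutex, so these two ioctls could touch the same data at once.
 */
#include <linux/kvm.h>
#include <pthread.h>
#include <sys/ioctl.h>

static void *run_vcpu(void *arg)
{
	int vcpu_fd = *(int *)arg;

	/* KVM_RUN holds vcpu->mutex for the duration of the ioctl. */
	ioctl(vcpu_fd, KVM_RUN, 0);
	return NULL;
}

static int launch_update_vmsa(int vm_fd)
{
	struct kvm_sev_cmd cmd = {
		.id = KVM_SEV_LAUNCH_UPDATE_VMSA,
		/* sev_fd / error handling and earlier LAUNCH_* steps elided */
	};

	/*
	 * Issued on the VM fd, not a vCPU fd: without the fix, nothing
	 * serialized this against a concurrent KVM_RUN on any vCPU.
	 */
	return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
}

int main(void)
{
	int vm_fd = -1, vcpu_fd = -1;	/* assume KVM_CREATE_VM/VCPU done */
	pthread_t t;

	pthread_create(&t, NULL, run_vcpu, &vcpu_fd);
	launch_update_vmsa(vm_fd);	/* may race with KVM_RUN above */
	pthread_join(t, NULL);
	return 0;
}
```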
On 15/09/21 19:17, Peter Gonda wrote:
> Adds vcpu mutex guard to the VMSA updating code. Refactors out
> __sev_launch_update_vmsa() function to deal with per vCPU parts
> of sev_launch_update_vmsa().
>
> Fixes: ad73109ae7ec ("KVM: SVM: Provide support to launch and run an SEV-ES guest")
>
> [...]

Queued, thanks.

Paolo
```diff
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 75e0b21ad07c..766510fe3abb 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -595,43 +595,50 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 	return 0;
 }
 
-static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
+static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
+				    int *error)
 {
-	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
 	struct sev_data_launch_update_vmsa vmsa;
+	struct vcpu_svm *svm = to_svm(vcpu);
+	int ret;
+
+	/* Perform some pre-encryption checks against the VMSA */
+	ret = sev_es_sync_vmsa(svm);
+	if (ret)
+		return ret;
+
+	/*
+	 * The LAUNCH_UPDATE_VMSA command will perform in-place encryption of
+	 * the VMSA memory content (i.e it will write the same memory region
+	 * with the guest's key), so invalidate it first.
+	 */
+	clflush_cache_range(svm->vmsa, PAGE_SIZE);
+
+	vmsa.reserved = 0;
+	vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
+	vmsa.address = __sme_pa(svm->vmsa);
+	vmsa.len = PAGE_SIZE;
+	return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
+}
+
+static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
 	struct kvm_vcpu *vcpu;
 	int i, ret;
 
 	if (!sev_es_guest(kvm))
 		return -ENOTTY;
 
-	vmsa.reserved = 0;
-
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		struct vcpu_svm *svm = to_svm(vcpu);
-
-		/* Perform some pre-encryption checks against the VMSA */
-		ret = sev_es_sync_vmsa(svm);
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		ret = mutex_lock_killable(&vcpu->mutex);
 		if (ret)
 			return ret;
 
-		/*
-		 * The LAUNCH_UPDATE_VMSA command will perform in-place
-		 * encryption of the VMSA memory content (i.e it will write
-		 * the same memory region with the guest's key), so invalidate
-		 * it first.
-		 */
-		clflush_cache_range(svm->vmsa, PAGE_SIZE);
+		ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);
 
-		vmsa.handle = sev->handle;
-		vmsa.address = __sme_pa(svm->vmsa);
-		vmsa.len = PAGE_SIZE;
-		ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa,
-				    &argp->error);
+		mutex_unlock(&vcpu->mutex);
 		if (ret)
 			return ret;
-
-		svm->vcpu.arch.guest_state_protected = true;
 	}
 
 	return 0;
```
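A note on the lock choice above: vcpu->mutex works as the serialization point because every vCPU ioctl in KVM funnels through a common entry point that holds it, so taking the same mutex in sev_launch_update_vmsa() excludes KVM_RUN and every other vCPU ioctl for the duration of the firmware command. Using mutex_lock_killable() rather than mutex_lock() lets a fatally-signaled VMM task back out with an error instead of blocking uninterruptibly. The sketch below is a simplified paraphrase of that entry point, not verbatim kernel source; error paths and most cases are elided.

```c
/*
 * Simplified paraphrase of kvm_vcpu_ioctl() from virt/kvm/kvm_main.c:
 * every vCPU ioctl (KVM_RUN, KVM_GET_REGS, ...) runs with vcpu->mutex
 * held, which is why grabbing the same mutex in the SEV update path
 * serializes against all of them.
 */
static long kvm_vcpu_ioctl(struct file *filp,
			   unsigned int ioctl, unsigned long arg)
{
	struct kvm_vcpu *vcpu = filp->private_data;
	long r;

	if (mutex_lock_killable(&vcpu->mutex))
		return -EINTR;	/* fatal signal while waiting */
	switch (ioctl) {
	case KVM_RUN:
		r = kvm_arch_vcpu_ioctl_run(vcpu);
		break;
	/* ... all other vCPU ioctls elided ... */
	default:
		r = -ENOTTY;
	}
	mutex_unlock(&vcpu->mutex);
	return r;
}
```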
Adds vcpu mutex guard to the VMSA updating code. Refactors out
__sev_launch_update_vmsa() function to deal with per vCPU parts
of sev_launch_update_vmsa().

Fixes: ad73109ae7ec ("KVM: SVM: Provide support to launch and run an SEV-ES guest")

Signed-off-by: Peter Gonda <pgonda@google.com>
Cc: Marc Orr <marcorr@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: kvm@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---

V2
 * Refactor per vcpu work to separate function.
 * Remove check to skip already initialized VMSAs.
 * Removed vmsa struct zeroing.

---
 arch/x86/kvm/svm/sev.c | 53 ++++++++++++++++++++++++------------------
 1 file changed, 30 insertions(+), 23 deletions(-)