Message ID: 1457124368-2025-7-git-send-email-Suravee.Suthikulpanit@amd.com
State: New
On 03/07/2016 10:36 PM, Paolo Bonzini wrote:
>
> On 04/03/2016 21:46, Suravee Suthikulpanit wrote:
>> +static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
>> +{
>> +	struct vcpu_svm *svm = to_svm(vcpu);
>> +
>> +	kvm_lapic_set_vector(vec, avic_get_bk_page_entry(svm, APIC_IRR));
>> +
>> +	if (vcpu->mode == IN_GUEST_MODE) {
>> +		wrmsrl(SVM_AVIC_DOORBELL,
>> +		       __default_cpu_present_to_apicid(vcpu->cpu));
>> +	} else {
>> +		kvm_vcpu_kick(vcpu);
>> +	}
>
> You also need to add
>
> 	kvm_make_request(KVM_REQ_EVENT, vcpu);
>
> before the "if", similar to vmx_deliver_posted_interrupt.
>
> Paolo

Actually, I should only need that just before the kvm_vcpu_kick(vcpu),
shouldn't I? I don't think we need it in the case where we send the
doorbell.

Thanks,
Suravee
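For reference, a minimal sketch of the variant Suravee floats here, with
kvm_make_request(KVM_REQ_EVENT, ...) raised only on the kick path rather than
before the "if" as Paolo suggests. It reuses the helpers from the posted patch
(avic_get_bk_page_entry comes from an earlier patch in the series); this is an
illustration of the proposal, not the final code:

	static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
	{
		struct vcpu_svm *svm = to_svm(vcpu);

		/* Make the vector pending in the guest's APIC backing page IRR. */
		kvm_lapic_set_vector(vec, avic_get_bk_page_entry(svm, APIC_IRR));

		if (vcpu->mode == IN_GUEST_MODE) {
			/* Target is in guest mode: the doorbell makes AVIC scan IRR. */
			wrmsrl(SVM_AVIC_DOORBELL,
			       __default_cpu_present_to_apicid(vcpu->cpu));
		} else {
			/*
			 * Only on this path does KVM itself have to notice the new
			 * interrupt, hence the request before the kick here (Paolo's
			 * suggestion places it before the "if" instead).
			 */
			kvm_make_request(KVM_REQ_EVENT, vcpu);
			kvm_vcpu_kick(vcpu);
		}
	}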
Hi,

Please ignore this. I should have read the whole thread before replying :(
Sorry for the spam.

Suravee

On 03/14/2016 12:25 PM, Suravee Suthikulpanit wrote:
>
> On 03/07/2016 10:36 PM, Paolo Bonzini wrote:
>>
>> On 04/03/2016 21:46, Suravee Suthikulpanit wrote:
>>> +static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
>>> +{
>>> +	struct vcpu_svm *svm = to_svm(vcpu);
>>> +
>>> +	kvm_lapic_set_vector(vec, avic_get_bk_page_entry(svm, APIC_IRR));
>>> +
>>> +	if (vcpu->mode == IN_GUEST_MODE) {
>>> +		wrmsrl(SVM_AVIC_DOORBELL,
>>> +		       __default_cpu_present_to_apicid(vcpu->cpu));
>>> +	} else {
>>> +		kvm_vcpu_kick(vcpu);
>>> +	}
>>
>> You also need to add
>>
>> 	kvm_make_request(KVM_REQ_EVENT, vcpu);
>>
>> before the "if", similar to vmx_deliver_posted_interrupt.
>>
>> Paolo
>>
>
> Actually, I should only need that just before the kvm_vcpu_kick(vcpu),
> shouldn't I? I don't think we need it in the case where we send the
> doorbell.
>
> Thanks,
> Suravee
Hi,

On 03/09/2016 11:00 PM, Radim Krčmář wrote:
> 2016-03-09 12:10+0100, Paolo Bonzini:
>> On 08/03/2016 22:54, Radim Krčmář wrote:
>>> 2016-03-07 16:36+0100, Paolo Bonzini:
>>>> On 04/03/2016 21:46, Suravee Suthikulpanit wrote:
>>>>> +static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
>>>>> +{
>>>>> +	struct vcpu_svm *svm = to_svm(vcpu);
>>>>> +
>>>>> +	kvm_lapic_set_vector(vec, avic_get_bk_page_entry(svm, APIC_IRR));
>>>
>>> (I think that smp_mb here would make sense, even though we're fine now
>>> thanks to re-checking vcpu->mode in kvm_vcpu_kick.)
>>
>> Right, though only a smp_mb__after_atomic() is required (which is a
>> compiler barrier). It is similarly required in vmx.
>
> True, kvm_lapic_set_vector uses a lock prefix.
>
> (I thought it behaves like atomic_set, which would require MFENCE for
> correct ordering here ... I don't like smp_mb__after_atomic much
> because of the discrepancy on some atomic operations.)

So, should I just use smp_mb() in this case?

Suravee
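A minimal sketch of where the barrier under discussion would sit in the
delivery path, again using the helpers from the patch. Whether smp_mb() or
smp_mb__after_atomic() is the right primitive is exactly what is being debated
above; on x86, smp_mb__after_atomic() after a locked instruction reduces to a
compiler barrier:

	kvm_lapic_set_vector(vec, avic_get_bk_page_entry(svm, APIC_IRR));

	/*
	 * Order the IRR update against the vcpu->mode load below, so the
	 * doorbell/kick decision is not made on stale state while the new
	 * IRR bit is still invisible.  kvm_lapic_set_vector() is a locked
	 * RMW, so smp_mb__after_atomic() may be sufficient; smp_mb() is the
	 * conservative choice.
	 */
	smp_mb__after_atomic();

	if (vcpu->mode == IN_GUEST_MODE)
		wrmsrl(SVM_AVIC_DOORBELL,
		       __default_cpu_present_to_apicid(vcpu->cpu));
	else
		kvm_vcpu_kick(vcpu);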
Hi,

On 03/09/2016 06:10 PM, Paolo Bonzini wrote:
>>> +
>>> +	if (vcpu->mode == IN_GUEST_MODE) {
>>> +		wrmsrl(SVM_AVIC_DOORBELL,
>>> +		       __default_cpu_present_to_apicid(vcpu->cpu));
>>> +	} else {
>>> +		kvm_vcpu_kick(vcpu);
>>> +	}
>>
>> And what about
>> 	[...]
>> 	else if (!vcpu->...->is_running)
>> 		kvm_vcpu_kick(vcpu);
>> ?
>> The kick isn't needed unless the VCPU is scheduled out.
>> Or maybe just
>> 	if (vcpu->...->is_running)
>> 		wrmsrl()
>> 	else
>> 		kvm_vcpu_kick();
>> ?
>> Which doesn't use the information we have on top of AVIC, making our
>> logic a bit simpler.
>
> Yes, both of these should work. I like the latter.

Ok, I'll modify this to check the is_running bit of the AVIC Physical
APIC ID table entry to determine whether we should kick the vcpu.

Thanks,
Suravee
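A rough sketch of what the reworked check could look like. Here
avic_vcpu_is_running(), the avic_physical_id_entry field, and the mask name are
assumptions standing in for "read the is_running bit from this vCPU's AVIC
Physical APIC ID table entry", since that plumbing is not part of the quoted
hunk:

	/* Hypothetical helper: field and mask names are assumptions. */
	static bool avic_vcpu_is_running(struct kvm_vcpu *vcpu)
	{
		struct vcpu_svm *svm = to_svm(vcpu);
		u64 entry = READ_ONCE(*svm->avic_physical_id_entry);

		return !!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
	}

	static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
	{
		struct vcpu_svm *svm = to_svm(vcpu);

		kvm_lapic_set_vector(vec, avic_get_bk_page_entry(svm, APIC_IRR));

		if (avic_vcpu_is_running(vcpu))
			/* Hardware is running this vCPU; it will scan IRR on the doorbell. */
			wrmsrl(SVM_AVIC_DOORBELL,
			       __default_cpu_present_to_apicid(vcpu->cpu));
		else
			/* Scheduled out: kick so KVM re-evaluates pending events. */
			kvm_vcpu_kick(vcpu);
	}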
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c703149..8f11200 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -71,6 +71,8 @@ MODULE_DEVICE_TABLE(x86cpu, svm_cpu_id);
 #define SVM_FEATURE_DECODE_ASSIST  (1 << 7)
 #define SVM_FEATURE_PAUSE_FILTER   (1 << 10)
 
+#define SVM_AVIC_DOORBELL	0xc001011b
+
 #define NESTED_EXIT_HOST	0	/* Exit handled on host level */
 #define NESTED_EXIT_DONE	1	/* Exit caused nested vmexit  */
 #define NESTED_EXIT_CONTINUE	2	/* Further checks needed      */
@@ -236,6 +238,8 @@ static bool npt_enabled = true;
 static bool npt_enabled;
 #endif
 
+static struct kvm_x86_ops svm_x86_ops;
+
 /* allow nested paging (virtualized MMU) for all guests */
 static int npt = true;
 module_param(npt, int, S_IRUGO);
@@ -978,6 +982,8 @@ static __init int svm_hardware_setup(void)
 
 	if (avic) {
 		printk(KERN_INFO "kvm: AVIC enabled\n");
+	} else {
+		svm_x86_ops.deliver_posted_interrupt = NULL;
 	}
 
 	return 0;
@@ -3071,8 +3077,10 @@ static int clgi_interception(struct vcpu_svm *svm)
 	disable_gif(svm);
 
 	/* After a CLGI no interrupts should come */
-	svm_clear_vintr(svm);
-	svm->vmcb->control.v_irq = 0;
+	if (!avic) {
+		svm_clear_vintr(svm);
+		svm->vmcb->control.v_irq = 0;
+	}
 
 	mark_dirty(svm->vmcb, VMCB_INTR);
 
@@ -3647,6 +3655,9 @@ static int msr_interception(struct vcpu_svm *svm)
 
 static int interrupt_window_interception(struct vcpu_svm *svm)
 {
+	if (avic)
+		BUG();
+
 	kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 	svm_clear_vintr(svm);
 	svm->vmcb->control.v_irq = 0;
@@ -3961,7 +3972,7 @@ static inline void svm_inject_irq(struct vcpu_svm *svm, int irq)
 {
 	struct vmcb_control_area *control;
 
-
+	/* The following fields are ignored when AVIC is enabled */
 	control = &svm->vmcb->control;
 	control->int_vector = irq;
 	control->v_intr_prio = 0xf;
@@ -4042,6 +4053,20 @@ static void svm_sync_pir_to_irr(struct kvm_vcpu *vcpu)
 	return;
 }
 
+static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	kvm_lapic_set_vector(vec, avic_get_bk_page_entry(svm, APIC_IRR));
+
+	if (vcpu->mode == IN_GUEST_MODE) {
+		wrmsrl(SVM_AVIC_DOORBELL,
+		       __default_cpu_present_to_apicid(vcpu->cpu));
+	} else {
+		kvm_vcpu_kick(vcpu);
+	}
+}
+
 static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -4102,6 +4127,9 @@ static void enable_irq_window(struct kvm_vcpu *vcpu)
 	 * get that intercept, this function will be called again though and
 	 * we'll get the vintr intercept.
 	 */
+	if (avic)
+		return;
+
 	if (gif_set(svm) && nested_svm_intr(svm)) {
 		svm_set_vintr(svm);
 		svm_inject_irq(svm, 0x0);
@@ -4834,6 +4862,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.sched_in = svm_sched_in,
 
 	.pmu_ops = &amd_pmu_ops,
+	.deliver_posted_interrupt = svm_deliver_avic_intr,
 };
 
 static int __init svm_init(void)
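For context on the svm_x86_ops.deliver_posted_interrupt = NULL assignment
above: the generic lapic code only uses this hook when it is non-NULL, so
clearing it when AVIC is disabled keeps SVM on the legacy set-IRR-and-kick
path. Roughly (simplified from that era's kvm_lapic fixed-interrupt delivery,
so treat the exact shape as approximate):

	/* In __apic_accept_irq(), APIC_DM_FIXED delivery (simplified): */
	if (kvm_x86_ops->deliver_posted_interrupt) {
		/* VMX posted interrupts, or SVM AVIC with this patch. */
		kvm_x86_ops->deliver_posted_interrupt(vcpu, vector);
	} else {
		/* Legacy path: set IRR in the software APIC, then kick. */
		apic_set_irr(vector, apic);
		kvm_make_request(KVM_REQ_EVENT, vcpu);
		kvm_vcpu_kick(vcpu);
	}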
This patch introduces a new mechanism to inject interrupts using AVIC.
Since VINTR is not supported when AVIC is enabled, we need to inject
interrupts via the APIC backing page instead.

This patch also adds support for the AVIC doorbell, which is used by KVM
to signal a running vcpu to check IRR for injected interrupts.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 arch/x86/kvm/svm.c | 35 ++++++++++++++++++++++++++++++++---
 1 file changed, 32 insertions(+), 3 deletions(-)

-- 
1.9.1