From patchwork Fri Jun 6 17:48:26 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 31503
From: Stefano Stabellini
To: xen-devel@lists.xensource.com
Date: Fri, 6 Jun 2014 18:48:26 +0100
Message-ID: <1402076908-26740-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com, Stefano Stabellini
Subject: [Xen-devel] [PATCH v4 2/4] xen/arm: inflight irqs during migration
X-BeenThere: xen-devel@lists.xen.org
We need to take special care when migrating irqs that are already inflight
from one vcpu to another: the lr_pending and inflight lists are per-vcpu,
and so is the lock we take to protect them.

To avoid races, we set a new flag, GIC_IRQ_GUEST_MIGRATING, so that we can
recognize when an irq arrives while the previous one is still inflight
(given that we are only dealing with hardware interrupts here, this just
means that its LR hasn't been cleared yet on the old vcpu). If
GIC_IRQ_GUEST_MIGRATING is set, we only set GIC_IRQ_GUEST_QUEUED and
interrupt the other vcpu. When clearing the LR on the old vcpu, we take
special care of injecting the interrupt into the new vcpu. To do that we
need to release the old vcpu lock before taking the new vcpu lock.
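The flag protocol described above can be modeled in isolation. The following standalone sketch (all names are illustrative, not part of the patch; the real code uses Xen's atomic test_bit/set_bit helpers on pending_irq->status) shows the two halves of the handshake: an irq that arrives while MIGRATING is set is only marked QUEUED, and clearing the LR on the old vcpu clears MIGRATING and reports that the pending irq must be handed to the new vcpu.

```c
#include <assert.h>

/* Bit positions mirroring the patch's per-irq status flags. */
#define GIC_IRQ_GUEST_QUEUED     0
#define GIC_IRQ_GUEST_MIGRATING  4

/* Stand-in for the status word in struct pending_irq. */
static unsigned long irq_status;

/* Reduced model of vgic_vcpu_inject_irq(): while a migration is in
 * progress we only record that the irq is pending again; the old vcpu
 * still owns the LR, so nothing else is done. */
static void model_inject_irq(void)
{
    if (irq_status & (1UL << GIC_IRQ_GUEST_MIGRATING)) {
        irq_status |= 1UL << GIC_IRQ_GUEST_QUEUED;
        return;
    }
    /* normal injection path elided */
}

/* Reduced model of gic_update_one_lr(): when the old LR is finally
 * cleared, finish the migration; returns 1 if the irq must now be
 * queued on the target vcpu, 0 otherwise. */
static int model_clear_lr(void)
{
    int was_migrating = !!(irq_status & (1UL << GIC_IRQ_GUEST_MIGRATING));
    irq_status &= ~(1UL << GIC_IRQ_GUEST_MIGRATING);
    if (was_migrating && (irq_status & (1UL << GIC_IRQ_GUEST_QUEUED)))
        return 1;
    return 0;
}
```

In this model, an irq that fires mid-migration is re-raised exactly once, on the target vcpu, when the old LR is cleared.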
Signed-off-by: Stefano Stabellini
---
 xen/arch/arm/gic.c           |   23 +++++++++++++++++++++--
 xen/arch/arm/vgic.c          |   36 ++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/domain.h |    4 ++++
 3 files changed, 61 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 08ae23b..92391b4 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -677,10 +677,29 @@ static void gic_update_one_lr(struct vcpu *v, int i)
         clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
         p->lr = GIC_INVALID_LR;
         if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
-             test_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
+             test_bit(GIC_IRQ_GUEST_QUEUED, &p->status) &&
+             !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
             gic_raise_guest_irq(v, irq, p->priority);
-        else
+        else {
             list_del_init(&p->inflight);
+
+            if ( test_and_clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) &&
+                 test_bit(GIC_IRQ_GUEST_QUEUED, &p->status) )
+            {
+                struct vcpu *v_target;
+
+                spin_unlock(&v->arch.vgic.lock);
+                v_target = vgic_get_target_vcpu(v, irq);
+                spin_lock(&v_target->arch.vgic.lock);
+
+                gic_add_to_lr_pending(v_target, p);
+                if ( v_target->is_running )
+                    smp_send_event_check_mask(cpumask_of(v_target->processor));
+
+                spin_unlock(&v_target->arch.vgic.lock);
+                spin_lock(&v->arch.vgic.lock);
+            }
+        }
     }
 }
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index e527892..54d3676 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -377,6 +377,21 @@ read_as_zero:
     return 1;
 }
 
+static void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
+{
+    unsigned long flags;
+    struct pending_irq *p = irq_to_pending(old, irq);
+
+    /* nothing to do for virtual interrupts */
+    if ( p->desc == NULL )
+        return;
+
+    spin_lock_irqsave(&old->arch.vgic.lock, flags);
+    if ( !list_empty(&p->inflight) )
+        set_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
+    spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
+}
+
 struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
 {
     int target;
@@ -629,6 +644,21 @@ static int vgic_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
             }
             i++;
         }
+        i = 0;
+        while ( (i = find_next_bit((const unsigned long *) &tr, 32, i)) < 32 )
+        {
+            unsigned int irq, target, old_target;
+            struct vcpu *v_target, *v_old;
+
+            target = i % 8;
+
+            irq = offset + (i / 8);
+            v_target = v->domain->vcpu[target];
+            old_target = byte_read(rank->itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR)], 0, i/8);
+            v_old = v->domain->vcpu[old_target];
+            vgic_migrate_irq(v_old, v_target, irq);
+            i += 8 - target;
+        }
         if ( dabt.size == 2 )
             rank->itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR)] = *r;
         else
@@ -771,6 +801,12 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
+    if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &n->status) )
+    {
+        set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
+        goto out;
+    }
+
     if ( !list_empty(&n->inflight) )
     {
         set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index d689675..743c020 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -54,11 +54,15 @@ struct pending_irq
      * GIC_IRQ_GUEST_ENABLED: the guest IRQ is enabled at the VGICD
      * level (GICD_ICENABLER/GICD_ISENABLER).
      *
+     * GIC_IRQ_GUEST_MIGRATING: the irq is being migrated to a different
+     * vcpu.
+     *
      */
 #define GIC_IRQ_GUEST_QUEUED   0
 #define GIC_IRQ_GUEST_ACTIVE   1
 #define GIC_IRQ_GUEST_VISIBLE  2
 #define GIC_IRQ_GUEST_ENABLED  3
+#define GIC_IRQ_GUEST_MIGRATING  4
     unsigned long status;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
     int irq;
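A note on the locking in the gic.c hunk: gic_update_one_lr deliberately never holds two per-vcpu vgic locks at once; it drops the old vcpu's lock, takes the target's, queues the irq, then reacquires the old lock. Holding both simultaneously could deadlock if another CPU took the same pair in the opposite order. The following single-threaded sketch of that ordering uses POSIX mutexes in place of Xen spinlocks; all names (old_lock, target_lock, migrate_step, moved) are illustrative only.

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical stand-ins for the two per-vcpu arch.vgic.lock fields. */
static pthread_mutex_t old_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t target_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stands in for moving the pending_irq onto the target's lr_pending list. */
static int moved;

/* Mirrors the ordering in gic_update_one_lr(): the caller holds
 * old_lock (as gic_update_one_lr holds v's vgic lock); we release it
 * before taking target_lock, so the two locks are never held together,
 * and reacquire old_lock before returning to the caller. */
static void migrate_step(void)
{
    pthread_mutex_unlock(&old_lock);      /* spin_unlock(&v->...lock) */
    pthread_mutex_lock(&target_lock);     /* spin_lock(&v_target->...lock) */
    moved = 1;                            /* gic_add_to_lr_pending(v_target, p) */
    pthread_mutex_unlock(&target_lock);
    pthread_mutex_lock(&old_lock);        /* reacquire before returning */
}
```

Because the old lock is dropped first, a concurrent vgic_migrate_irq on the old vcpu can run in the window; the QUEUED/MIGRATING flags, not the locks, are what keep the irq from being lost or injected twice.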