From patchwork Thu Mar 27 15:13:38 2014
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 27204
From: Stefano Stabellini
To: xen-devel@lists.xensource.com
Date: Thu, 27 Mar 2014 15:13:38 +0000
Message-ID: <1395933219-18495-9-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com, Stefano Stabellini
Subject: [Xen-devel] [PATCH v6 09/10] xen/arm: don't protect GICH and lr_queue accesses with gic.lock

GICH is banked; protect accesses to it by disabling interrupts.

Protect lr_queue accesses with the vgic.lock only.

gic.lock now only protects accesses to the GICD.

Signed-off-by: Stefano Stabellini

---

Changes in v5:
- gic_remove_from_queues needs to be protected with the vgic lock;
- introduce ASSERTs to check that the vgic is locked and interrupts are
  disabled.

Changes in v4:
- improved in-code comments.
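As an aside, the locking discipline this patch establishes can be
summarised with a short sketch (illustrative only, not code from this
patch; example_queue_irq is a hypothetical helper, and the usual Xen
headers such as xen/sched.h and xen/list.h are assumed):

    /*
     * Locking rules after this patch (sketch, not part of the series):
     *  - GICH is banked per-pcpu: only touch it with local interrupts
     *    disabled;
     *  - lr_queue/lr_pending are per-vcpu: only touch them while
     *    holding v->arch.vgic.lock;
     *  - gic.lock now only serialises accesses to the GICD.
     */
    static void example_queue_irq(struct vcpu *v, struct pending_irq *p)
    {
        unsigned long flags;

        /* Taking the vgic lock with spin_lock_irqsave() both
         * serialises lr_queue accesses and disables local interrupts,
         * which is what the ASSERT(!local_irq_is_enabled()) checks in
         * front of the banked GICH accesses rely on. */
        spin_lock_irqsave(&v->arch.vgic.lock, flags);
        if ( list_empty(&p->lr_queue) )
            list_add_tail(&p->lr_queue, &v->arch.vgic.lr_pending);
        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
    }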
---
 xen/arch/arm/gic.c           |   35 ++++++++++++++++-------------------
 xen/arch/arm/vgic.c          |    9 +++++++--
 xen/include/asm-arm/domain.h |    5 ++++-
 3 files changed, 27 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 218e3c6..611990d 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -119,6 +119,7 @@ void gic_save_state(struct vcpu *v)
 void gic_restore_state(struct vcpu *v)
 {
     int i;
+    ASSERT(!local_irq_is_enabled());
 
     if ( is_idle_vcpu(v) )
         return;
@@ -630,6 +631,7 @@ static inline void gic_set_lr(int lr, struct pending_irq *p,
 {
     uint32_t lr_reg;
 
+    ASSERT(!local_irq_is_enabled());
     BUG_ON(lr >= nr_lrs);
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
@@ -649,6 +651,8 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
 {
     struct pending_irq *iter;
 
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
     if ( !list_empty(&n->lr_queue) )
         return;
 
@@ -669,19 +673,20 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     struct pending_irq *p = irq_to_pending(v, virtual_irq);
     unsigned long flags;
 
-    spin_lock_irqsave(&gic.lock, flags);
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
     if ( !list_empty(&p->lr_queue) )
         list_del_init(&p->lr_queue);
-    spin_unlock_irqrestore(&gic.lock, flags);
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
 void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         unsigned int priority)
 {
     int i;
-    unsigned long flags;
     struct pending_irq *n = irq_to_pending(v, virtual_irq);
 
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
     if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status))
     {
         if ( v == current )
@@ -689,23 +694,17 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         return;
     }
 
-    spin_lock_irqsave(&gic.lock, flags);
-
     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
             gic_set_lr(i, irq_to_pending(v, virtual_irq), GICH_LR_PENDING);
-            goto out;
+            return;
         }
     }
 
     gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
-
-out:
-    spin_unlock_irqrestore(&gic.lock, flags);
-    return;
 }
 
 static void gic_clear_one_lr(struct vcpu *v, int i)
@@ -715,6 +714,7 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
     int irq;
 
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
+    ASSERT(!local_irq_is_enabled());
 
     lr = GICH[GICH_LR + i];
     irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
@@ -729,8 +729,6 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
     } else if ( lr & GICH_LR_PENDING ) {
         clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
     } else {
-        spin_lock(&gic.lock);
-
         GICH[GICH_LR + i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
@@ -744,8 +742,6 @@ static void gic_clear_one_lr(struct vcpu *v, int i)
             gic_raise_guest_irq(v, irq, p->priority);
         } else
             list_del_init(&p->inflight);
-
-        spin_unlock(&gic.lock);
     }
 }
 
@@ -776,11 +772,11 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if ( i >= nr_lrs ) return;
 
-        spin_lock_irqsave(&gic.lock, flags);
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
        gic_set_lr(i, p, GICH_LR_PENDING);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
-        spin_unlock_irqrestore(&gic.lock, flags);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     }
 }
 
@@ -788,13 +784,12 @@ static void gic_restore_pending_irqs(struct vcpu *v)
 void gic_clear_pending_irqs(struct vcpu *v)
 {
     struct pending_irq *p, *t;
-    unsigned long flags;
 
-    spin_lock_irqsave(&gic.lock, flags);
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
     v->arch.lr_mask = 0;
     list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
         list_del_init(&p->lr_queue);
-    spin_unlock_irqrestore(&gic.lock, flags);
 }
 
 int gic_events_need_delivery(void)
@@ -805,6 +800,8 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
+    ASSERT(!local_irq_is_enabled());
+
     gic_restore_pending_irqs(current);
 
     if ( !list_empty(&current->arch.vgic.lr_pending) && lr_all_full() )
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index dc3a75f..bd15be7 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -393,8 +393,13 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
             vcpu_info(current, evtchn_upcall_pending) &&
             list_empty(&p->inflight) )
             vgic_vcpu_inject_irq(v, irq);
-        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_raise_guest_irq(v, irq, p->priority);
+        else {
+            unsigned long flags;
+            spin_lock_irqsave(&v->arch.vgic.lock, flags);
+            if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+                gic_raise_guest_irq(v, irq, p->priority);
+            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        }
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
         i++;
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 2d94d59..dcbeba1 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -67,7 +67,10 @@ struct pending_irq
      * vgic.inflight_irqs */
     struct list_head inflight;
     /* lr_queue is used to append instances of pending_irq to
-     * gic.lr_pending */
+     * lr_pending. lr_pending is a per vcpu queue, therefore lr_queue
+     * accesses are protected with the vgic lock.
+     * TODO: when implementing irq migration, taking only the current
+     * vgic lock is not going to be enough. */
     struct list_head lr_queue;
 };
 
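A usage note on the gic_clear_pending_irqs() change above: the function
now ASSERTs that the vgic lock is held instead of taking gic.lock
itself, so responsibility for locking moves to the caller. A minimal
caller sketch (hypothetical, not code from this series;
example_clear_pending is an invented name):

    /* Hypothetical caller: the vgic lock must be held, with local
     * interrupts disabled, around gic_clear_pending_irqs(). */
    static void example_clear_pending(struct vcpu *v)
    {
        unsigned long flags;

        spin_lock_irqsave(&v->arch.vgic.lock, flags);
        gic_clear_pending_irqs(v);
        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
    }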