From patchwork Mon Jul 28 16:42:43 2014
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 34392
Message-ID: <53D67D83.1010406@linaro.org>
Date: Mon, 28 Jul 2014 17:42:43 +0100
From: Julien Grall
To: Stefano Stabellini
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH v9 05/10] xen/arm: physical irq follow virtual irq
References: <1406223192-26267-5-git-send-email-stefano.stabellini@eu.citrix.com> <53D67097.1020707@linaro.org>
List-Id: xen-devel@lists.xen.org
On 07/28/2014 05:20 PM, Stefano Stabellini wrote:
> On Mon, 28 Jul 2014, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 07/24/2014 06:33 PM, Stefano Stabellini wrote:
>>> +void arch_move_irqs(struct vcpu *v)
>>> +{
>>> +    const cpumask_t *cpu_mask = cpumask_of(v->processor);
>>> +    struct domain *d = v->domain;
>>> +    struct pending_irq *p;
>>> +    struct vcpu *v_target;
>>> +    int i;
>>> +
>>> +    for ( i = 32; i < d->arch.vgic.nr_lines; i++ )
>>
>> Sorry, I didn't spot this error until now.
>>
>> For the VGIC, nr_lines contains the number of *SPIs*, whereas in the
>> GIC structure it is the number of IRQs... the name is very confusing.
>> I have a patch that renames nr_lines to nr_spis and adds a
>> vgic_num_irqs macro.
>
> I couldn't parse this sentence.

Sorry, it was not very clear.

> I guess you are saying that vgic.nr_lines doesn't represent the number
> of spis?

Yes. In the VGIC structure, nr_lines = number of SPIs. In the GIC
structure, nr_lines = number of IRQs.

>> I plan to send it with my device passthrough patch series. As the
>> patch may help you, it may be better if you carry it.
>
> Please append it here so I can have a look.

commit 534bdbbbd65b10f2780898bda2db5cdfc892dc34
Author: Julien Grall
Date:   Fri Jul 18 12:25:18 2014 +0100

    xen/arm: vgic: Rename nr_lines into nr_spis

    The field nr_lines in the arch_domain vgic structure contains the
    number of SPIs for the emulated GIC. Using nr_lines causes confusion
    with the GIC code, where it means the number of IRQs. This can lead
    to coding errors.
    Also introduce vgic_num_irqs to get the number of IRQs handled by
    the emulated GIC.

    Signed-off-by: Julien Grall

Regards,

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 19b2167..ee4f6ca 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -432,8 +432,6 @@ static int gicv2v_setup(struct domain *d)
         d->arch.vgic.cbase = GUEST_GICC_BASE;
     }
 
-    d->arch.vgic.nr_lines = 0;
-
     /*
      * Map the gic virtual cpu interface in the gic cpu interface
      * region of the guest.
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index ae31dbf..fbda349 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -54,7 +54,7 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
         /* No secure world support for guests. */
         vgic_lock(v);
         *r = ( (v->domain->max_vcpus << 5) & GICD_TYPE_CPUS )
-            |( ((v->domain->arch.vgic.nr_lines / 32)) & GICD_TYPE_LINES );
+            |( ((v->domain->arch.vgic.nr_spis / 32)) & GICD_TYPE_LINES );
         vgic_unlock(v);
         return 1;
     case GICD_IIDR:
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 9f7ed4d..3204131 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -59,13 +59,10 @@ int domain_vgic_init(struct domain *d)
 
     d->arch.vgic.ctlr = 0;
 
-    /* Currently nr_lines in vgic and gic doesn't have the same meanings
-     * Here nr_lines = number of SPIs
-     */
     if ( is_hardware_domain(d) )
-        d->arch.vgic.nr_lines = gic_number_lines() - 32;
+        d->arch.vgic.nr_spis = gic_number_lines() - 32;
     else
-        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
+        d->arch.vgic.nr_spis = 0; /* We don't need SPIs for the guest */
 
     switch ( gic_hw_version() )
     {
@@ -83,11 +80,11 @@ int domain_vgic_init(struct domain *d)
         return -ENOMEM;
 
     d->arch.vgic.pending_irqs =
-        xzalloc_array(struct pending_irq, d->arch.vgic.nr_lines);
+        xzalloc_array(struct pending_irq, d->arch.vgic.nr_spis);
     if ( d->arch.vgic.pending_irqs == NULL )
         return -ENOMEM;
 
-    for (i=0; i<d->arch.vgic.nr_lines; i++)
+    for (i=0; i<d->arch.vgic.nr_spis; i++)
     {
         INIT_LIST_HEAD(&d->arch.vgic.pending_irqs[i].inflight);
         INIT_LIST_HEAD(&d->arch.vgic.pending_irqs[i].lr_queue);
@@ -230,7 +227,7 @@ void arch_move_irqs(struct vcpu *v)
     struct vcpu *v_target;
     int i;
 
-    for ( i = 32; i < d->arch.vgic.nr_lines; i++ )
+    for ( i = 32; i < d->arch.vgic.nr_spis; i++ )
     {
         v_target = vgic_get_target_vcpu(v, i);
         p = irq_to_pending(v_target, i);
@@ -354,7 +351,7 @@ int vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode, int
 struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
 {
     struct pending_irq *n;
-    /* Pending irqs allocation strategy: the first vgic.nr_lines irqs
+    /* Pending irqs allocation strategy: the first vgic.nr_spis irqs
      * are used for SPIs; the rests are used for per cpu irqs */
     if ( irq < 32 )
         n = &v->arch.vgic.pending_irqs[irq];
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 32d0554..5719fe5 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -89,7 +89,7 @@ struct arch_domain
      */
     spinlock_t lock;
     int ctlr;
-    int nr_lines; /* Number of SPIs */
+    int nr_spis; /* Number of SPIs */
     struct vgic_irq_rank *shared_irqs;
     /*
      * SPIs are domain global, SGIs and PPIs are per-VCPU and stored in
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 5d6a8ad..b94de8c 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -101,7 +101,7 @@ struct vgic_ops {
 };
 
 /* Number of ranks of interrupt registers for a domain */
-#define DOMAIN_NR_RANKS(d) (((d)->arch.vgic.nr_lines+31)/32)
+#define DOMAIN_NR_RANKS(d) (((d)->arch.vgic.nr_spis+31)/32)
 
 #define vgic_lock(v)   spin_lock_irq(&(v)->domain->arch.vgic.lock)
 #define vgic_unlock(v) spin_unlock_irq(&(v)->domain->arch.vgic.lock)
@@ -155,6 +155,8 @@ enum gic_sgi_mode;
  */
 #define REG_RANK_INDEX(b, n, s) ((((n) >> s) & ((b)-1)) % 32)
 
+#define vgic_num_irqs(d) ((d)->arch.vgic.nr_spis + 32)
+
 extern int domain_vgic_init(struct domain *d);
 extern void domain_vgic_free(struct domain *d);
 extern int vcpu_vgic_init(struct vcpu *v);