From patchwork Thu Jul 31 15:00:41 2014
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 34666
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Thu, 31 Jul 2014 16:00:41 +0100
Message-Id: <1406818852-31856-11-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1406818852-31856-1-git-send-email-julien.grall@linaro.org>
References: <1406818852-31856-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>, tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 10/21] xen/arm: Implement hypercall PHYSDEVOP_{, un}map_pirq
The physdev sub-hypercalls PHYSDEVOP_{,un}map_pirq allow the toolstack to
assign/deassign a physical IRQ to the guest (via the "irqs" config option
for xl).

For now, we allow only SPIs to be mapped to the guest. The type
MAP_PIRQ_TYPE_GSI is used for this purpose.

The virtual IRQ number is allocated by Xen. The toolstack has to specify
the number of SPIs handled by the vGIC via a hypercall.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Stefano Stabellini <stefano.stabellini@citrix.com>

---
    I'm wondering if we should introduce an alias of MAP_PIRQ_TYPE_GSI
    for ARM. It would be less confusing for the user.

    Changes in v2:
        - Add PHYSDEVOP_unmap_pirq
        - Rework commit message
        - Add functions to allocate/release a VIRQ
        - is_routable_irq has been renamed to is_assignable_irq
---
 xen/arch/arm/physdev.c       | 120 +++++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/vgic.c          |  51 ++++++++++++++++++
 xen/include/asm-arm/domain.h |   1 +
 xen/include/asm-arm/vgic.h   |   5 ++
 4 files changed, 175 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index 61b4a18..9333aa0 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -8,13 +8,129 @@
 #include
 #include
 #include
+#include
+#include
+#include
+#include
 #include
+#include
+
+static int physdev_map_pirq(domid_t domid, int type, int index, int *pirq_p)
+{
+    struct domain *d;
+    int ret;
+    int irq = index;
+    int virq = 0;
+
+    d = rcu_lock_domain_by_any_id(domid);
+    if ( d == NULL )
+        return -ESRCH;
+
+    ret = xsm_map_domain_pirq(XSM_TARGET, d);
+    if ( ret )
+        goto free_domain;
+
+    /* For now we only support GSI */
+    if ( type != MAP_PIRQ_TYPE_GSI )
+    {
+        ret = -EINVAL;
+        dprintk(XENLOG_G_ERR, "dom%u: wrong map_pirq type 0x%x\n",
+                d->domain_id, type);
+        goto free_domain;
+    }
+
+    if ( !is_assignable_irq(irq) )
+    {
+        ret = -EINVAL;
+        dprintk(XENLOG_G_ERR, "IRQ%u is not routable to a guest\n", irq);
+        goto free_domain;
+    }
+
+    ret = -EPERM;
+    if ( !irq_access_permitted(current->domain, irq) )
+        goto free_domain;
+
+    virq = vgic_allocate_virq(d, irq);
+    ret = -EMFILE;
+    if ( virq == -1 )
+        goto free_domain;
+
+    ret = route_irq_to_guest(d, virq, irq, "routed IRQ");
+
+    if ( !ret )
+        *pirq_p = virq;
+    else
+        vgic_free_virq(d, virq);
+
+free_domain:
+    rcu_unlock_domain(d);
+
+    return ret;
+}
+
+int physdev_unmap_pirq(domid_t domid, int pirq)
+{
+    struct domain *d;
+    int ret;
+
+    d = rcu_lock_domain_by_any_id(domid);
+    if ( d == NULL )
+        return -ESRCH;
+
+    ret = xsm_unmap_domain_pirq(XSM_TARGET, d);
+    if ( ret )
+        goto free_domain;
+
+    ret = release_guest_irq(d, pirq);
+    if ( ret )
+        goto free_domain;
+
+    vgic_free_virq(d, pirq);
+
+free_domain:
+    rcu_unlock_domain(d);
+
+    return ret;
+}
 
 int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
-    return -ENOSYS;
+    int ret;
+
+    switch ( cmd )
+    {
+    case PHYSDEVOP_map_pirq:
+    {
+        physdev_map_pirq_t map;
+
+        ret = -EFAULT;
+        if ( copy_from_guest(&map, arg, 1) != 0 )
+            break;
+
+        ret = physdev_map_pirq(map.domid, map.type, map.index, &map.pirq);
+
+        if ( __copy_to_guest(arg, &map, 1) )
+            ret = -EFAULT;
+    }
+    break;
+
+    case PHYSDEVOP_unmap_pirq:
+    {
+        physdev_unmap_pirq_t unmap;
+
+        ret = -EFAULT;
+        if ( copy_from_guest(&unmap, arg, 1) != 0 )
+            break;
+
+        ret = physdev_unmap_pirq(unmap.domid, unmap.pirq);
+    }
+    break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    return ret;
 }
 
 /*
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 2a5fc18..644742e 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -81,6 +81,8 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
         return -ENODEV;
     }
 
+    spin_lock_init(&d->arch.vgic.lock);
+
     d->arch.vgic.shared_irqs =
         xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
     if ( d->arch.vgic.shared_irqs == NULL )
@@ -108,6 +110,11 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
 
     d->arch.vgic.handler->domain_init(d);
 
+    d->arch.vgic.allocated_spis =
+        xzalloc_array(unsigned long, BITS_TO_LONGS(d->arch.vgic.nr_spis));
+    if ( !d->arch.vgic.allocated_spis )
+        return -ENOMEM;
+
     return 0;
 }
 
@@ -444,6 +451,50 @@ void arch_evtchn_inject(struct vcpu *v)
     vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
 }
 
+int vgic_allocate_virq(struct domain *d, unsigned int irq)
+{
+    unsigned int spi;
+    int virq = -1;
+
+    /* Hardware domain has IRQ mapped 1:1 */
+    if ( is_hardware_domain(d) )
+        return irq;
+
+    spin_lock(&d->arch.vgic.lock);
+
+    spi = find_first_zero_bit(d->arch.vgic.allocated_spis,
+                              d->arch.vgic.nr_spis);
+    if ( spi >= d->arch.vgic.nr_spis )
+        goto unlock;
+
+    set_bit(spi, d->arch.vgic.allocated_spis);
+
+    virq = 32 + spi;
+
+unlock:
+    spin_unlock(&d->arch.vgic.lock);
+
+    return virq;
+}
+
+void vgic_free_virq(struct domain *d, unsigned int virq)
+{
+    unsigned int spi;
+
+    if ( is_hardware_domain(d) )
+        return;
+
+    if ( virq < 32 || virq >= vgic_num_irqs(d) )
+        return;
+
+    spi = virq - 32;
+
+    spin_lock(&d->arch.vgic.lock);
+    clear_bit(spi, d->arch.vgic.allocated_spis);
+    spin_unlock(&d->arch.vgic.lock);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 44727b2..a4039c1 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -94,6 +94,7 @@ struct arch_domain
         spinlock_t lock;
         int ctlr;
         int nr_spis; /* Number of SPIs */
+        unsigned long *allocated_spis; /* bitmap of SPIs allocated */
         struct vgic_irq_rank *shared_irqs;
         /*
          * SPIs are domain global, SGIs and PPIs are per-VCPU and stored in
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 84ae441..c5d8b2e 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -180,6 +180,11 @@ extern int vgic_to_sgi(struct vcpu *v, register_t sgir,
                        enum gic_sgi_mode irqmode, int virq,
                        unsigned long vcpu_mask);
 extern void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq);
+
+/* Allocate a VIRQ number for a guest SPI */
+extern int vgic_allocate_virq(struct domain *d, unsigned int irq);
+extern void vgic_free_virq(struct domain *d, unsigned int irq);
+
 #endif /* __ASM_ARM_VGIC_H__ */
 
 /*