From patchwork Thu Jul 31 15:00:39 2014
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 34664
From: Julien Grall
To: xen-devel@lists.xenproject.org
Date: Thu, 31 Jul 2014 16:00:39 +0100
Message-Id: <1406818852-31856-9-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1406818852-31856-1-git-send-email-julien.grall@linaro.org>
References: <1406818852-31856-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, tim@xen.org, Julien Grall, Ian Jackson,
    stefano.stabellini@citrix.com, Jan Beulich, Daniel De Graaf
Subject: [Xen-devel] [PATCH v2 08/21] xen/arm: Initialize the virtual GIC later

The virtual GIC may differ between guests (number of SPIs, emulated GIC
version, ...). This information may not be known when the domain is created
(for instance, in the case of migration). Therefore, move the VGIC
initialization into a separate function.

Introduce a new DOMCTL for ARM to configure the domain. It has to be called
before setting the maximum number of VCPUs for the guest. For the moment,
only the number of SPIs is configured.
New information could be added later.

Signed-off-by: Julien Grall
Cc: Ian Jackson
Cc: Daniel De Graaf
Cc: Jan Beulich
Acked-by: Daniel De Graaf
---
    Changes in v2:
        - Patch added
---
 tools/libxc/xc_domain.c             | 12 ++++++++++++
 tools/libxc/xenctrl.h               |  4 ++++
 tools/libxl/libxl_arch.h            |  3 +++
 tools/libxl/libxl_arm.c             | 19 +++++++++++++++++++
 tools/libxl/libxl_dom.c             |  4 ++++
 tools/libxl/libxl_x86.c             |  7 +++++++
 xen/arch/arm/domain.c               | 28 ++++++++++++++++++++++------
 xen/arch/arm/domctl.c               | 11 +++++++++++
 xen/arch/arm/setup.c                | 10 ++++++++--
 xen/arch/arm/vgic.c                 | 10 +++++-----
 xen/include/asm-arm/domain.h        |  6 ++++++
 xen/include/asm-arm/vgic.h          |  4 +++-
 xen/include/public/domctl.h         | 14 ++++++++++++++
 xen/xsm/flask/hooks.c               |  3 +++
 xen/xsm/flask/policy/access_vectors |  2 ++
 15 files changed, 123 insertions(+), 14 deletions(-)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 27fe3b6..1348905 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,18 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+#if defined(__arm__) || defined(__aarch64__)
+int xc_domain_configure(xc_interface *xch, uint32_t domid,
+                        uint32_t nr_spis)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_configure_domain;
+    domctl.domain = (domid_t)domid;
+    domctl.u.configuredomain.nr_spis = nr_spis;
+    return do_domctl(xch, &domctl);
+}
+#endif
+
 int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
                          xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
 {
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 5beb846..cdda4e5 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -482,6 +482,10 @@ int xc_domain_create(xc_interface *xch,
                      uint32_t flags,
                      uint32_t *pdomid);
 
+#if defined(__arm__) || defined(__aarch64__)
+int xc_domain_configure(xc_interface *xch, uint32_t domid,
+                        uint32_t nr_spis);
+#endif
 
 /* Functions to produce a dump of a given domain
  *  xc_domain_dumpcore - produces a dump to a specified file
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index d3bc136..454f8db 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -16,6 +16,9 @@
 #define LIBXL_ARCH_H
 
 /* arch specific internal domain creation function */
+int libxl__arch_domain_create_pre(libxl__gc *gc, libxl_domain_config *d_config,
+                                  libxl__domain_build_state *state,
+                                  uint32_t domid);
 int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid);
 
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index e19e2f4..b0491c3 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -23,6 +23,25 @@
 #define DT_IRQ_TYPE_LEVEL_HIGH 0x00000004
 #define DT_IRQ_TYPE_LEVEL_LOW  0x00000008
 
+int libxl__arch_domain_create_pre(libxl__gc *gc, libxl_domain_config *d_config,
+                                  libxl__domain_build_state *state,
+                                  uint32_t domid)
+{
+    uint32_t nr_spis = 0;
+
+    nr_spis += d_config->b_info.num_irqs;
+
+    LOG(DEBUG, "Allocate %u SPIs\n", nr_spis);
+
+    if (xc_domain_configure(CTX->xch, domid, nr_spis) != 0) {
+        LOG(ERROR, "Couldn't configure the domain");
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+
 int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid)
 {
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index c944804..a0e4e34 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -235,6 +235,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     char *xs_domid, *con_domid;
     int rc;
 
+    rc = libxl__arch_domain_create_pre(gc, d_config, state, domid);
+    if (rc != 0)
+        return rc;
+
     if (xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus) != 0) {
         LOG(ERROR, "Couldn't set max vcpu count");
         return ERROR_FAIL;
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 7589060..0abc1aa 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -244,6 +244,13 @@ static int libxl__e820_alloc(libxl__gc *gc, uint32_t domid,
     return 0;
 }
 
+int libxl__arch_domain_create_pre(libxl__gc *gc, libxl_domain_config *d_config,
+                                  libxl__domain_build_state *state,
+                                  uint32_t domid)
+{
+    return 0;
+}
+
 int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid)
 {
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index aa4a8d7..fc97f8c 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -472,6 +472,13 @@ int vcpu_initialise(struct vcpu *v)
     if ( is_idle_vcpu(v) )
         return rc;
 
+    /* We can't initialize the VCPU if the VGIC has not been correctly
+     * initialized.
+     */
+    rc = -ENXIO;
+    if ( !domain_vgic_is_initialized(v->domain) )
+        goto fail;
+
     v->arch.sctlr = SCTLR_GUEST_INIT;
 
     /*
@@ -534,12 +541,6 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
     if ( (rc = p2m_alloc_table(d)) != 0 )
         goto fail;
 
-    if ( (rc = gicv_setup(d)) != 0 )
-        goto fail;
-
-    if ( (rc = domain_vgic_init(d)) != 0 )
-        goto fail;
-
     if ( (rc = domain_vtimer_init(d)) != 0 )
         goto fail;
 
@@ -580,6 +581,21 @@ void arch_domain_destroy(struct domain *d)
     free_xenheap_page(d->shared_info);
 }
 
+int domain_configure_vgic(struct domain *d, unsigned int nr_spis)
+{
+    int rc;
+
+    if ( (rc = gicv_setup(d)) != 0 )
+        return rc;
+
+    if ( (rc = domain_vgic_init(d, nr_spis)) != 0 )
+        return rc;
+
+    d->arch.vgic.initialized = 1;
+
+    return 0;
+}
+
 static int is_guest_pv32_psr(uint32_t psr)
 {
     switch (psr & PSR_MODE_MASK)
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 45974e7..bab92b2 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -30,6 +30,17 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         return p2m_cache_flush(d, s, e);
     }
 
+    case XEN_DOMCTL_configure_domain:
+    {
+        if ( domain_vgic_is_initialized(d) )
+            return -EBUSY;
+
+        /* Sanity check on the number of SPIs */
+        if ( domctl->u.configuredomain.nr_spis > (gic_number_lines() - 32) )
+            return -EINVAL;
+
+        return domain_configure_vgic(d, domctl->u.configuredomain.nr_spis);
+    }
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 446b4dc..2be7dd1 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -840,8 +840,14 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* Create initial domain 0. */
     dom0 = domain_create(0, 0, 0);
-    if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
-        panic("Error creating domain 0");
+    if ( IS_ERR(dom0) )
+        panic("Error creating domain 0");
+
+    if ( domain_configure_vgic(dom0, gic_number_lines() - 32) )
+        panic("Error configuring the vGIC for domain 0");
+
+    if ( alloc_dom0_vcpu0(dom0) == NULL )
+        panic("Error creating vcpu0 for domain 0");
 
     dom0->is_privileged = 1;
     dom0->target = NULL;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 17cde7a..a037ecc 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -60,16 +60,16 @@ static void vgic_init_pending_irq(struct pending_irq *p, unsigned virq)
     p->irq = virq;
 }
 
-int domain_vgic_init(struct domain *d)
+int domain_vgic_init(struct domain *d, unsigned int nr_spis)
 {
     int i;
 
     d->arch.vgic.ctlr = 0;
 
-    if ( is_hardware_domain(d) )
-        d->arch.vgic.nr_spis = gic_number_lines() - 32;
-    else
-        d->arch.vgic.nr_spis = 0; /* We don't need SPIs for the guest */
+    /* The number of SPIs has to be aligned to 32; see the
+     * GICD_TYPER.ITLinesNumber definition.
+     */
+    d->arch.vgic.nr_spis = ROUNDUP(nr_spis, 32);
 
     switch ( gic_hw_version() )
     {
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 5719fe5..44727b2 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -76,6 +76,10 @@ struct arch_domain
     } virt_timer_base;
 
     struct {
+        /* The VGIC initialization is deferred to let the toolstack
+         * configure the emulated GIC (such as the number of SPIs, version...)
+         */
+        bool_t initialized;
         /* GIC HW version specific vGIC driver handler */
         const struct vgic_ops *handler;
         /*
@@ -241,6 +245,8 @@ struct arch_vcpu
 void vcpu_show_execution_state(struct vcpu *);
 void vcpu_show_registers(const struct vcpu *);
 
+int domain_configure_vgic(struct domain *d, unsigned int nr_spis);
+
 #endif /* __ASM_DOMAIN_H__ */
 
 /*
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 5ddc681..84ae441 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -148,6 +148,8 @@ static inline void vgic_byte_write(uint32_t *reg, uint32_t var, int offset)
     *reg |= var;
 }
 
+#define domain_vgic_is_initialized(d) ((d)->arch.vgic.initialized)
+
 enum gic_sgi_mode;
 
 /*
@@ -158,7 +160,7 @@ enum gic_sgi_mode;
 
 #define vgic_num_irqs(d) ((d)->arch.vgic.nr_spis + 32)
 
-extern int domain_vgic_init(struct domain *d);
+extern int domain_vgic_init(struct domain *d, unsigned int nr_spis);
 extern void domain_vgic_free(struct domain *d);
 extern int vcpu_vgic_init(struct vcpu *v);
 extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq);
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 5b11bbf..b5f2ed7 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -67,6 +67,16 @@ struct xen_domctl_createdomain {
 typedef struct xen_domctl_createdomain xen_domctl_createdomain_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_createdomain_t);
 
+#if defined(__arm__) || defined(__aarch64__)
+/* XEN_DOMCTL_configure_domain */
+struct xen_domctl_configuredomain {
+    /* IN parameters */
+    uint32_t nr_spis;
+};
+typedef struct xen_domctl_configuredomain xen_domctl_configuredomain_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_configuredomain_t);
+#endif
+
 /* XEN_DOMCTL_getdomaininfo */
 struct xen_domctl_getdomaininfo {
     /* OUT variables. */
@@ -1008,6 +1018,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_get_vcpu_msrs                 72
 #define XEN_DOMCTL_set_vcpu_msrs                 73
+#define XEN_DOMCTL_configure_domain              74
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1016,6 +1027,9 @@ struct xen_domctl {
     domid_t domain;
     union {
         struct xen_domctl_createdomain      createdomain;
+#if defined(__arm__) || defined(__aarch64__)
+        struct xen_domctl_configuredomain   configuredomain;
+#endif
         struct xen_domctl_getdomaininfo     getdomaininfo;
         struct xen_domctl_getmemlist        getmemlist;
         struct xen_domctl_getpageframeinfo  getpageframeinfo;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f2f59ea..4759461 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -715,6 +715,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_cacheflush:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
 
+    case XEN_DOMCTL_configure_domain:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CONFIGURE_DOMAIN);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 32371a9..33eec66 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -200,6 +200,8 @@ class domain2
     cacheflush
# Creation of the hardware domain when it is not dom0
     create_hardware_domain
+# XEN_DOMCTL_configure_domain
+    configure_domain
 }
 
# Similar to class domain, but primarily contains domctls related to HVM domains
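
[Editor's note] For readers following the toolstack side of this change, below is a
minimal, hypothetical libxc sketch (not part of the patch) of the ordering the commit
message describes for ARM guests: the new XEN_DOMCTL_configure_domain must be issued,
via xc_domain_configure(), before the VCPUs are set up, since vcpu_initialise() now
fails with -ENXIO while the VGIC is uninitialized. The function name build_arm_domain()
and its error handling are illustrative assumptions only.

/* Hypothetical toolstack snippet (illustration, not part of this patch). */
#include <stdio.h>
#include <xenctrl.h>

static int build_arm_domain(xc_interface *xch, uint32_t domid,
                            unsigned int max_vcpus, uint32_t nr_spis)
{
    /* Configure the emulated GIC first; vcpu_initialise() refuses to run
     * (-ENXIO) until domain_configure_vgic() has been called for the domain. */
    if ( xc_domain_configure(xch, domid, nr_spis) != 0 )
    {
        fprintf(stderr, "xc_domain_configure failed for dom%u\n", domid);
        return -1;
    }

    /* Only now is it safe to create the VCPUs. */
    if ( xc_domain_max_vcpus(xch, domid, max_vcpus) != 0 )
    {
        fprintf(stderr, "xc_domain_max_vcpus failed for dom%u\n", domid);
        return -1;
    }

    return 0;
}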
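
[Editor's note] On the hypervisor side, arch_do_domctl() rejects any request above
gic_number_lines() - 32, and domain_vgic_init() rounds the accepted value up to a
multiple of 32 because GICD_TYPER.ITLinesNumber expresses interrupt lines in blocks
of 32. The standalone sketch below only illustrates that arithmetic; the gic_lines
value and the locally re-stated ROUNDUP macro are assumptions for the example, not
Xen code.

/* Standalone illustration of the SPI sizing and the DOMCTL bound check. */
#include <stdio.h>

#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))   /* for power-of-two 'a' */

int main(void)
{
    unsigned int gic_lines = 160;               /* example: 32 local IRQs + 128 SPIs */
    unsigned int requested[] = { 0, 1, 33, 128, 200 };
    unsigned int i;

    for ( i = 0; i < sizeof(requested) / sizeof(requested[0]); i++ )
    {
        unsigned int nr_spis = requested[i];

        /* Same sanity check as XEN_DOMCTL_configure_domain. */
        if ( nr_spis > gic_lines - 32 )
        {
            printf("nr_spis=%u rejected (-EINVAL)\n", nr_spis);
            continue;
        }

        /* Same rounding as domain_vgic_init(). */
        printf("nr_spis=%u -> vgic.nr_spis=%u\n", nr_spis, ROUNDUP(nr_spis, 32));
    }

    return 0;
}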