From patchwork Fri Nov 4 11:13:27 2016
X-Patchwork-Submitter: Vijay Kilari
X-Patchwork-Id: 80834
From: vijay.kilari@gmail.com
To: marc.zyngier@arm.com, christoffer.dall@linaro.org, peter.maydell@linaro.org
Subject: [PATCH v8 1/7] arm/arm64: vgic: Implement support for userspace access
Date: Fri, 4 Nov 2016 16:43:27 +0530
Message-Id: <1478258013-6669-2-git-send-email-vijay.kilari@gmail.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1478258013-6669-1-git-send-email-vijay.kilari@gmail.com>
References: <1478258013-6669-1-git-send-email-vijay.kilari@gmail.com>
Cc: p.fedin@samsung.com, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, Vijaya Kumar K

From: Vijaya Kumar K

Userspace reads and writes of some registers, such as ISPENDR and
ICPENDR, require special handling compared to guest accesses of the
same registers. Refer to Documentation/virtual/kvm/devices/arm-vgic-v3.txt
for the handling of the ISPENDR and ICPENDR registers.

Add infrastructure to support both guest and userspace reads and writes
for the required registers. Also move vgic_uaccess() from vgic-mmio-v2.c
to vgic-mmio.c.

Signed-off-by: Vijaya Kumar K
---
 virt/kvm/arm/vgic/vgic-mmio-v2.c | 25 ----------
 virt/kvm/arm/vgic/vgic-mmio-v3.c | 98 ++++++++++++++++++++++++++++++++--------
 virt/kvm/arm/vgic/vgic-mmio.c    | 78 ++++++++++++++++++++++++++++----
 virt/kvm/arm/vgic/vgic-mmio.h    | 19 ++++++++
 4 files changed, 169 insertions(+), 51 deletions(-)

-- 
1.9.1

diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
index b44b359..0b32f40 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
@@ -406,31 +406,6 @@ int vgic_v2_has_attr_regs(struct kvm_device *dev, struct kvm_device_attr *attr)
 	return -ENXIO;
 }
 
-/*
- * When userland tries to access the VGIC register handlers, we need to
- * create a usable struct vgic_io_device to be passed to the handlers and we
- * have to set up a buffer similar to what would have happened if a guest MMIO
- * access occurred, including doing endian conversions on BE systems.
- */
-static int vgic_uaccess(struct kvm_vcpu *vcpu, struct vgic_io_device *dev,
-			bool is_write, int offset, u32 *val)
-{
-	unsigned int len = 4;
-	u8 buf[4];
-	int ret;
-
-	if (is_write) {
-		vgic_data_host_to_mmio_bus(buf, len, *val);
-		ret = kvm_io_gic_ops.write(vcpu, &dev->dev, offset, len, buf);
-	} else {
-		ret = kvm_io_gic_ops.read(vcpu, &dev->dev, offset, len, buf);
-		if (!ret)
-			*val = vgic_data_mmio_bus_to_host(buf, len);
-	}
-
-	return ret;
-}
-
 int vgic_v2_cpuif_uaccess(struct kvm_vcpu *vcpu, bool is_write,
 			  int offset, u32 *val)
 {
diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index 0d3c76a..ce2708d 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -209,6 +209,62 @@ static unsigned long vgic_mmio_read_v3_idregs(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static unsigned long vgic_v3_uaccess_read_pending(struct kvm_vcpu *vcpu,
+						  gpa_t addr, unsigned int len)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+	u32 value = 0;
+	int i;
+
+	/*
+	 * A level-triggered interrupt's pending state is latched in both
+	 * the "soft_pending" and "line_level" variables. Userspace will
+	 * save and restore soft_pending and line_level separately.
+	 * Refer to Documentation/virtual/kvm/devices/arm-vgic-v3.txt for
+	 * the handling of ISPENDR and ICPENDR.
+	 */
+	for (i = 0; i < len * 8; i++) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+
+		if (irq->config == VGIC_CONFIG_LEVEL && irq->soft_pending)
+			value |= (1U << i);
+		if (irq->config == VGIC_CONFIG_EDGE && irq->pending)
+			value |= (1U << i);
+
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+
+	return value;
+}
+
+static void vgic_v3_uaccess_write_pending(struct kvm_vcpu *vcpu,
+					  gpa_t addr, unsigned int len,
+					  unsigned long val)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+	int i;
+
+	for (i = 0; i < len * 8; i++) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+
+		spin_lock(&irq->irq_lock);
+		if (test_bit(i, &val)) {
+			irq->pending = true;
+			irq->soft_pending = true;
+			vgic_queue_irq_unlock(vcpu->kvm, irq);
+		} else {
+			irq->soft_pending = false;
+			if (irq->config == VGIC_CONFIG_EDGE ||
+			    (irq->config == VGIC_CONFIG_LEVEL &&
+			     !irq->line_level))
+				irq->pending = false;
+			spin_unlock(&irq->irq_lock);
+		}
+
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+}
+
 /* We want to avoid outer shareable. */
 u64 vgic_sanitise_shareability(u64 field)
 {
@@ -358,7 +414,7 @@ static void vgic_mmio_write_pendbase(struct kvm_vcpu *vcpu,
  * We take some special care here to fix the calculation of the register
  * offset.
  */
-#define REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(off, rd, wr, bpi, acc) \
+#define REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(off, rd, wr, ur, uw, bpi, acc) \
 	{ \
 		.reg_offset = off, \
 		.bits_per_irq = bpi, \
@@ -373,6 +429,8 @@ static void vgic_mmio_write_pendbase(struct kvm_vcpu *vcpu,
 		.access_flags = acc, \
 		.read = rd, \
 		.write = wr, \
+		.uaccess_read = ur, \
+		.uaccess_write = uw, \
 	}
 
 static const struct vgic_register_region vgic_v3_dist_registers[] = {
@@ -380,40 +438,42 @@ static void vgic_mmio_write_pendbase(struct kvm_vcpu *vcpu,
 		vgic_mmio_read_v3_misc, vgic_mmio_write_v3_misc, 16,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IGROUPR,
-		vgic_mmio_read_rao, vgic_mmio_write_wi, 1,
+		vgic_mmio_read_rao, vgic_mmio_write_wi, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISENABLER,
-		vgic_mmio_read_enable, vgic_mmio_write_senable, 1,
+		vgic_mmio_read_enable, vgic_mmio_write_senable, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICENABLER,
-		vgic_mmio_read_enable, vgic_mmio_write_cenable, 1,
+		vgic_mmio_read_enable, vgic_mmio_write_cenable, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISPENDR,
-		vgic_mmio_read_pending, vgic_mmio_write_spending, 1,
+		vgic_mmio_read_pending, vgic_mmio_write_spending,
+		vgic_v3_uaccess_read_pending, vgic_v3_uaccess_write_pending, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICPENDR,
-		vgic_mmio_read_pending, vgic_mmio_write_cpending, 1,
+		vgic_mmio_read_pending, vgic_mmio_write_cpending,
+		vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISACTIVER,
-		vgic_mmio_read_active, vgic_mmio_write_sactive, 1,
+		vgic_mmio_read_active, vgic_mmio_write_sactive, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICACTIVER,
-		vgic_mmio_read_active, vgic_mmio_write_cactive, 1,
+		vgic_mmio_read_active, vgic_mmio_write_cactive, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IPRIORITYR,
-		vgic_mmio_read_priority, vgic_mmio_write_priority, 8,
-		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
+		vgic_mmio_read_priority, vgic_mmio_write_priority, NULL, NULL,
+		8, VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ITARGETSR,
-		vgic_mmio_read_raz, vgic_mmio_write_wi, 8,
+		vgic_mmio_read_raz, vgic_mmio_write_wi, NULL, NULL, 8,
 		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICFGR,
-		vgic_mmio_read_config, vgic_mmio_write_config, 2,
+		vgic_mmio_read_config, vgic_mmio_write_config, NULL, NULL, 2,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IGRPMODR,
-		vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+		vgic_mmio_read_raz, vgic_mmio_write_wi, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IROUTER,
-		vgic_mmio_read_irouter, vgic_mmio_write_irouter, 64,
+		vgic_mmio_read_irouter, vgic_mmio_write_irouter, NULL, NULL, 64,
 		VGIC_ACCESS_64bit | VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_LENGTH(GICD_IDREGS,
 		vgic_mmio_read_v3_idregs, vgic_mmio_write_wi, 48,
@@ -451,11 +511,13 @@ static void vgic_mmio_write_pendbase(struct kvm_vcpu *vcpu,
 	REGISTER_DESC_WITH_LENGTH(GICR_ICENABLER0,
 		vgic_mmio_read_enable, vgic_mmio_write_cenable, 4,
 		VGIC_ACCESS_32bit),
-	REGISTER_DESC_WITH_LENGTH(GICR_ISPENDR0,
-		vgic_mmio_read_pending, vgic_mmio_write_spending, 4,
+	REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_ISPENDR0,
+		vgic_mmio_read_pending, vgic_mmio_write_spending,
+		vgic_v3_uaccess_read_pending, vgic_v3_uaccess_write_pending, 4,
 		VGIC_ACCESS_32bit),
-	REGISTER_DESC_WITH_LENGTH(GICR_ICPENDR0,
-		vgic_mmio_read_pending, vgic_mmio_write_cpending, 4,
+	REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_ICPENDR0,
+		vgic_mmio_read_pending, vgic_mmio_write_cpending,
+		vgic_mmio_read_raz, vgic_mmio_write_wi, 4,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_LENGTH(GICR_ISACTIVER0,
 		vgic_mmio_read_active, vgic_mmio_write_sactive, 4,
diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index e18b30d..31f85df 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -468,6 +468,73 @@ static bool check_region(const struct vgic_register_region *region,
 	return false;
 }
 
+static const struct vgic_register_region *
+vgic_get_mmio_region(struct vgic_io_device *iodev, gpa_t addr, int len)
+{
+	const struct vgic_register_region *region;
+
+	region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions,
+				       addr - iodev->base_addr);
+	if (!region || !check_region(region, addr, len))
+		return NULL;
+
+	return region;
+}
+
+static int vgic_uaccess_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
+			     gpa_t addr, u32 *val)
+{
+	struct vgic_io_device *iodev = kvm_to_vgic_iodev(dev);
+	const struct vgic_register_region *region;
+	struct kvm_vcpu *r_vcpu;
+
+	region = vgic_get_mmio_region(iodev, addr, sizeof(u32));
+	if (!region) {
+		*val = 0;
+		return 0;
+	}
+
+	r_vcpu = iodev->redist_vcpu ? iodev->redist_vcpu : vcpu;
+	if (region->uaccess_read)
+		*val = region->uaccess_read(r_vcpu, addr, sizeof(u32));
+	else
+		*val = region->read(r_vcpu, addr, sizeof(u32));
+
+	return 0;
+}
+
+static int vgic_uaccess_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
+			      gpa_t addr, const u32 *val)
+{
+	struct vgic_io_device *iodev = kvm_to_vgic_iodev(dev);
+	const struct vgic_register_region *region;
+	struct kvm_vcpu *r_vcpu;
+
+	region = vgic_get_mmio_region(iodev, addr, sizeof(u32));
+	if (!region)
+		return 0;
+
+	r_vcpu = iodev->redist_vcpu ? iodev->redist_vcpu : vcpu;
+	if (region->uaccess_write)
+		region->uaccess_write(r_vcpu, addr, sizeof(u32), *val);
+	else
+		region->write(r_vcpu, addr, sizeof(u32), *val);
+
+	return 0;
+}
+
+/*
+ * Userland access to VGIC registers.
+ */
+int vgic_uaccess(struct kvm_vcpu *vcpu, struct vgic_io_device *dev,
+		 bool is_write, int offset, u32 *val)
+{
+	if (is_write)
+		return vgic_uaccess_write(vcpu, &dev->dev, offset, val);
+	else
+		return vgic_uaccess_read(vcpu, &dev->dev, offset, val);
+}
+
 static int dispatch_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
 			      gpa_t addr, int len, void *val)
 {
@@ -475,9 +542,8 @@ static int dispatch_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
 	const struct vgic_register_region *region;
 	unsigned long data = 0;
 
-	region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions,
-				       addr - iodev->base_addr);
-	if (!region || !check_region(region, addr, len)) {
+	region = vgic_get_mmio_region(iodev, addr, len);
+	if (!region) {
 		memset(val, 0, len);
 		return 0;
 	}
@@ -508,14 +574,10 @@ static int dispatch_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
 	const struct vgic_register_region *region;
 	unsigned long data = vgic_data_mmio_bus_to_host(val, len);
 
-	region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions,
-				       addr - iodev->base_addr);
+	region = vgic_get_mmio_region(iodev, addr, len);
 	if (!region)
 		return 0;
 
-	if (!check_region(region, addr, len))
-		return 0;
-
 	switch (iodev->iodev_type) {
 	case IODEV_CPUIF:
 		region->write(vcpu, addr, len, data);
diff --git a/virt/kvm/arm/vgic/vgic-mmio.h b/virt/kvm/arm/vgic/vgic-mmio.h
index 4c34d39..97e6df7 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.h
+++ b/virt/kvm/arm/vgic/vgic-mmio.h
@@ -34,6 +34,10 @@ struct vgic_register_region {
 			      gpa_t addr, unsigned int len,
 			      unsigned long val);
 	};
+	unsigned long (*uaccess_read)(struct kvm_vcpu *vcpu, gpa_t addr,
+				      unsigned int len);
+	void (*uaccess_write)(struct kvm_vcpu *vcpu, gpa_t addr,
+			      unsigned int len, unsigned long val);
 };
 
 extern struct kvm_io_device_ops kvm_io_gic_ops;
@@ -86,6 +90,18 @@ struct vgic_register_region {
 		.write = wr, \
 	}
 
+#define REGISTER_DESC_WITH_LENGTH_UACCESS(off, rd, wr, urd, uwr, length, acc) \
+	{ \
+		.reg_offset = off, \
+		.bits_per_irq = 0, \
+		.len = length, \
+		.access_flags = acc, \
+		.read = rd, \
+		.write = wr, \
+		.uaccess_read = urd, \
+		.uaccess_write = uwr, \
+	}
+
 int kvm_vgic_register_mmio_region(struct kvm *kvm, struct kvm_vcpu *vcpu,
 				  struct vgic_register_region *reg_desc,
 				  struct vgic_io_device *region,
@@ -158,6 +174,9 @@ void vgic_mmio_write_config(struct kvm_vcpu *vcpu, gpa_t addr,
 			    unsigned int len, unsigned long val);
 
+int vgic_uaccess(struct kvm_vcpu *vcpu, struct vgic_io_device *dev,
+		 bool is_write, int offset, u32 *val);
+
 unsigned int vgic_v2_init_dist_iodev(struct vgic_io_device *dev);
 
 unsigned int vgic_v3_init_dist_iodev(struct vgic_io_device *dev);
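
For reference, a minimal userspace sketch (not part of this patch) of how the
new uaccess handlers end up being reached: it reads or writes one 32-bit
distributor register through the KVM device-attr interface described in
Documentation/virtual/kvm/devices/arm-vgic-v3.txt. The helper name
vgic_v3_dist_reg() and the vgic_fd parameter are illustrative assumptions;
vgic_fd is taken to be the fd of a KVM_DEV_TYPE_ARM_VGIC_V3 device created
with KVM_CREATE_DEVICE on an arm64 host, and the attr encoding (MPIDR in
bits [63:32], register offset in bits [31:0]) follows that document.

#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define GICD_ISPENDR	0x0200	/* first set-pending register in the distributor */

/* Read or write one 32-bit distributor register via the vgic device fd. */
static int vgic_v3_dist_reg(int vgic_fd, uint64_t mpidr, uint32_t offset,
			    uint32_t *val, bool is_write)
{
	struct kvm_device_attr attr = {
		.group = KVM_DEV_ARM_VGIC_GRP_DIST_REGS,
		/* attr layout per arm-vgic-v3.txt: mpidr[63:32] | offset[31:0] */
		.attr  = (mpidr << 32) | offset,
		.addr  = (uint64_t)(unsigned long)val,
	};

	return ioctl(vgic_fd, is_write ? KVM_SET_DEVICE_ATTR : KVM_GET_DEVICE_ATTR,
		     &attr);
}

Reading the pending bits of INTIDs 32-63 would then be
vgic_v3_dist_reg(vgic_fd, 0, GICD_ISPENDR + 4, &val, false), which lands in
vgic_v3_uaccess_read_pending() through the uaccess_read hook added above;
writing the word back goes through vgic_v3_uaccess_write_pending() via
uaccess_write instead of the guest MMIO path.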