From patchwork Thu Jun 19 09:19:30 2014
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 32174
From: Marc Zyngier
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 kvm@vger.kernel.org
Cc: Catalin Marinas, Christoffer Dall
Subject: [PATCH v5 07/20] KVM: ARM: vgic: abstract access to the ELRSR bitmap
Date: Thu, 19 Jun 2014 10:19:30 +0100
Message-Id: <1403169583-13668-8-git-send-email-marc.zyngier@arm.com>
In-Reply-To: <1403169583-13668-1-git-send-email-marc.zyngier@arm.com>
References: <1403169583-13668-1-git-send-email-marc.zyngier@arm.com>

Move the GICH_ELRSR access to its own functions, and add them to the
vgic_ops structure.
Acked-by: Catalin Marinas
Reviewed-by: Christoffer Dall
Signed-off-by: Marc Zyngier
---
 include/kvm/arm_vgic.h |  2 ++
 virt/kvm/arm/vgic.c    | 38 +++++++++++++++++++++++++++++++++-----
 2 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 17bbe51..38864f5 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -84,6 +84,8 @@ struct vgic_lr {
 struct vgic_ops {
 	struct vgic_lr	(*get_lr)(const struct kvm_vcpu *, int);
 	void	(*set_lr)(struct kvm_vcpu *, int, struct vgic_lr);
+	void	(*sync_lr_elrsr)(struct kvm_vcpu *, int, struct vgic_lr);
+	u64	(*get_elrsr)(const struct kvm_vcpu *vcpu);
 };
 
 struct vgic_dist {
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 11408fe..8b73cd6 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1015,9 +1015,24 @@ static void vgic_v2_set_lr(struct kvm_vcpu *vcpu, int lr,
 	vcpu->arch.vgic_cpu.vgic_v2.vgic_lr[lr] = lr_val;
 }
 
+static void vgic_v2_sync_lr_elrsr(struct kvm_vcpu *vcpu, int lr,
+				  struct vgic_lr lr_desc)
+{
+	if (!(lr_desc.state & LR_STATE_MASK))
+		set_bit(lr, (unsigned long *)vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr);
+}
+
+static u64 vgic_v2_get_elrsr(const struct kvm_vcpu *vcpu)
+{
+	const u32 *elrsr = vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr;
+	return *(u64 *)elrsr;
+}
+
 static const struct vgic_ops vgic_ops = {
 	.get_lr			= vgic_v2_get_lr,
 	.set_lr			= vgic_v2_set_lr,
+	.sync_lr_elrsr		= vgic_v2_sync_lr_elrsr,
+	.get_elrsr		= vgic_v2_get_elrsr,
 };
 
 static struct vgic_lr vgic_get_lr(const struct kvm_vcpu *vcpu, int lr)
@@ -1031,6 +1046,17 @@ static void vgic_set_lr(struct kvm_vcpu *vcpu, int lr,
 	vgic_ops.set_lr(vcpu, lr, vlr);
 }
 
+static void vgic_sync_lr_elrsr(struct kvm_vcpu *vcpu, int lr,
+			       struct vgic_lr vlr)
+{
+	vgic_ops.sync_lr_elrsr(vcpu, lr, vlr);
+}
+
+static inline u64 vgic_get_elrsr(struct kvm_vcpu *vcpu)
+{
+	return vgic_ops.get_elrsr(vcpu);
+}
+
 static void vgic_retire_lr(int lr_nr, int irq, struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
@@ -1260,7 +1286,7 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 			 * Despite being EOIed, the LR may not have
 			 * been marked as empty.
 			 */
-			set_bit(lr, (unsigned long *)vgic_cpu->vgic_v2.vgic_elrsr);
+			vgic_sync_lr_elrsr(vcpu, lr, vlr);
 		}
 	}
 
@@ -1278,14 +1304,17 @@ static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	u64 elrsr;
+	unsigned long *elrsr_ptr;
 	int lr, pending;
 	bool level_pending;
 
 	level_pending = vgic_process_maintenance(vcpu);
+	elrsr = vgic_get_elrsr(vcpu);
+	elrsr_ptr = (unsigned long *)&elrsr;
 
 	/* Clear mappings for empty LRs */
-	for_each_set_bit(lr, (unsigned long *)vgic_cpu->vgic_v2.vgic_elrsr,
-			 vgic_cpu->nr_lr) {
+	for_each_set_bit(lr, elrsr_ptr, vgic_cpu->nr_lr) {
 		struct vgic_lr vlr;
 
 		if (!test_and_clear_bit(lr, vgic_cpu->lr_used))
@@ -1298,8 +1327,7 @@ static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 	}
 
 	/* Check if we still have something up our sleeve... */
-	pending = find_first_zero_bit((unsigned long *)vgic_cpu->vgic_v2.vgic_elrsr,
-				      vgic_cpu->nr_lr);
+	pending = find_first_zero_bit(elrsr_ptr, vgic_cpu->nr_lr);
	if (level_pending || pending < vgic_cpu->nr_lr)
 		set_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
 }
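
[Editor's note] For readers unfamiliar with the pattern, the hunks above funnel
every ELRSR access through the vgic_ops function pointers, so that the common
vgic code no longer reaches into the GICv2-specific vgic_v2.vgic_elrsr pair
directly. The following stand-alone sketch shows the same ops-table indirection
in miniature. It is plain user-space C, not kernel code; every name in it
(demo_v2_state, demo_ops, demo_v2_get_elrsr, ...) is made up for illustration
only.

/*
 * Minimal sketch of the ops-table indirection: generic code sees only a
 * u64 bitmap through an ops struct, never the backing pair of u32 words.
 */
#include <stdint.h>
#include <stdio.h>

/* Hardware-flavoured state: "empty LR" status kept as two 32-bit words. */
struct demo_v2_state {
	uint32_t elrsr[2];
};

/* Ops table: the only interface generic code is allowed to use. */
struct demo_ops {
	uint64_t (*get_elrsr)(const struct demo_v2_state *s);
	void     (*set_lr_empty)(struct demo_v2_state *s, int lr);
};

static uint64_t demo_v2_get_elrsr(const struct demo_v2_state *s)
{
	/* Pack the two 32-bit words into a single 64-bit bitmap. */
	return (uint64_t)s->elrsr[0] | ((uint64_t)s->elrsr[1] << 32);
}

static void demo_v2_set_lr_empty(struct demo_v2_state *s, int lr)
{
	/* Mark list register 'lr' as empty in the backing words. */
	s->elrsr[lr / 32] |= 1u << (lr % 32);
}

static const struct demo_ops demo_v2_ops = {
	.get_elrsr    = demo_v2_get_elrsr,
	.set_lr_empty = demo_v2_set_lr_empty,
};

int main(void)
{
	struct demo_v2_state s = { .elrsr = { 0, 0 } };

	/* Generic code goes through the ops table only. */
	demo_v2_ops.set_lr_empty(&s, 3);
	demo_v2_ops.set_lr_empty(&s, 33);
	printf("elrsr bitmap: 0x%016llx\n",
	       (unsigned long long)demo_v2_ops.get_elrsr(&s));
	return 0;
}

One difference worth noting: vgic_v2_get_elrsr in the patch reinterprets the
two u32 words as a u64 in place, whereas the sketch packs them with explicit
shifts, which keeps the demo independent of the in-memory layout.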