From patchwork Wed Feb 24 05:08:34 2016
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 62776
From: Shannon Zhao
Subject: [PATCH v13 14/20] KVM: ARM64: Add access handler for PMUSERENR register
Date: Wed, 24 Feb 2016 13:08:34 +0800
Message-ID: <1456290520-10012-15-git-send-email-zhaoshenglong@huawei.com>
In-Reply-To: <1456290520-10012-1-git-send-email-zhaoshenglong@huawei.com>
References: <1456290520-10012-1-git-send-email-zhaoshenglong@huawei.com>
X-Mailer: git-send-email 1.9.0.msysgit.0
Cc: wei@redhat.com, hangaohuai@huawei.com, kvm@vger.kernel.org,
    will.deacon@arm.com, peter.huangpeng@huawei.com, shannon.zhao@linaro.org,
    zhaoshenglong@huawei.com, linux-arm-kernel@lists.infradead.org,
    cov@codeaurora.org
From: Shannon Zhao

This register resets as unknown in 64bit mode while it resets as zero
in 32bit mode. Here we choose to reset it as zero for consistency.

PMUSERENR_EL0 holds some bits which decide whether PMU registers can
be accessed from EL0. Add some check helpers to handle the access from
EL0.

When these bits are zero, only reading PMUSERENR traps to EL2; writing
PMUSERENR or reading/writing other PMU registers traps to EL1 instead
of EL2 when HCR.TGE==0. With the current KVM configuration
(HCR.TGE==0) there is therefore no way to receive these traps at EL2.
So we write 0xf to the physical PMUSERENR register on VM entry, which
makes all PMU accesses from EL0 trap to EL2. Within the register
access handlers we then check the value of the guest's PMUSERENR
register to decide whether the access is allowed; if it is not, we
return false so that an UND is injected into the guest.

Signed-off-by: Shannon Zhao
---
 arch/arm64/include/asm/kvm_host.h   |   1 +
 arch/arm64/include/asm/perf_event.h |   9 ++++
 arch/arm64/kvm/hyp/hyp.h            |   1 +
 arch/arm64/kvm/hyp/switch.c         |   3 ++
 arch/arm64/kvm/sys_regs.c           | 101 ++++++++++++++++++++++++++++++++++--
 5 files changed, 110 insertions(+), 5 deletions(-)

-- 
2.0.4
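(Editorial illustration, not part of the patch: the kind of guest EL0 access
this series is about. The helper name below is made up for the example. With
the guest's PMUSERENR_EL0 EN/CR bits clear, such a read now traps to EL2,
since the physical PMUSERENR_EL0 is programmed with 0xf and PMU accesses are
trapped, and access_pmu_evcntr() rejects it, so the guest observes an
undefined instruction exception.)

/*
 * Illustration only, not part of this patch: a guest EL0 (user space)
 * read of the cycle counter.  With the guest's PMUSERENR_EL0.{EN,CR}
 * bits clear, this access traps to EL2 and KVM injects an undefined
 * instruction exception into the guest instead of returning a count.
 */
static inline unsigned long guest_el0_read_cycle_counter(void)
{
        unsigned long cval;

        asm volatile("mrs %0, pmccntr_el0" : "=r" (cval));
        return cval;
}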
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index de1f82d..7b61675 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -128,6 +128,7 @@ enum vcpu_sysreg {
         PMINTENSET_EL1, /* Interrupt Enable Set Register */
         PMOVSSET_EL0,   /* Overflow Flag Status Set Register */
         PMSWINC_EL0,    /* Software Increment Register */
+        PMUSERENR_EL0,  /* User Enable Register */
 
         /* 32bit specific registers. Keep them at the end of the range */
         DACR32_EL2,     /* Domain Access Control Register */
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index c3f5937..76e1931 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -56,6 +56,15 @@
 #define ARMV8_PMU_EXCLUDE_EL0   (1 << 30)
 #define ARMV8_PMU_INCLUDE_EL2   (1 << 27)
 
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK  0xf      /* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN    (1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW    (1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR    (1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER    (1 << 3) /* Event counter can be read at EL0 */
+
 #ifdef CONFIG_PERF_EVENTS
 struct pt_regs;
 extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
index fb27517..c65f8c9 100644
--- a/arch/arm64/kvm/hyp/hyp.h
+++ b/arch/arm64/kvm/hyp/hyp.h
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include <asm/perf_event.h>
 
 #define __hyp_text __section(.hyp.text) notrace
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index f0e7bdf..dfe111d 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -41,6 +41,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
         val |= CPTR_EL2_TTA | CPTR_EL2_TFP;
         write_sysreg(val, cptr_el2);
 
+        /* Make sure we trap PMU access from EL0 to EL2 */
+        write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
         write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
 
@@ -49,6 +51,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
         write_sysreg(HCR_RW, hcr_el2);
         write_sysreg(0, hstr_el2);
         write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
+        write_sysreg(0, pmuserenr_el0);
         write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 12f36ef..fe15c23 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -453,6 +453,37 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
         vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 }
 
+static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+{
+        u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+        return !((reg & ARMV8_PMU_USERENR_EN) || vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
+{
+        u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+        return !((reg & (ARMV8_PMU_USERENR_SW | ARMV8_PMU_USERENR_EN))
+                 || vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
+{
+        u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+        return !((reg & (ARMV8_PMU_USERENR_CR | ARMV8_PMU_USERENR_EN))
+                 || vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
+{
+        u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+        return !((reg & (ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN))
+                 || vcpu_mode_priv(vcpu));
+}
+
 static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
                         const struct sys_reg_desc *r)
 {
@@ -461,6 +492,9 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
         if (!kvm_arm_pmu_v3_ready(vcpu))
                 return trap_raz_wi(vcpu, p, r);
 
+        if (pmu_access_el0_disabled(vcpu))
+                return false;
+
         if (p->is_write) {
                 /* Only update writeable bits of PMCR */
                 val = vcpu_sys_reg(vcpu, PMCR_EL0);
@@ -484,6 +518,9 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
         if (!kvm_arm_pmu_v3_ready(vcpu))
                 return trap_raz_wi(vcpu, p, r);
 
+        if (pmu_access_event_counter_el0_disabled(vcpu))
+                return false;
+
         if (p->is_write)
                 vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
         else
@@ -504,6 +541,9 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
         BUG_ON(p->is_write);
 
+        if (pmu_access_el0_disabled(vcpu))
+                return false;
+
         if (!(p->Op2 & 1))
                 asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
         else
@@ -538,16 +578,25 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
         if (r->CRn == 9 && r->CRm == 13) {
                 if (r->Op2 == 2) {
                         /* PMXEVCNTR_EL0 */
+                        if (pmu_access_event_counter_el0_disabled(vcpu))
+                                return false;
+
                         idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
                               & ARMV8_PMU_COUNTER_MASK;
                 } else if (r->Op2 == 0) {
                         /* PMCCNTR_EL0 */
+                        if (pmu_access_cycle_counter_el0_disabled(vcpu))
+                                return false;
+
                         idx = ARMV8_PMU_CYCLE_IDX;
                 } else {
                         BUG();
                 }
         } else if (r->CRn == 14 && (r->CRm & 12) == 8) {
                 /* PMEVCNTRn_EL0 */
+                if (pmu_access_event_counter_el0_disabled(vcpu))
+                        return false;
+
                 idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
         } else {
                 BUG();
@@ -556,10 +605,14 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
         if (!pmu_counter_idx_valid(vcpu, idx))
                 return false;
 
-        if (p->is_write)
+        if (p->is_write) {
+                if (pmu_access_el0_disabled(vcpu))
+                        return false;
+
                 kvm_pmu_set_counter_value(vcpu, idx, p->regval);
-        else
+        } else {
                 p->regval = kvm_pmu_get_counter_value(vcpu, idx);
+        }
 
         return true;
 }
@@ -572,6 +625,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
         if (!kvm_arm_pmu_v3_ready(vcpu))
                 return trap_raz_wi(vcpu, p, r);
 
+        if (pmu_access_el0_disabled(vcpu))
+                return false;
+
         if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
                 /* PMXEVTYPER_EL0 */
                 idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK;
@@ -608,6 +664,9 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
         if (!kvm_arm_pmu_v3_ready(vcpu))
                 return trap_raz_wi(vcpu, p, r);
 
+        if (pmu_access_el0_disabled(vcpu))
+                return false;
+
         mask = kvm_pmu_valid_counter_mask(vcpu);
         if (p->is_write) {
                 val = p->regval & mask;
@@ -635,6 +694,9 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
         if (!kvm_arm_pmu_v3_ready(vcpu))
                 return trap_raz_wi(vcpu, p, r);
 
+        if (!vcpu_mode_priv(vcpu))
+                return false;
+
         if (p->is_write) {
                 u64 val = p->regval & mask;
 
@@ -659,6 +721,9 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
         if (!kvm_arm_pmu_v3_ready(vcpu))
                 return trap_raz_wi(vcpu, p, r);
 
+        if (pmu_access_el0_disabled(vcpu))
+                return false;
+
         if (p->is_write) {
                 if (r->CRm & 0x2)
                         /* accessing PMOVSSET_EL0 */
@@ -681,6 +746,9 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
         if (!kvm_arm_pmu_v3_ready(vcpu))
                 return trap_raz_wi(vcpu, p, r);
 
+        if (pmu_write_swinc_el0_disabled(vcpu))
+                return false;
+
         if (p->is_write) {
                 mask = kvm_pmu_valid_counter_mask(vcpu);
                 kvm_pmu_software_increment(vcpu, p->regval & mask);
@@ -690,6 +758,26 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
         return false;
 }
 
+static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+                             const struct sys_reg_desc *r)
+{
+        if (!kvm_arm_pmu_v3_ready(vcpu))
+                return trap_raz_wi(vcpu, p, r);
+
+        if (p->is_write) {
+                if (!vcpu_mode_priv(vcpu))
+                        return false;
+
+                vcpu_sys_reg(vcpu, PMUSERENR_EL0) = p->regval
+                                                    & ARMV8_PMU_USERENR_MASK;
+        } else {
+                p->regval = vcpu_sys_reg(vcpu, PMUSERENR_EL0)
+                            & ARMV8_PMU_USERENR_MASK;
+        }
+
+        return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)                                      \
         /* DBGBVRn_EL1 */                                               \
@@ -919,9 +1007,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
         /* PMXEVCNTR_EL0 */
         { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
           access_pmu_evcntr },
-        /* PMUSERENR_EL0 */
+        /* PMUSERENR_EL0
+         * This register resets as unknown in 64bit mode while it resets as zero
+         * in 32bit mode. Here we choose to reset it as zero for consistency.
+         */
         { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
-          trap_raz_wi },
+          access_pmuserenr, reset_val, PMUSERENR_EL0, 0 },
         /* PMOVSSET_EL0 */
         { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
           access_pmovs, reset_unknown, PMOVSSET_EL0 },
@@ -1246,7 +1337,7 @@ static const struct sys_reg_desc cp15_regs[] = {
         { Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_evcntr },
         { Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper },
         { Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr },
-        { Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
+        { Op1( 0), CRn( 9), CRm(14), Op2( 0), access_pmuserenr },
         { Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten },
         { Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten },
         { Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmovs },
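(Editorial illustration, not part of the patch: the counterpart on the guest
EL1 side. A privileged guest write to PMUSERENR_EL0 also traps to EL2;
access_pmuserenr() accepts it because vcpu_mode_priv() is true and stores the
masked value in the vcpu's virtual PMUSERENR_EL0, against which the EL0 checks
above are then evaluated. The helper name is made up for the example, and the
ARMV8_PMU_USERENR_* constants are the bit values defined in this patch.)

/*
 * Illustration only, not part of this patch: guest EL1 code granting
 * its user space full PMU access.  The MSR traps to EL2, where
 * access_pmuserenr() permits the write (the vcpu is in a privileged
 * mode) and latches the value, masked with ARMV8_PMU_USERENR_MASK,
 * into the vcpu's virtual PMUSERENR_EL0.
 */
static inline void guest_el1_enable_el0_pmu_access(void)
{
        unsigned long val = ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR |
                            ARMV8_PMU_USERENR_SW | ARMV8_PMU_USERENR_EN;

        asm volatile("msr pmuserenr_el0, %0" : : "r" (val));
}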