From patchwork Thu Sep 24 22:31:12 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 54140
From: Shannon Zhao <shannon.zhao@linaro.org>
To: kvmarm@lists.cs.columbia.edu
Cc: wei@redhat.com, kvm@vger.kernel.org, marc.zyngier@arm.com,
	will.deacon@arm.com, peter.huangpeng@huawei.com,
	linux-arm-kernel@lists.infradead.org, alex.bennee@linaro.org,
	christoffer.dall@linaro.org, shannon.zhao@linaro.org
Subject: [PATCH v3 07/20] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
Date: Thu, 24 Sep 2015 15:31:12 -0700
Message-Id: <1443133885-3366-8-git-send-email-shannon.zhao@linaro.org>
In-Reply-To: <1443133885-3366-1-git-send-email-shannon.zhao@linaro.org>
References: <1443133885-3366-1-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 2.1.4
When we use tools like perf on the host, perf passes the event type and the
id within that event type category to the kernel, and the kernel maps them to
a hardware event number and writes this number to the PMU PMEVTYPER_EL0
register. When KVM gets the event number from the guest, it uses the raw
event type directly to create a perf_event for it.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/pmu.h |   2 +
 arch/arm64/kvm/Makefile      |   1 +
 include/kvm/arm_pmu.h        |  13 ++++
 virt/kvm/arm/pmu.c           | 154 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 170 insertions(+)
 create mode 100644 virt/kvm/arm/pmu.c
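
Reviewer note, not part of the patch: the pass-through described in the commit
message is the same contract PERF_TYPE_RAW exposes to host user space, where
the architecture-defined event number goes into attr.config unmodified. A
minimal host-side sketch of that path, assuming the ARMv8 common event 0x08
(INST_RETIRED) purely as an example:

/* Illustration only: open a raw ARMv8 PMU event from host user space. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
	struct perf_event_attr attr;
	long fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_RAW;	/* raw, architecture-defined event */
	attr.size = sizeof(attr);
	attr.config = 0x08;		/* ARMv8 INST_RETIRED, example only */
	attr.disabled = 1;

	/* perf_event_open has no glibc wrapper, so invoke the syscall directly. */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	/* ioctl(fd, PERF_EVENT_IOC_ENABLE, 0) and read(fd, ...) would follow. */
	close(fd);
	return 0;
}
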
diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index b9f394a..2c025f2 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -31,6 +31,8 @@
 #define ARMV8_PMCR_D	(1 << 3) /* CCNT counts every 64th cpu cycle */
 #define ARMV8_PMCR_X	(1 << 4) /* Export to ETM */
 #define ARMV8_PMCR_DP	(1 << 5) /* Disable CCNT if non-invasive debug*/
+/* Determines which PMCCNTR_EL0 bit generates an overflow */
+#define ARMV8_PMCR_LC	(1 << 6)
 #define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
 #define	ARMV8_PMCR_N_MASK	0x1f
 #define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 1949fe5..18d56d8 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -27,3 +27,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
 kvm-$(CONFIG_KVM_ARM_HOST) += vgic-v3-switch.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
+kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index bb0cd21..b48cdc6 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -37,4 +37,17 @@ struct kvm_pmu {
 #endif
 };
 
+#ifdef CONFIG_KVM_ARM_PMU
+unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
+				    u32 select_idx);
+#else
+unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
+{
+	return 0;
+}
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
+				    u32 select_idx) {}
+#endif
+
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
new file mode 100644
index 0000000..002ec79
--- /dev/null
+++ b/virt/kvm/arm/pmu.c
@@ -0,0 +1,154 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao <shannon.zhao@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/cpu.h>
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/perf_event.h>
+#include <asm/kvm_emulate.h>
+#include <kvm/arm_pmu.h>
+
+static void kvm_pmu_set_evttyper(struct kvm_vcpu *vcpu, u32 idx, u32 val)
+{
+	if (!vcpu_mode_is_32bit(vcpu))
+		vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + idx) = val;
+	else
+		vcpu_cp15(vcpu, c14_PMEVTYPER0 + idx) = val;
+}
+
+static unsigned long kvm_pmu_get_evttyper(struct kvm_vcpu *vcpu, u32 idx)
+{
+	if (!vcpu_mode_is_32bit(vcpu))
+		return vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + idx)
+		       & ARMV8_EVTYPE_EVENT;
+	else
+		return vcpu_cp15(vcpu, c14_PMEVTYPER0 + idx)
+		       & ARMV8_EVTYPE_EVENT;
+}
+
+/**
+ * kvm_pmu_stop_counter - stop PMU counter for the selected counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ *
+ * If this counter has been configured to monitor some event, disable and
+ * release it.
+ */
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, u32 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
+		perf_event_release_kernel(pmc->perf_event);
+		pmc->perf_event = NULL;
+	}
+	kvm_pmu_set_evttyper(vcpu, select_idx, ARMV8_EVTYPE_EVENT);
+}
+
+/**
+ * kvm_pmu_get_counter_value - get PMU counter value
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
+{
+	u64 enabled, running;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	u64 counter;
+
+	if (!vcpu_mode_is_32bit(vcpu))
+		counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
+	else
+		counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
+
+	if (pmc->perf_event) {
+		counter += perf_event_read_value(pmc->perf_event,
+						 &enabled, &running);
+	}
+	return counter;
+}
+
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of selected counter
+ *
+ * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
+ * event with given hardware event number. Here we call perf_event API to
+ * emulate this action and create a kernel perf event for it.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
+				    u32 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct perf_event *event;
+	struct perf_event_attr attr;
+	u32 new_eventsel, old_eventsel;
+	u64 counter;
+	int overflow_bit, pmcr_lc;
+
+	old_eventsel = kvm_pmu_get_evttyper(vcpu, select_idx);
+	new_eventsel = data & ARMV8_EVTYPE_EVENT;
+	if (new_eventsel == old_eventsel) {
+		if (pmc->perf_event)
+			local64_set(&pmc->perf_event->count, 0);
+		return;
+	}
+
+	kvm_pmu_stop_counter(vcpu, select_idx);
+	kvm_pmu_set_evttyper(vcpu, select_idx, data);
+
+	memset(&attr, 0, sizeof(struct perf_event_attr));
+	attr.type = PERF_TYPE_RAW;
+	attr.size = sizeof(attr);
+	attr.pinned = 1;
+	attr.disabled = 1;
+	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
+	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
+	attr.exclude_host = 1; /* Don't count host events */
+	attr.config = new_eventsel;
+
+	overflow_bit = 31; /* Generic counters are 32-bit registers */
+	if (new_eventsel == 0x11) {
+		/* The cycle counter overflows on the increment that changes
+		 * PMCCNTR[63] or PMCCNTR[31] from 1 to 0, according to the
+		 * value of ARMV8_PMCR_LC.
+		 */
+		if (!vcpu_mode_is_32bit(vcpu))
+			pmcr_lc = vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_LC;
+		else
+			pmcr_lc = vcpu_cp15(vcpu, c9_PMCR) & ARMV8_PMCR_LC;
+
+		overflow_bit = pmcr_lc ? 63 : 31;
+	}
+	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
+	/* The initial sample period (overflow count) of an event. */
+	attr.sample_period = (-counter) & (((u64)1 << overflow_bit) - 1);
+
+	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+	if (IS_ERR(event)) {
+		printk_once("kvm: pmu event creation failed %ld\n",
+			    PTR_ERR(event));
+		return;
+	}
+	pmc->perf_event = event;
+}
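
A closing note, not part of the patch: the sample_period expression above can
be sanity-checked in isolation. It yields the number of increments left before
the emulated counter wraps at the chosen overflow bit, which is what perf then
programs as the period. A standalone sketch, where the helper name and the
example counter values are invented purely for illustration:

#include <stdio.h>
#include <stdint.h>

/* Mirrors: attr.sample_period = (-counter) & (((u64)1 << overflow_bit) - 1); */
static uint64_t sample_period(uint64_t counter, int overflow_bit)
{
	return (-counter) & (((uint64_t)1 << overflow_bit) - 1);
}

int main(void)
{
	/* 32-bit event counter at 0xfffffff0: 16 increments until bit 31 wraps. */
	printf("%llu\n", (unsigned long long)sample_period(0xfffffff0ULL, 31));
	/* 64-bit cycle counter (PMCR_EL0.LC set) at ...ff00: 256 cycles left. */
	printf("%llu\n", (unsigned long long)sample_period(0xffffffffffffff00ULL, 63));
	return 0;
}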