From patchwork Tue Jan 16 06:01:19 2024
X-Patchwork-Submitter: Shaoqin Huang
X-Patchwork-Id: 763207
From: Shaoqin Huang
To: Oliver Upton, Marc Zyngier, kvmarm@lists.linux.dev
Cc: Eric Auger, Shaoqin Huang, Paolo Bonzini, Shuah Khan, James Morse,
    Suzuki K Poulose, Zenghui Yu, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 1/5] KVM: selftests: aarch64: Make the [create|destroy]_vpmu_vm() public
Date: Tue, 16 Jan 2024 01:01:19 -0500
Message-Id: <20240116060129.55473-2-shahuang@redhat.com>
In-Reply-To: <20240116060129.55473-1-shahuang@redhat.com>
References: <20240116060129.55473-1-shahuang@redhat.com>

Move the implementation of [create|destroy]_vpmu_vm() into
lib/aarch64/vpmu.c and export their declarations in a header so they can
be reused by other tests. The sync exception handler installation is test
specific, so move it out of the helper function. No functional change
intended.
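For illustration only (not part of this patch), a new selftest could then
drive a PMUv3 guest with just the shared helpers; the guest payload
my_guest_code() below is a hypothetical placeholder:

	#include <kvm_util.h>
	#include <vpmu.h>

	static void my_guest_code(void)
	{
		GUEST_DONE();
	}

	int main(void)
	{
		struct vpmu_vm *vpmu_vm = create_vpmu_vm(my_guest_code);

		/* ... run vpmu_vm->vcpu and inspect PMU registers here ... */

		destroy_vpmu_vm(vpmu_vm);
		return 0;
	}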
Reviewed-by: Eric Auger
Signed-off-by: Shaoqin Huang
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/vpmu_counter_access.c         | 100 +++++-------------
 .../selftests/kvm/include/aarch64/vpmu.h      |  16 +++
 .../testing/selftests/kvm/lib/aarch64/vpmu.c  |  64 +++++++++++
 4 files changed, 105 insertions(+), 76 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/aarch64/vpmu.h
 create mode 100644 tools/testing/selftests/kvm/lib/aarch64/vpmu.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index a5963ab9215b..b60852c222ac 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -57,6 +57,7 @@ LIBKVM_aarch64 += lib/aarch64/processor.c
 LIBKVM_aarch64 += lib/aarch64/spinlock.c
 LIBKVM_aarch64 += lib/aarch64/ucall.c
 LIBKVM_aarch64 += lib/aarch64/vgic.c
+LIBKVM_aarch64 += lib/aarch64/vpmu.c
 
 LIBKVM_s390x += lib/s390x/diag318_test_handler.c
 LIBKVM_s390x += lib/s390x/processor.c
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 5ea78986e665..17305408a334 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -25,13 +26,7 @@
 /* The cycle counter bit position that's common among the PMU registers */
 #define ARMV8_PMU_CYCLE_IDX 31
 
-struct vpmu_vm {
-	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
-	int gic_fd;
-};
-
-static struct vpmu_vm vpmu_vm;
+static struct vpmu_vm *vpmu_vm;
 
 struct pmreg_sets {
 	uint64_t set_reg_id;
@@ -421,64 +416,6 @@ static void guest_code(uint64_t expected_pmcr_n)
 	GUEST_DONE();
 }
 
-#define GICD_BASE_GPA 0x8000000ULL
-#define GICR_BASE_GPA 0x80A0000ULL
-
-/* Create a VM that has one vCPU with PMUv3 configured. */
-static void create_vpmu_vm(void *guest_code)
-{
-	struct kvm_vcpu_init init;
-	uint8_t pmuver, ec;
-	uint64_t dfr0, irq = 23;
-	struct kvm_device_attr irq_attr = {
-		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
-		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
-		.addr = (uint64_t)&irq,
-	};
-	struct kvm_device_attr init_attr = {
-		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
-		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
-	};
-
-	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
-	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
-
-	vpmu_vm.vm = vm_create(1);
-	vm_init_descriptor_tables(vpmu_vm.vm);
-	for (ec = 0; ec < ESR_EC_NUM; ec++) {
-		vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec,
-					guest_sync_handler);
-	}
-
-	/* Create vCPU with PMUv3 */
-	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
-	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
-	vcpu_init_descriptor_tables(vpmu_vm.vcpu);
-	vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64,
-				       GICD_BASE_GPA, GICR_BASE_GPA);
-	__TEST_REQUIRE(vpmu_vm.gic_fd >= 0,
-		       "Failed to create vgic-v3, skipping");
-
-	/* Make sure that PMUv3 support is indicated in the ID register */
-	vcpu_get_reg(vpmu_vm.vcpu,
-		     KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
-	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0);
-	TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF &&
-		    pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP,
-		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
-
-	/* Initialize vPMU */
-	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
-	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
-}
-
-static void destroy_vpmu_vm(void)
-{
-	close(vpmu_vm.gic_fd);
-	kvm_vm_free(vpmu_vm.vm);
-}
-
 static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 {
 	struct ucall uc;
@@ -497,13 +434,24 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	}
 }
 
+static void create_vpmu_vm_with_handler(void *guest_code)
+{
+	uint8_t ec;
+	vpmu_vm = create_vpmu_vm(guest_code);
+
+	for (ec = 0; ec < ESR_EC_NUM; ec++) {
+		vm_install_sync_handler(vpmu_vm->vm, VECTOR_SYNC_CURRENT, ec,
+					guest_sync_handler);
+	}
+}
+
 static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
 {
 	struct kvm_vcpu *vcpu;
 	uint64_t pmcr, pmcr_orig;
 
-	create_vpmu_vm(guest_code);
-	vcpu = vpmu_vm.vcpu;
+	create_vpmu_vm_with_handler(guest_code);
+	vcpu = vpmu_vm->vcpu;
 
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
 	pmcr = pmcr_orig;
@@ -539,7 +487,7 @@ static void run_access_test(uint64_t pmcr_n)
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
 
 	test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
-	vcpu = vpmu_vm.vcpu;
+	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
 	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -550,7 +498,7 @@ static void run_access_test(uint64_t pmcr_n)
 	 * Reset and re-initialize the vCPU, and run the guest code again to
 	 * check if PMCR_EL0.N is preserved.
 	 */
-	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
+	vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
 	vcpu_init_descriptor_tables(vcpu);
@@ -559,7 +507,7 @@
 
 	run_vcpu(vcpu, pmcr_n);
 
-	destroy_vpmu_vm();
+	destroy_vpmu_vm(vpmu_vm);
 }
 
 static struct pmreg_sets validity_check_reg_sets[] = {
@@ -580,7 +528,7 @@
 	uint64_t valid_counters_mask, max_counters_mask;
 
 	test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
-	vcpu = vpmu_vm.vcpu;
+	vcpu = vpmu_vm->vcpu;
 
 	valid_counters_mask = get_counters_mask(pmcr_n);
 	max_counters_mask = get_counters_mask(ARMV8_PMU_MAX_COUNTERS);
@@ -621,7 +569,7 @@
 			     KVM_ARM64_SYS_REG(clr_reg_id), reg_val);
 	}
 
-	destroy_vpmu_vm();
+	destroy_vpmu_vm(vpmu_vm);
 }
 
 /*
@@ -634,7 +582,7 @@
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
 
 	test_create_vpmu_vm_with_pmcr_n(pmcr_n, true);
-	destroy_vpmu_vm();
+	destroy_vpmu_vm(vpmu_vm);
 }
 
 /*
@@ -645,9 +593,9 @@ static uint64_t get_pmcr_n_limit(void)
 {
 	uint64_t pmcr;
 
-	create_vpmu_vm(guest_code);
-	vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
-	destroy_vpmu_vm();
+	create_vpmu_vm_with_handler(guest_code);
+	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	destroy_vpmu_vm(vpmu_vm);
 
 	return get_pmcr_n(pmcr);
 }
diff --git a/tools/testing/selftests/kvm/include/aarch64/vpmu.h b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
new file mode 100644
index 000000000000..0a56183644ee
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include
+
+#define GICD_BASE_GPA 0x8000000ULL
+#define GICR_BASE_GPA 0x80A0000ULL
+
+struct vpmu_vm {
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+};
+
+struct vpmu_vm *create_vpmu_vm(void *guest_code);
+
+void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm);
diff --git a/tools/testing/selftests/kvm/lib/aarch64/vpmu.c b/tools/testing/selftests/kvm/lib/aarch64/vpmu.c
new file mode 100644
index 000000000000..b3de8fdc555e
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/aarch64/vpmu.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+struct vpmu_vm *create_vpmu_vm(void *guest_code)
+{
+	struct kvm_vcpu_init init;
+	uint8_t pmuver;
+	uint64_t dfr0, irq = 23;
+	struct kvm_device_attr irq_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
+		.addr = (uint64_t)&irq,
+	};
+	struct kvm_device_attr init_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+	struct vpmu_vm *vpmu_vm;
+
+	vpmu_vm = calloc(1, sizeof(*vpmu_vm));
+	TEST_ASSERT(vpmu_vm != NULL, "Insufficient Memory");
+	memset(vpmu_vm, 0, sizeof(vpmu_vm));
+
+	vpmu_vm->vm = vm_create(1);
+	vm_init_descriptor_tables(vpmu_vm->vm);
+
+	/* Create vCPU with PMUv3 */
+	vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vpmu_vm->vcpu = aarch64_vcpu_add(vpmu_vm->vm, 0, &init, guest_code);
+	vcpu_init_descriptor_tables(vpmu_vm->vcpu);
+	vpmu_vm->gic_fd = vgic_v3_setup(vpmu_vm->vm, 1, 64,
+					GICD_BASE_GPA, GICR_BASE_GPA);
+	__TEST_REQUIRE(vpmu_vm->gic_fd >= 0,
+		       "Failed to create vgic-v3, skipping");
+
+	/* Make sure that PMUv3 support is indicated in the ID register */
+	vcpu_get_reg(vpmu_vm->vcpu,
+		     KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0);
+	TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF &&
+		    pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP,
+		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+
+	/* Initialize vPMU */
+	vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+	vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	return vpmu_vm;
+}
+
+void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
+{
+	close(vpmu_vm->gic_fd);
+	kvm_vm_free(vpmu_vm->vm);
+	free(vpmu_vm);
+}
From patchwork Tue Jan 16 06:01:21 2024
X-Patchwork-Submitter: Shaoqin Huang
X-Patchwork-Id: 763206
From: Shaoqin Huang
To: Oliver Upton, Marc Zyngier, kvmarm@lists.linux.dev
Cc: Eric Auger, Shaoqin Huang, James Morse, Suzuki K Poulose, Zenghui Yu,
    Paolo Bonzini, Shuah Khan, linux-arm-kernel@lists.infradead.org,
    kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 3/5] KVM: selftests: aarch64: Fix the buggy [enable|disable]_counter
Date: Tue, 16 Jan 2024 01:01:21 -0500
Message-Id: <20240116060129.55473-4-shahuang@redhat.com>
In-Reply-To: <20240116060129.55473-1-shahuang@redhat.com>
References: <20240116060129.55473-1-shahuang@redhat.com>

In general, the set/clr registers should always be used in their
write-only form, never in a read-modify-write (RMW) form (imagine an
interrupt disabling a counter between the read and the write...). The
current implementations of [enable|disable]_counter both use the RMW
form; fix them by writing directly to the set/clr registers. This also
fixes the buggy disable_counter(), which would otherwise end up disabling
all the counters.

Reviewed-by: Eric Auger
Signed-off-by: Shaoqin Huang
---
 tools/testing/selftests/kvm/include/aarch64/vpmu.h | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/aarch64/vpmu.h b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
index e0cc1ca1c4b7..644dae3814b5 100644
--- a/tools/testing/selftests/kvm/include/aarch64/vpmu.h
+++ b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
@@ -78,17 +78,13 @@ static inline void write_sel_evtyper(int sel, unsigned long val)
 
 static inline void enable_counter(int idx)
 {
-	uint64_t v = read_sysreg(pmcntenset_el0);
-
-	write_sysreg(BIT(idx) | v, pmcntenset_el0);
+	write_sysreg(BIT(idx), pmcntenset_el0);
 	isb();
 }
 
 static inline void disable_counter(int idx)
 {
-	uint64_t v = read_sysreg(pmcntenset_el0);
-
-	write_sysreg(BIT(idx) | v, pmcntenclr_el0);
+	write_sysreg(BIT(idx), pmcntenclr_el0);
 	isb();
 }
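For reference only (not part of the patch), the bitmask arithmetic behind
the disable_counter() bug, as a tiny standalone C sketch: with counters
0-2 enabled, the old code wrote "enabled | BIT(1)" to PMCNTENCLR_EL0 and
so cleared every enabled counter, while the fixed code writes only BIT(1).

	#include <stdint.h>
	#include <stdio.h>

	#define BIT(n)	(1ULL << (n))

	int main(void)
	{
		/* Pretend PMCNTENSET_EL0 reads back with counters 0-2 enabled. */
		uint64_t enabled = BIT(0) | BIT(1) | BIT(2);

		uint64_t old_clr = enabled | BIT(1);	/* 0x7: clears counters 0, 1 and 2 */
		uint64_t new_clr = BIT(1);		/* 0x2: clears only counter 1 */

		printf("old write to PMCNTENCLR_EL0: %#llx\n", (unsigned long long)old_clr);
		printf("new write to PMCNTENCLR_EL0: %#llx\n", (unsigned long long)new_clr);
		return 0;
	}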
From patchwork Tue Jan 16 06:01:23 2024
X-Patchwork-Submitter: Shaoqin Huang
X-Patchwork-Id: 763205
From: Shaoqin Huang
To: Oliver Upton, Marc Zyngier, kvmarm@lists.linux.dev
Cc: Eric Auger, Shaoqin Huang, James Morse, Suzuki K Poulose, Zenghui Yu,
    Paolo Bonzini, Shuah Khan, linux-arm-kernel@lists.infradead.org,
    kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 5/5] KVM: selftests: aarch64: Add invalid filter test in pmu_event_filter_test
Date: Tue, 16 Jan 2024 01:01:23 -0500
Message-Id: <20240116060129.55473-6-shahuang@redhat.com>
In-Reply-To: <20240116060129.55473-1-shahuang@redhat.com>
References: <20240116060129.55473-1-shahuang@redhat.com>

Add an invalid filter test that sets a filter range beyond the event
space and sets an invalid filter action, to double check that the
KVM_ARM_VCPU_PMU_V3_FILTER attribute returns the expected error in both
cases.

Signed-off-by: Shaoqin Huang
---
 .../kvm/aarch64/pmu_event_filter_test.c       | 36 +++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
index d280382f362f..68e1f2003312 100644
--- a/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
@@ -7,6 +7,7 @@
  * This test checks if the guest only see the limited pmu event that userspace
  * sets, if the guest can use those events which user allow, and if the guest
  * can't use those events which user deny.
+ * It also checks that setting invalid filter ranges returns the expected error.
  * This test runs only when KVM_CAP_ARM_PMU_V3, KVM_ARM_VCPU_PMU_V3_FILTER
  * is supported on the host.
  */
@@ -183,6 +184,39 @@ static void for_each_test(void)
 		run_test(t);
 }
 
+static void set_invalid_filter(struct vpmu_vm *vm, void *arg)
+{
+	struct kvm_pmu_event_filter invalid;
+	struct kvm_device_attr attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+		.addr = (uint64_t)&invalid,
+	};
+	int ret = 0;
+
+	/* The max event number is (1 << 16), set a range larger than it. */
+	invalid = __DEFINE_FILTER(BIT(15), BIT(15)+1, 0);
+	ret = __vcpu_ioctl(vm->vcpu, KVM_SET_DEVICE_ATTR, &attr);
+	TEST_ASSERT(ret && errno == EINVAL, "Set Invalid filter range "
+		    "ret = %d, errno = %d (expected ret = -1, errno = EINVAL)",
+		    ret, errno);
+
+	ret = 0;
+
+	/* Set the Invalid action. */
+	invalid = __DEFINE_FILTER(0, 1, 3);
+	ret = __vcpu_ioctl(vm->vcpu, KVM_SET_DEVICE_ATTR, &attr);
+	TEST_ASSERT(ret && errno == EINVAL, "Set Invalid filter action "
+		    "ret = %d, errno = %d (expected ret = -1, errno = EINVAL)",
+		    ret, errno);
+}
+
+static void test_invalid_filter(void)
+{
+	vpmu_vm = __create_vpmu_vm(guest_code, set_invalid_filter, NULL);
+	destroy_vpmu_vm(vpmu_vm);
+}
+
 static bool kvm_supports_pmu_event_filter(void)
 {
 	int r;
@@ -216,4 +250,6 @@ int main(void)
 	TEST_REQUIRE(host_pmu_supports_events());
 
 	for_each_test();
+
+	test_invalid_filter();
 }
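For reference only (not part of the patch), a sketch of how the same
attribute installs a *valid* filter. It assumes __DEFINE_FILTER() takes
(base_event, nevents, action) in that order, matching its use in
set_invalid_filter() above, and that KVM_PMU_EVENT_ALLOW is available
from the arm64 uapi headers:

	static void set_valid_filter_example(struct vpmu_vm *vm)
	{
		/* Allow only event 0x0 (SW_INCR): a well-formed range and action. */
		struct kvm_pmu_event_filter filter =
			__DEFINE_FILTER(0x0, 1, KVM_PMU_EVENT_ALLOW);
		struct kvm_device_attr attr = {
			.group = KVM_ARM_VCPU_PMU_V3_CTRL,
			.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
			.addr = (uint64_t)&filter,
		};

		/* Unlike the invalid cases above, this ioctl is expected to succeed. */
		vcpu_ioctl(vm->vcpu, KVM_SET_DEVICE_ATTR, &attr);
	}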