From patchwork Sat Sep 2 12:59:24 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 719822
From: Haibo Xu
Subject: [PATCH v2 2/8] KVM: arm64: selftest: Split arch_timer test code
Date: Sat, 2 Sep 2023 20:59:24 +0800

Split the arch-neutral test code out of aarch64/arch_timer.c and put it into a common arch_timer.c. This is a preparation to share the timer test code with riscv.
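In rough terms the split leaves the following contract between the common and per-arch files (a condensed sketch drawn from the diff below, not additional code): the arch-neutral arch_timer.c owns main(), test_run() and the vCPU/migration threads, while each architecture supplies the guest code plus test_vm_create()/test_vm_cleanup(), sharing test_args, vcpus[] and vcpu_shared_data[] as globals.

        /* include/timer_test.h: hooks each architecture must implement */
        struct kvm_vm *test_vm_create(void);
        void test_vm_cleanup(struct kvm_vm *vm);

        /* arch_timer.c: arch-neutral driver, shared by aarch64 and (later) riscv */
        int main(int argc, char *argv[])
        {
                struct kvm_vm *vm;

                if (!parse_args(argc, argv))
                        exit(KSFT_SKIP);

                vm = test_vm_create();  /* arch hook: VM, vCPUs, timer IRQ setup */
                test_run(vm);           /* vCPU threads + optional migration thread */
                test_vm_cleanup(vm);    /* arch hook: teardown */

                return 0;
        }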
Suggested-by: Andrew Jones Signed-off-by: Haibo Xu --- tools/testing/selftests/kvm/Makefile | 9 +- .../selftests/kvm/aarch64/arch_timer.c | 288 +----------------- tools/testing/selftests/kvm/arch_timer.c | 252 +++++++++++++++ .../selftests/kvm/include/timer_test.h | 52 ++++ 4 files changed, 317 insertions(+), 284 deletions(-) create mode 100644 tools/testing/selftests/kvm/arch_timer.c create mode 100644 tools/testing/selftests/kvm/include/timer_test.h diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 0b9c42fbce8c..fb8904e2c06a 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -140,7 +140,6 @@ TEST_GEN_PROGS_x86_64 += system_counter_offset_test TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test TEST_GEN_PROGS_aarch64 += aarch64/aarch32_id_regs -TEST_GEN_PROGS_aarch64 += aarch64/arch_timer TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions TEST_GEN_PROGS_aarch64 += aarch64/hypercalls TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test @@ -150,6 +149,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config TEST_GEN_PROGS_aarch64 += aarch64/vgic_init TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq TEST_GEN_PROGS_aarch64 += access_tracking_perf_test +TEST_GEN_PROGS_aarch64 += arch_timer TEST_GEN_PROGS_aarch64 += demand_paging_test TEST_GEN_PROGS_aarch64 += dirty_log_test TEST_GEN_PROGS_aarch64 += dirty_log_perf_test @@ -188,6 +188,7 @@ TEST_GEN_PROGS_riscv += set_memory_region_test TEST_GEN_PROGS_riscv += kvm_binary_stats_test SPLIT_TESTS += get-reg-list +SPLIT_TESTS += arch_timer TEST_PROGS += $(TEST_PROGS_$(ARCH_DIR)) TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(ARCH_DIR)) @@ -248,13 +249,10 @@ TEST_DEP_FILES += $(patsubst %.o, %.d, $(SPLIT_TESTS_OBJS)) -include $(TEST_DEP_FILES) $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED): %: %.o - $(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH) $< $(LIBKVM_OBJS) $(LDLIBS) -o $@ + $(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH) $^ $(LDLIBS) -o $@ $(TEST_GEN_OBJ): $(OUTPUT)/%.o: %.c $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@ -$(SPLIT_TESTS_TARGETS): %: %.o $(SPLIT_TESTS_OBJS) - $(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH) $^ $(LDLIBS) -o $@ - EXTRA_CLEAN += $(LIBKVM_OBJS) $(TEST_DEP_FILES) $(TEST_GEN_OBJ) $(SPLIT_TESTS_OBJS) cscope.* x := $(shell mkdir -p $(sort $(dir $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ)))) @@ -273,6 +271,7 @@ $(LIBKVM_STRING_OBJ): $(OUTPUT)/%.o: %.c x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS)))) $(TEST_GEN_PROGS): $(LIBKVM_OBJS) $(TEST_GEN_PROGS_EXTENDED): $(LIBKVM_OBJS) +$(SPLIT_TESTS_TARGETS): $(OUTPUT)/%: $(ARCH_DIR)/%.o cscope: include_paths = $(LINUX_TOOL_INCLUDE) $(LINUX_HDR_PATH) include lib .. cscope: diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c index b63859829a96..ceb649548751 100644 --- a/tools/testing/selftests/kvm/aarch64/arch_timer.c +++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c @@ -1,91 +1,25 @@ // SPDX-License-Identifier: GPL-2.0-only /* - * arch_timer.c - Tests the aarch64 timer IRQ functionality - * * The test validates both the virtual and physical timer IRQs using - * CVAL and TVAL registers. This consitutes the four stages in the test. - * The guest's main thread configures the timer interrupt for a stage - * and waits for it to fire, with a timeout equal to the timer period. - * It asserts that the timeout doesn't exceed the timer period. 
- * - * On the other hand, upon receipt of an interrupt, the guest's interrupt - * handler validates the interrupt by checking if the architectural state - * is in compliance with the specifications. - * - * The test provides command-line options to configure the timer's - * period (-p), number of vCPUs (-n), and iterations per stage (-i). - * To stress-test the timer stack even more, an option to migrate the - * vCPUs across pCPUs (-m), at a particular rate, is also provided. + * CVAL and TVAL registers. * * Copyright (c) 2021, Google LLC. */ #define _GNU_SOURCE -#include -#include -#include -#include -#include -#include - -#include "kvm_util.h" -#include "processor.h" -#include "delay.h" #include "arch_timer.h" +#include "delay.h" #include "gic.h" +#include "processor.h" +#include "timer_test.h" #include "vgic.h" -#define NR_VCPUS_DEF 4 -#define NR_TEST_ITERS_DEF 5 -#define TIMER_TEST_PERIOD_MS_DEF 10 -#define TIMER_TEST_ERR_MARGIN_US 100 -#define TIMER_TEST_MIGRATION_FREQ_MS 2 - -struct test_args { - int nr_vcpus; - int nr_iter; - int timer_period_ms; - int migration_freq_ms; - struct kvm_arm_counter_offset offset; -}; - -static struct test_args test_args = { - .nr_vcpus = NR_VCPUS_DEF, - .nr_iter = NR_TEST_ITERS_DEF, - .timer_period_ms = TIMER_TEST_PERIOD_MS_DEF, - .migration_freq_ms = TIMER_TEST_MIGRATION_FREQ_MS, - .offset = { .reserved = 1 }, -}; - -#define msecs_to_usecs(msec) ((msec) * 1000LL) - -#define GICD_BASE_GPA 0x8000000ULL -#define GICR_BASE_GPA 0x80A0000ULL - -enum guest_stage { - GUEST_STAGE_VTIMER_CVAL = 1, - GUEST_STAGE_VTIMER_TVAL, - GUEST_STAGE_PTIMER_CVAL, - GUEST_STAGE_PTIMER_TVAL, - GUEST_STAGE_MAX, -}; - -/* Shared variables between host and guest */ -struct test_vcpu_shared_data { - int nr_iter; - enum guest_stage guest_stage; - uint64_t xcnt; -}; - -static struct kvm_vcpu *vcpus[KVM_MAX_VCPUS]; -static pthread_t pt_vcpu_run[KVM_MAX_VCPUS]; -static struct test_vcpu_shared_data vcpu_shared_data[KVM_MAX_VCPUS]; +extern struct test_args test_args; +extern struct kvm_vcpu *vcpus[]; +extern struct test_vcpu_shared_data vcpu_shared_data[]; static int vtimer_irq, ptimer_irq; -static unsigned long *vcpu_done_map; -static pthread_mutex_t vcpu_done_map_lock; - static void guest_configure_timer_action(struct test_vcpu_shared_data *shared_data) { @@ -222,137 +156,6 @@ static void guest_code(void) GUEST_DONE(); } -static void *test_vcpu_run(void *arg) -{ - unsigned int vcpu_idx = (unsigned long)arg; - struct ucall uc; - struct kvm_vcpu *vcpu = vcpus[vcpu_idx]; - struct kvm_vm *vm = vcpu->vm; - struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[vcpu_idx]; - - vcpu_run(vcpu); - - /* Currently, any exit from guest is an indication of completion */ - pthread_mutex_lock(&vcpu_done_map_lock); - __set_bit(vcpu_idx, vcpu_done_map); - pthread_mutex_unlock(&vcpu_done_map_lock); - - switch (get_ucall(vcpu, &uc)) { - case UCALL_SYNC: - case UCALL_DONE: - break; - case UCALL_ABORT: - sync_global_from_guest(vm, *shared_data); - fprintf(stderr, "Guest assert failed, vcpu %u; stage; %u; iter: %u\n", - vcpu_idx, shared_data->guest_stage, shared_data->nr_iter); - REPORT_GUEST_ASSERT(uc); - break; - default: - TEST_FAIL("Unexpected guest exit\n"); - } - - return NULL; -} - -static uint32_t test_get_pcpu(void) -{ - uint32_t pcpu; - unsigned int nproc_conf; - cpu_set_t online_cpuset; - - nproc_conf = get_nprocs_conf(); - sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset); - - /* Randomly find an available pCPU to place a vCPU on */ - do { - pcpu = rand() % nproc_conf; - } while 
(!CPU_ISSET(pcpu, &online_cpuset)); - - return pcpu; -} - -static int test_migrate_vcpu(unsigned int vcpu_idx) -{ - int ret; - cpu_set_t cpuset; - uint32_t new_pcpu = test_get_pcpu(); - - CPU_ZERO(&cpuset); - CPU_SET(new_pcpu, &cpuset); - - pr_debug("Migrating vCPU: %u to pCPU: %u\n", vcpu_idx, new_pcpu); - - ret = pthread_setaffinity_np(pt_vcpu_run[vcpu_idx], - sizeof(cpuset), &cpuset); - - /* Allow the error where the vCPU thread is already finished */ - TEST_ASSERT(ret == 0 || ret == ESRCH, - "Failed to migrate the vCPU:%u to pCPU: %u; ret: %d\n", - vcpu_idx, new_pcpu, ret); - - return ret; -} - -static void *test_vcpu_migration(void *arg) -{ - unsigned int i, n_done; - bool vcpu_done; - - do { - usleep(msecs_to_usecs(test_args.migration_freq_ms)); - - for (n_done = 0, i = 0; i < test_args.nr_vcpus; i++) { - pthread_mutex_lock(&vcpu_done_map_lock); - vcpu_done = test_bit(i, vcpu_done_map); - pthread_mutex_unlock(&vcpu_done_map_lock); - - if (vcpu_done) { - n_done++; - continue; - } - - test_migrate_vcpu(i); - } - } while (test_args.nr_vcpus != n_done); - - return NULL; -} - -static void test_run(struct kvm_vm *vm) -{ - pthread_t pt_vcpu_migration; - unsigned int i; - int ret; - - pthread_mutex_init(&vcpu_done_map_lock, NULL); - vcpu_done_map = bitmap_zalloc(test_args.nr_vcpus); - TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap\n"); - - for (i = 0; i < (unsigned long)test_args.nr_vcpus; i++) { - ret = pthread_create(&pt_vcpu_run[i], NULL, test_vcpu_run, - (void *)(unsigned long)i); - TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread\n", i); - } - - /* Spawn a thread to control the vCPU migrations */ - if (test_args.migration_freq_ms) { - srand(time(NULL)); - - ret = pthread_create(&pt_vcpu_migration, NULL, - test_vcpu_migration, NULL); - TEST_ASSERT(!ret, "Failed to create the migration pthread\n"); - } - - - for (i = 0; i < test_args.nr_vcpus; i++) - pthread_join(pt_vcpu_run[i], NULL); - - if (test_args.migration_freq_ms) - pthread_join(pt_vcpu_migration, NULL); - - bitmap_free(vcpu_done_map); -} - static void test_init_timer_irq(struct kvm_vm *vm) { /* Timer initid should be same for all the vCPUs, so query only vCPU-0 */ @@ -369,7 +172,7 @@ static void test_init_timer_irq(struct kvm_vm *vm) static int gic_fd; -static struct kvm_vm *test_vm_create(void) +struct kvm_vm *test_vm_create(void) { struct kvm_vm *vm; unsigned int i; @@ -400,81 +203,8 @@ static struct kvm_vm *test_vm_create(void) return vm; } -static void test_vm_cleanup(struct kvm_vm *vm) +void test_vm_cleanup(struct kvm_vm *vm) { close(gic_fd); kvm_vm_free(vm); } - -static void test_print_help(char *name) -{ - pr_info("Usage: %s [-h] [-n nr_vcpus] [-i iterations] [-p timer_period_ms]\n", - name); - pr_info("\t-n: Number of vCPUs to configure (default: %u; max: %u)\n", - NR_VCPUS_DEF, KVM_MAX_VCPUS); - pr_info("\t-i: Number of iterations per stage (default: %u)\n", - NR_TEST_ITERS_DEF); - pr_info("\t-p: Periodicity (in ms) of the guest timer (default: %u)\n", - TIMER_TEST_PERIOD_MS_DEF); - pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. 
0 to turn off (default: %u)\n", - TIMER_TEST_MIGRATION_FREQ_MS); - pr_info("\t-o: Counter offset (in counter cycles, default: 0)\n"); - pr_info("\t-h: print this help screen\n"); -} - -static bool parse_args(int argc, char *argv[]) -{ - int opt; - - while ((opt = getopt(argc, argv, "hn:i:p:m:o:")) != -1) { - switch (opt) { - case 'n': - test_args.nr_vcpus = atoi_positive("Number of vCPUs", optarg); - if (test_args.nr_vcpus > KVM_MAX_VCPUS) { - pr_info("Max allowed vCPUs: %u\n", - KVM_MAX_VCPUS); - goto err; - } - break; - case 'i': - test_args.nr_iter = atoi_positive("Number of iterations", optarg); - break; - case 'p': - test_args.timer_period_ms = atoi_positive("Periodicity", optarg); - break; - case 'm': - test_args.migration_freq_ms = atoi_non_negative("Frequency", optarg); - break; - case 'o': - test_args.offset.counter_offset = strtol(optarg, NULL, 0); - test_args.offset.reserved = 0; - break; - case 'h': - default: - goto err; - } - } - - return true; - -err: - test_print_help(argv[0]); - return false; -} - -int main(int argc, char *argv[]) -{ - struct kvm_vm *vm; - - if (!parse_args(argc, argv)) - exit(KSFT_SKIP); - - __TEST_REQUIRE(!test_args.migration_freq_ms || get_nprocs() >= 2, - "At least two physical CPUs needed for vCPU migration"); - - vm = test_vm_create(); - test_run(vm); - test_vm_cleanup(vm); - - return 0; -} diff --git a/tools/testing/selftests/kvm/arch_timer.c b/tools/testing/selftests/kvm/arch_timer.c new file mode 100644 index 000000000000..529024f58c98 --- /dev/null +++ b/tools/testing/selftests/kvm/arch_timer.c @@ -0,0 +1,252 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * arch_timer.c - Tests the arch timer IRQ functionality + * + * The guest's main thread configures the timer interrupt for and waits + * for it to fire, with a timeout equal to the timer period. + * It asserts that the timeout doesn't exceed the timer period. + * + * On the other hand, upon receipt of an interrupt, the guest's interrupt + * handler validates the interrupt by checking if the architectural state + * is in compliance with the specifications. + * + * The test provides command-line options to configure the timer's + * period (-p), number of vCPUs (-n), and iterations per stage (-i). + * To stress-test the timer stack even more, an option to migrate the + * vCPUs across pCPUs (-m), at a particular rate, is also provided. + * + * Copyright (c) 2021, Google LLC. 
+ */ + +#define _GNU_SOURCE + +#include +#include +#include +#include +#include + +#include "timer_test.h" + +struct test_args test_args = { + .nr_vcpus = NR_VCPUS_DEF, + .nr_iter = NR_TEST_ITERS_DEF, + .timer_period_ms = TIMER_TEST_PERIOD_MS_DEF, + .migration_freq_ms = TIMER_TEST_MIGRATION_FREQ_MS, +#ifdef __aarch64__ + .offset = { .reserved = 1 }, +#endif +}; + +struct kvm_vcpu *vcpus[KVM_MAX_VCPUS]; +struct test_vcpu_shared_data vcpu_shared_data[KVM_MAX_VCPUS]; + +static pthread_t pt_vcpu_run[KVM_MAX_VCPUS]; +static unsigned long *vcpu_done_map; +static pthread_mutex_t vcpu_done_map_lock; + +static void *test_vcpu_run(void *arg) +{ + unsigned int vcpu_idx = (unsigned long)arg; + struct ucall uc; + struct kvm_vcpu *vcpu = vcpus[vcpu_idx]; + struct kvm_vm *vm = vcpu->vm; + struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[vcpu_idx]; + + vcpu_run(vcpu); + + /* Currently, any exit from guest is an indication of completion */ + pthread_mutex_lock(&vcpu_done_map_lock); + __set_bit(vcpu_idx, vcpu_done_map); + pthread_mutex_unlock(&vcpu_done_map_lock); + + switch (get_ucall(vcpu, &uc)) { + case UCALL_SYNC: + case UCALL_DONE: + break; + case UCALL_ABORT: + sync_global_from_guest(vm, *shared_data); + fprintf(stderr, "Guest assert failed, vcpu %u; stage; %u; iter: %u\n", + vcpu_idx, shared_data->guest_stage, shared_data->nr_iter); + REPORT_GUEST_ASSERT(uc); + break; + default: + TEST_FAIL("Unexpected guest exit\n"); + } + + pr_info("PASS(vCPU-%d).\n", vcpu_idx); + + return NULL; +} + +static uint32_t test_get_pcpu(void) +{ + uint32_t pcpu; + unsigned int nproc_conf; + cpu_set_t online_cpuset; + + nproc_conf = get_nprocs_conf(); + sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset); + + /* Randomly find an available pCPU to place a vCPU on */ + do { + pcpu = rand() % nproc_conf; + } while (!CPU_ISSET(pcpu, &online_cpuset)); + + return pcpu; +} + +static int test_migrate_vcpu(unsigned int vcpu_idx) +{ + int ret; + cpu_set_t cpuset; + uint32_t new_pcpu = test_get_pcpu(); + + CPU_ZERO(&cpuset); + CPU_SET(new_pcpu, &cpuset); + + pr_debug("Migrating vCPU: %u to pCPU: %u\n", vcpu_idx, new_pcpu); + + ret = pthread_setaffinity_np(pt_vcpu_run[vcpu_idx], + sizeof(cpuset), &cpuset); + + /* Allow the error where the vCPU thread is already finished */ + TEST_ASSERT(ret == 0 || ret == ESRCH, + "Failed to migrate the vCPU:%u to pCPU: %u; ret: %d\n", + vcpu_idx, new_pcpu, ret); + + return ret; +} + +static void *test_vcpu_migration(void *arg) +{ + unsigned int i, n_done; + bool vcpu_done; + + do { + usleep(msecs_to_usecs(test_args.migration_freq_ms)); + + for (n_done = 0, i = 0; i < test_args.nr_vcpus; i++) { + pthread_mutex_lock(&vcpu_done_map_lock); + vcpu_done = test_bit(i, vcpu_done_map); + pthread_mutex_unlock(&vcpu_done_map_lock); + + if (vcpu_done) { + n_done++; + continue; + } + + test_migrate_vcpu(i); + } + } while (test_args.nr_vcpus != n_done); + + return NULL; +} + +static void test_run(struct kvm_vm *vm) +{ + pthread_t pt_vcpu_migration; + unsigned int i; + int ret; + + pthread_mutex_init(&vcpu_done_map_lock, NULL); + vcpu_done_map = bitmap_zalloc(test_args.nr_vcpus); + TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap\n"); + + for (i = 0; i < (unsigned long)test_args.nr_vcpus; i++) { + ret = pthread_create(&pt_vcpu_run[i], NULL, test_vcpu_run, + (void *)(unsigned long)i); + TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread\n", i); + } + + /* Spawn a thread to control the vCPU migrations */ + if (test_args.migration_freq_ms) { + srand(time(NULL)); + + ret = 
pthread_create(&pt_vcpu_migration, NULL, + test_vcpu_migration, NULL); + TEST_ASSERT(!ret, "Failed to create the migration pthread\n"); + } + + + for (i = 0; i < test_args.nr_vcpus; i++) + pthread_join(pt_vcpu_run[i], NULL); + + if (test_args.migration_freq_ms) + pthread_join(pt_vcpu_migration, NULL); + + bitmap_free(vcpu_done_map); +} + +static void test_print_help(char *name) +{ + pr_info("Usage: %s [-h] [-n nr_vcpus] [-i iterations] [-p timer_period_ms]\n", + name); + pr_info("\t-n: Number of vCPUs to configure (default: %u; max: %u)\n", + NR_VCPUS_DEF, KVM_MAX_VCPUS); + pr_info("\t-i: Number of iterations per stage (default: %u)\n", + NR_TEST_ITERS_DEF); + pr_info("\t-p: Periodicity (in ms) of the guest timer (default: %u)\n", + TIMER_TEST_PERIOD_MS_DEF); + pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. 0 to turn off (default: %u)\n", + TIMER_TEST_MIGRATION_FREQ_MS); + pr_info("\t-o: Counter offset (in counter cycles, default: 0)\n"); + pr_info("\t-h: print this help screen\n"); +} + +static bool parse_args(int argc, char *argv[]) +{ + int opt; + + while ((opt = getopt(argc, argv, "hn:i:p:m:o:")) != -1) { + switch (opt) { + case 'n': + test_args.nr_vcpus = atoi_positive("Number of vCPUs", optarg); + if (test_args.nr_vcpus > KVM_MAX_VCPUS) { + pr_info("Max allowed vCPUs: %u\n", + KVM_MAX_VCPUS); + goto err; + } + break; + case 'i': + test_args.nr_iter = atoi_positive("Number of iterations", optarg); + break; + case 'p': + test_args.timer_period_ms = atoi_positive("Periodicity", optarg); + break; + case 'm': + test_args.migration_freq_ms = atoi_non_negative("Frequency", optarg); + break; + case 'o': + test_args.offset.counter_offset = strtol(optarg, NULL, 0); + test_args.offset.reserved = 0; + break; + case 'h': + default: + goto err; + } + } + + return true; + +err: + test_print_help(argv[0]); + return false; +} + +int main(int argc, char *argv[]) +{ + struct kvm_vm *vm; + + if (!parse_args(argc, argv)) + exit(KSFT_SKIP); + + __TEST_REQUIRE(!test_args.migration_freq_ms || get_nprocs() >= 2, + "At least two physical CPUs needed for vCPU migration"); + + vm = test_vm_create(); + test_run(vm); + test_vm_cleanup(vm); + + return 0; +} diff --git a/tools/testing/selftests/kvm/include/timer_test.h b/tools/testing/selftests/kvm/include/timer_test.h new file mode 100644 index 000000000000..109e4d635627 --- /dev/null +++ b/tools/testing/selftests/kvm/include/timer_test.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * tools/testing/selftests/kvm/include/timer_test.h + * + * Copyright (C) 2018, Google LLC + */ + +#ifndef SELFTEST_KVM_TIMER_TEST_H +#define SELFTEST_KVM_TIMER_TEST_H + +#include "kvm_util.h" + +#define NR_VCPUS_DEF 4 +#define NR_TEST_ITERS_DEF 5 +#define TIMER_TEST_PERIOD_MS_DEF 10 +#define TIMER_TEST_ERR_MARGIN_US 100 +#define TIMER_TEST_MIGRATION_FREQ_MS 2 + +#define msecs_to_usecs(msec) ((msec) * 1000LL) + +#define GICD_BASE_GPA 0x8000000ULL +#define GICR_BASE_GPA 0x80A0000ULL + +enum guest_stage { + GUEST_STAGE_VTIMER_CVAL=1, + GUEST_STAGE_VTIMER_TVAL, + GUEST_STAGE_PTIMER_CVAL, + GUEST_STAGE_PTIMER_TVAL, + GUEST_STAGE_MAX, +}; + +/* Timer test cmdline parameters */ +struct test_args +{ + int nr_vcpus; + int nr_iter; + int timer_period_ms; + int migration_freq_ms; + struct kvm_arm_counter_offset offset; +}; + +/* Shared variables between host and guest */ +struct test_vcpu_shared_data { + int nr_iter; + enum guest_stage guest_stage; + uint64_t xcnt; +}; + +struct kvm_vm* test_vm_create(void); +void test_vm_cleanup(struct kvm_vm *vm); 
+
+#endif /* SELFTEST_KVM_TIMER_TEST_H */

From patchwork Sat Sep 2 12:59:26 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 719821
From: Haibo Xu
Subject: [PATCH v2 4/8] KVM: riscv: selftests: Switch to use macro from csr.h
Date: Sat, 2 Sep 2023 20:59:26 +0800

Signed-off-by: Haibo Xu
---
 tools/testing/selftests/kvm/include/riscv/processor.h | 5 +----
 1 file changed, 1 insertion(+), 4
deletions(-) diff --git a/tools/testing/selftests/kvm/include/riscv/processor.h b/tools/testing/selftests/kvm/include/riscv/processor.h index 5b62a3d2aa9b..6810c887fadc 100644 --- a/tools/testing/selftests/kvm/include/riscv/processor.h +++ b/tools/testing/selftests/kvm/include/riscv/processor.h @@ -8,6 +8,7 @@ #define SELFTEST_KVM_PROCESSOR_H #include "kvm_util.h" +#include #include static inline uint64_t __kvm_reg_id(uint64_t type, uint64_t idx, @@ -95,12 +96,8 @@ static inline uint64_t __kvm_reg_id(uint64_t type, uint64_t idx, #define PGTBL_PAGE_SIZE PGTBL_L0_BLOCK_SIZE #define PGTBL_PAGE_SIZE_SHIFT PGTBL_L0_BLOCK_SHIFT -#define SATP_PPN _AC(0x00000FFFFFFFFFFF, UL) #define SATP_MODE_39 _AC(0x8000000000000000, UL) #define SATP_MODE_48 _AC(0x9000000000000000, UL) -#define SATP_ASID_BITS 16 -#define SATP_ASID_SHIFT 44 -#define SATP_ASID_MASK _AC(0xFFFF, UL) #define SBI_EXT_EXPERIMENTAL_START 0x08000000 #define SBI_EXT_EXPERIMENTAL_END 0x08FFFFFF From patchwork Sat Sep 2 12:59:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Haibo Xu X-Patchwork-Id: 719820 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E8DCCC001DB for ; Sat, 2 Sep 2023 12:53:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352218AbjIBMxw (ORCPT ); Sat, 2 Sep 2023 08:53:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40248 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352219AbjIBMxv (ORCPT ); Sat, 2 Sep 2023 08:53:51 -0400 Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.20]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5C26EB6; Sat, 2 Sep 2023 05:53:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1693659227; x=1725195227; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=1wLtdrMyvQhGd5esVTekctGGS1exxnCKxDGiyUI+Auw=; b=HJxs2ZFjVAjfp+PlFUyRyf0fLVSVyr4ziHqFEoTVuvcZ3s27mGK0fKnT sTItVasvGZEOHrdIrXm319UVwpGmT4Fyln9PwF4s+tkt5M/zIiEAqdPiL vq0vt6iHI9ij5Q6xImjvn6iBKh0bG55DwRQu8lwPqfvvQ4cNPvJ+W8Qk/ g8LjePekn37+AEBEQ2r+yrONBexEg/32lWqAkzOD+dpRmsr5d4wZQf2rb 7kgQW1XTq8IyIbUg015VzDHV4W1LgRTF1hkDNWCQmBCvr/ffmvkzOMOA+ WvCW/9/iNioehaS3U3ZpYPCb4yb6OUkMuQ8e01rNk46yjTEAS09Ao5r0f A==; X-IronPort-AV: E=McAfee;i="6600,9927,10821"; a="366599376" X-IronPort-AV: E=Sophos;i="6.02,222,1688454000"; d="scan'208";a="366599376" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Sep 2023 05:53:46 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10821"; a="855022207" X-IronPort-AV: E=Sophos;i="6.02,222,1688454000"; d="scan'208";a="855022207" Received: from haibo-optiplex-7090.sh.intel.com ([10.239.159.132]) by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Sep 2023 05:53:38 -0700 From: Haibo Xu Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com, Paul Walmsley , Palmer Dabbelt , Albert Ou , Paolo Bonzini , Shuah Khan , Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Zenghui Yu , Anup Patel , Atish Patra , Guo Ren , Daniel Henrique Barboza , Greentime Hu , Sean Christopherson , Ricardo Koller , Vishal Annapurve , 
Aaron Lewis , David Matlack , Mingwei Zhang , Vitaly Kuznetsov , Ackerley Tng , Lei Wang , Vipin Sharma , Like Xu , Peter Gonda , Maxim Levitsky , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Thomas Huth , David Woodhouse , Michal Luczaj , linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvm-riscv@lists.infradead.org Subject: [PATCH v2 6/8] KVM: riscv: selftests: Add guest helper to get vcpu id Date: Sat, 2 Sep 2023 20:59:28 +0800 Message-Id: <23d13f60b5a2fd31b87ae78458507f46442fac3a.1693659382.git.haibo1.xu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add guest_get_vcpuid() helper to simplify accessing to per-cpu private data. The sscratch CSR was used to store the vcpu id. Signed-off-by: Haibo Xu Reviewed-by: Andrew Jones --- tools/testing/selftests/kvm/include/aarch64/processor.h | 4 ---- tools/testing/selftests/kvm/include/kvm_util_base.h | 2 ++ tools/testing/selftests/kvm/lib/riscv/processor.c | 8 ++++++++ 3 files changed, 10 insertions(+), 4 deletions(-) diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h index 69e7b08d3f99..f41fcd63624f 100644 --- a/tools/testing/selftests/kvm/include/aarch64/processor.h +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h @@ -219,8 +219,4 @@ void smccc_smc(uint32_t function_id, uint64_t arg0, uint64_t arg1, uint64_t arg2, uint64_t arg3, uint64_t arg4, uint64_t arg5, uint64_t arg6, struct arm_smccc_res *res); - - -uint32_t guest_get_vcpuid(void); - #endif /* SELFTEST_KVM_PROCESSOR_H */ diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 135ae2eb5249..666438113d22 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -939,4 +939,6 @@ struct ex_regs; typedef void(*exception_handler_fn)(struct ex_regs *); void vm_install_exception_handler(struct kvm_vm *vm, int vector, exception_handler_fn handler); +uint32_t guest_get_vcpuid(void); + #endif /* SELFTEST_KVM_UTIL_BASE_H */ diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c index efd9ac4b0198..39a1e9902dec 100644 --- a/tools/testing/selftests/kvm/lib/riscv/processor.c +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c @@ -316,6 +316,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, vcpu_set_reg(vcpu, RISCV_CORE_REG(regs.sp), stack_vaddr + stack_size); vcpu_set_reg(vcpu, RISCV_CORE_REG(regs.pc), (unsigned long)guest_code); + /* Setup sscratch for guest_get_vcpuid() */ + vcpu_set_reg(vcpu, RISCV_CSR_REG(sscratch), vcpu_id); + /* Setup default exception vector of guest */ vcpu_set_reg(vcpu, RISCV_CSR_REG(stvec), (unsigned long)guest_unexp_trap); @@ -436,3 +439,8 @@ void vm_install_interrupt_handler(struct kvm_vm *vm, exception_handler_fn handle handlers->exception_handlers[1][0] = handler; } + +uint32_t guest_get_vcpuid(void) +{ + return csr_read(CSR_SSCRATCH); +} From patchwork Sat Sep 2 12:59:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Haibo Xu X-Patchwork-Id: 719819 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
From: Haibo Xu
Subject: [PATCH v2 8/8] KVM: riscv: selftests: Add sstc timer test
Date: Sat, 2 Sep 2023 20:59:30 +0800

Add a KVM selftest to validate the Sstc timer functionality. The test was ported from the arm64 arch_timer test.
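At the guest level the test reduces to programming stimecmp and then waiting for the S-mode timer interrupt to fire within the configured period. Below is a condensed sketch of that arm-the-timer step, using the CSR helpers this series relies on (see the riscv arch_timer.h hunk in the diff for the real code; the function name here is only illustrative):

        /* Condensed from timer_set_next_cval_ms()/timer_irq_enable() below. */
        static void guest_arm_sstc_timer(uint32_t period_ms)
        {
                uint64_t now = csr_read(CSR_TIME);      /* current counter value */

                csr_write(CSR_STIMECMP, now + msec_to_cycles(period_ms)); /* next deadline */
                csr_set(CSR_SIE, IE_TIE);               /* unmask S-mode timer IRQ */
        }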
Signed-off-by: Haibo Xu --- tools/testing/selftests/kvm/Makefile | 1 + tools/testing/selftests/kvm/arch_timer.c | 9 ++ .../selftests/kvm/include/riscv/arch_timer.h | 80 +++++++++++ .../selftests/kvm/include/riscv/processor.h | 10 ++ .../selftests/kvm/include/timer_test.h | 6 + .../testing/selftests/kvm/riscv/arch_timer.c | 130 ++++++++++++++++++ 6 files changed, 236 insertions(+) create mode 100644 tools/testing/selftests/kvm/include/riscv/arch_timer.h create mode 100644 tools/testing/selftests/kvm/riscv/arch_timer.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 01638027d059..7897d8051d8c 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -179,6 +179,7 @@ TEST_GEN_PROGS_s390x += rseq_test TEST_GEN_PROGS_s390x += set_memory_region_test TEST_GEN_PROGS_s390x += kvm_binary_stats_test +TEST_GEN_PROGS_riscv += arch_timer TEST_GEN_PROGS_riscv += demand_paging_test TEST_GEN_PROGS_riscv += dirty_log_test TEST_GEN_PROGS_riscv += get-reg-list diff --git a/tools/testing/selftests/kvm/arch_timer.c b/tools/testing/selftests/kvm/arch_timer.c index 529024f58c98..691bd454e362 100644 --- a/tools/testing/selftests/kvm/arch_timer.c +++ b/tools/testing/selftests/kvm/arch_timer.c @@ -66,8 +66,13 @@ static void *test_vcpu_run(void *arg) break; case UCALL_ABORT: sync_global_from_guest(vm, *shared_data); +#ifdef __aarch64__ fprintf(stderr, "Guest assert failed, vcpu %u; stage; %u; iter: %u\n", vcpu_idx, shared_data->guest_stage, shared_data->nr_iter); +#else + fprintf(stderr, "Guest assert failed, vcpu %u; iter: %u\n", + vcpu_idx, shared_data->nr_iter); +#endif REPORT_GUEST_ASSERT(uc); break; default: @@ -190,7 +195,9 @@ static void test_print_help(char *name) TIMER_TEST_PERIOD_MS_DEF); pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. 
0 to turn off (default: %u)\n", TIMER_TEST_MIGRATION_FREQ_MS); +#ifdef __aarch64__ pr_info("\t-o: Counter offset (in counter cycles, default: 0)\n"); +#endif pr_info("\t-h: print this help screen\n"); } @@ -217,10 +224,12 @@ static bool parse_args(int argc, char *argv[]) case 'm': test_args.migration_freq_ms = atoi_non_negative("Frequency", optarg); break; +#ifdef __aarch64__ case 'o': test_args.offset.counter_offset = strtol(optarg, NULL, 0); test_args.offset.reserved = 0; break; +#endif case 'h': default: goto err; diff --git a/tools/testing/selftests/kvm/include/riscv/arch_timer.h b/tools/testing/selftests/kvm/include/riscv/arch_timer.h new file mode 100644 index 000000000000..897edcef8fc2 --- /dev/null +++ b/tools/testing/selftests/kvm/include/riscv/arch_timer.h @@ -0,0 +1,80 @@ +// SPDX-License-Identifier: GPL-2.0 */ +/* + * RISC-V Arch Timer(sstc) specific interface + * + * Copyright (c) 2023 Intel Corporation + */ + +#ifndef SELFTEST_KVM_ARCH_TIMER_H +#define SELFTEST_KVM_ARCH_TIMER_H + +#include + +static unsigned long timer_freq; + +#define msec_to_cycles(msec) \ + ((timer_freq) * (uint64_t)(msec) / 1000) + +#define usec_to_cycles(usec) \ + ((timer_freq) * (uint64_t)(usec) / 1000000) + +#define cycles_to_usec(cycles) \ + ((uint64_t)(cycles) * 1000000 / (timer_freq)) + +static inline uint64_t timer_get_cntct(void) +{ + return csr_read(CSR_TIME); +} + +static inline void timer_set_cval(uint64_t cval) +{ + csr_write(CSR_STIMECMP, cval); +} + +static inline uint64_t timer_get_cval(void) +{ + return csr_read(CSR_STIMECMP); +} + +static inline void timer_irq_enable(void) +{ + csr_set(CSR_SIE, IE_TIE); +} + +static inline void timer_irq_disable(void) +{ + csr_clear(CSR_SIE, IE_TIE); +} + +static inline void timer_set_next_cval_ms(uint32_t msec) +{ + uint64_t now_ct = timer_get_cntct(); + uint64_t next_ct = now_ct + msec_to_cycles(msec); + + timer_set_cval(next_ct); +} + +static inline void cpu_relax(void) +{ +#ifdef __riscv_zihintpause + asm volatile("pause" ::: "memory"); +#else + /* Encoding of the pause instruction */ + asm volatile(".4byte 0x100000F" ::: "memory"); +#endif +} + +static inline void __delay(uint64_t cycles) +{ + uint64_t start = timer_get_cntct(); + + while ((timer_get_cntct() - start) < cycles) + cpu_relax(); +} + +static inline void udelay(unsigned long usec) +{ + __delay(usec_to_cycles(usec)); +} + +#endif /* SELFTEST_KVM_ARCH_TIMER_H */ diff --git a/tools/testing/selftests/kvm/include/riscv/processor.h b/tools/testing/selftests/kvm/include/riscv/processor.h index 6087c8fc263a..c69f36302d41 100644 --- a/tools/testing/selftests/kvm/include/riscv/processor.h +++ b/tools/testing/selftests/kvm/include/riscv/processor.h @@ -161,4 +161,14 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0, unsigned long arg3, unsigned long arg4, unsigned long arg5); +static inline void local_irq_enable(void) +{ + csr_set(CSR_SSTATUS, SR_SIE); +} + +static inline void local_irq_disable(void) +{ + csr_clear(CSR_SSTATUS, SR_SIE); +} + #endif /* SELFTEST_KVM_PROCESSOR_H */ diff --git a/tools/testing/selftests/kvm/include/timer_test.h b/tools/testing/selftests/kvm/include/timer_test.h index 109e4d635627..091c05a14c93 100644 --- a/tools/testing/selftests/kvm/include/timer_test.h +++ b/tools/testing/selftests/kvm/include/timer_test.h @@ -18,6 +18,7 @@ #define msecs_to_usecs(msec) ((msec) * 1000LL) +#ifdef __aarch64__ #define GICD_BASE_GPA 0x8000000ULL #define GICR_BASE_GPA 0x80A0000ULL @@ -28,6 +29,7 @@ enum guest_stage { GUEST_STAGE_PTIMER_TVAL, GUEST_STAGE_MAX, }; +#endif /* Timer 
test cmdline parameters */ struct test_args @@ -36,13 +38,17 @@ struct test_args int nr_iter; int timer_period_ms; int migration_freq_ms; +#ifdef __aarch64__ struct kvm_arm_counter_offset offset; +#endif }; /* Shared variables between host and guest */ struct test_vcpu_shared_data { int nr_iter; +#ifdef __aarch64__ enum guest_stage guest_stage; +#endif uint64_t xcnt; }; diff --git a/tools/testing/selftests/kvm/riscv/arch_timer.c b/tools/testing/selftests/kvm/riscv/arch_timer.c new file mode 100644 index 000000000000..c50a33c1e4f9 --- /dev/null +++ b/tools/testing/selftests/kvm/riscv/arch_timer.c @@ -0,0 +1,130 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * arch_timer.c - Tests the riscv64 sstc timer IRQ functionality + * + * The test validates the sstc timer IRQs using vstimecmp registers. + * It's ported from the aarch64 arch_timer test. + * + * Copyright (c) 2023, Intel Corporation. + */ + +#define _GNU_SOURCE + +#include "arch_timer.h" +#include "kvm_util.h" +#include "processor.h" +#include "timer_test.h" + +extern struct test_args test_args; +extern struct kvm_vcpu *vcpus[]; +extern struct test_vcpu_shared_data vcpu_shared_data[]; + +static int timer_irq = IRQ_S_TIMER; + +static void +guest_configure_timer_action(struct test_vcpu_shared_data *shared_data) +{ + timer_set_next_cval_ms(test_args.timer_period_ms); + shared_data->xcnt = timer_get_cntct(); + timer_irq_enable(); +} + +static void guest_validate_irq(unsigned int intid, + struct test_vcpu_shared_data *shared_data) +{ + uint64_t xcnt = 0, xcnt_diff_us, cval = 0; + + timer_irq_disable(); + xcnt = timer_get_cntct(); + cval = timer_get_cval(); + + xcnt_diff_us = cycles_to_usec(xcnt - shared_data->xcnt); + + /* Make sure we are dealing with the correct timer IRQ */ + GUEST_ASSERT_EQ(intid, timer_irq); + + __GUEST_ASSERT(xcnt >= cval, + "xcnt = 0x%llx, cval = 0x%llx, xcnt_diff_us = 0x%llx", + xcnt, cval, xcnt_diff_us); + + WRITE_ONCE(shared_data->nr_iter, shared_data->nr_iter + 1); +} + +static void guest_irq_handler(struct ex_regs *regs) +{ + unsigned int intid = regs->cause & ~CAUSE_IRQ_FLAG; + uint32_t cpu = guest_get_vcpuid(); + struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[cpu]; + + guest_validate_irq(intid, shared_data); +} + +static void guest_run(struct test_vcpu_shared_data *shared_data) +{ + uint32_t irq_iter, config_iter; + + shared_data->nr_iter = 0; + + for (config_iter = 0; config_iter < test_args.nr_iter; config_iter++) { + /* Setup the next interrupt */ + guest_configure_timer_action(shared_data); + + /* Setup a timeout for the interrupt to arrive */ + udelay(msecs_to_usecs(test_args.timer_period_ms) + + TIMER_TEST_ERR_MARGIN_US); + + irq_iter = READ_ONCE(shared_data->nr_iter); + GUEST_ASSERT_EQ(config_iter + 1, irq_iter); + } +} + +static void guest_code(void) +{ + uint32_t cpu = guest_get_vcpuid(); + struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[cpu]; + + local_irq_disable(); + timer_irq_disable(); + local_irq_enable(); + + guest_run(shared_data); + + GUEST_DONE(); +} + +static void test_init_timer_freq(struct kvm_vm *vm) +{ + /* Timer frequency should be same for all the vCPUs, so query only vCPU-0 */ + vcpu_get_reg(vcpus[0], RISCV_TIMER_REG(frequency), &timer_freq); + sync_global_to_guest(vm, timer_freq); + + pr_debug("timer_freq: %lu\n", timer_freq); +} + +struct kvm_vm *test_vm_create(void) +{ + struct kvm_vm *vm; + int nr_vcpus = test_args.nr_vcpus; + + vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus); + __TEST_REQUIRE(vcpu_has_ext(vcpus[0], KVM_RISCV_ISA_EXT_SSTC), 
+ "SSTC not available, skipping test\n"); + + vm_init_vector_tables(vm); + vm_install_interrupt_handler(vm, guest_irq_handler); + + for (int i = 0; i < nr_vcpus; i++) + vcpu_init_vector_tables(vcpus[i]); + + test_init_timer_freq(vm); + + /* Make all the test's cmdline args visible to the guest */ + sync_global_to_guest(vm, test_args); + + return vm; +} + +void test_vm_cleanup(struct kvm_vm *vm) +{ + kvm_vm_free(vm); +}