[0/6] KVM: selftests: Perf test cleanups and memslot modification test

Message ID 20210112214253.463999-1-bgardon@google.com

Message

Ben Gardon Jan. 12, 2021, 9:42 p.m. UTC
This series contains a few cleanups that didn't make it into previous
series, including some cosmetic changes and small bug fixes. The series
also lays the groundwork for a memslot modification test which stresses
the memslot update and page fault code paths in an attempt to expose races.

Tested: dirty_log_perf_test, memslot_modification_stress_test, and
	demand_paging_test were run, with all the patches in this series
	applied, on an Intel Skylake machine.

	echo Y > /sys/module/kvm/parameters/tdp_mmu; \
	./memslot_modification_stress_test -i 1000 -v 64 -b 1G; \
	./memslot_modification_stress_test -i 1000 -v 64 -b 64M -o; \
	./dirty_log_perf_test -v 64 -b 1G; \
	./dirty_log_perf_test -v 64 -b 64M -o; \
	./demand_paging_test -v 64 -b 1G; \
	./demand_paging_test -v 64 -b 64M -o; \
	echo N > /sys/module/kvm/parameters/tdp_mmu; \
	./memslot_modification_stress_test -i 1000 -v 64 -b 1G; \
	./memslot_modification_stress_test -i 1000 -v 64 -b 64M -o; \
	./dirty_log_perf_test -v 64 -b 1G; \
	./dirty_log_perf_test -v 64 -b 64M -o; \
	./demand_paging_test -v 64 -b 1G; \
	./demand_paging_test -v 64 -b 64M -o

	The tests behaved as expected, and the series fixed the problem of
	the population stage being skipped in dirty_log_perf_test. This can
	be seen in the output: the population stage now takes about the time
	dirty pass 1 previously took, and dirty pass 1 falls closer to the
	times for the other passes.

Note that when running these tests, the -o option causes the test to take
much longer, as the work each vCPU must do increases in proportion to the
number of vCPUs.

You can view this series in Gerrit at:
https://linux-review.googlesource.com/c/linux/kernel/git/torvalds/linux/+/7216

Ben Gardon (6):
  KVM: selftests: Rename timespec_diff_now to timespec_elapsed
  KVM: selftests: Avoid flooding debug log while populating memory
  KVM: selftests: Convert iterations to int in dirty_log_perf_test
  KVM: selftests: Fix population stage in dirty_log_perf_test
  KVM: selftests: Add option to overlap vCPU memory access
  KVM: selftests: Add memslot modification stress test

 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/demand_paging_test.c        |  40 +++-
 .../selftests/kvm/dirty_log_perf_test.c       |  72 +++---
 .../selftests/kvm/include/perf_test_util.h    |   4 +-
 .../testing/selftests/kvm/include/test_util.h |   2 +-
 .../selftests/kvm/lib/perf_test_util.c        |  25 ++-
 tools/testing/selftests/kvm/lib/test_util.c   |   2 +-
 .../kvm/memslot_modification_stress_test.c    | 211 ++++++++++++++++++
 9 files changed, 307 insertions(+), 51 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/memslot_modification_stress_test.c

Comments

Thomas Huth Jan. 13, 2021, 7:37 a.m. UTC | #1
On 12/01/2021 22.42, Ben Gardon wrote:
> Peter Xu pointed out that a log message printed while waiting for the
> memory population phase of the dirty_log_perf_test will flood the debug
> logs as there is no delay after printing the message. Since the message
> does not provide much value anyway, remove it.
>
> Reviewed-by: Jacob Xu <jacobhxu@google.com>
>
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  tools/testing/selftests/kvm/dirty_log_perf_test.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> index 16efe6589b43..15a9c45bdb5f 100644
> --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> @@ -146,8 +146,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>  	/* Allow the vCPU to populate memory */
>  	pr_debug("Starting iteration %lu - Populating\n", iteration);
>  	while (READ_ONCE(vcpu_last_completed_iteration[vcpu_id]) != iteration)
> -		pr_debug("Waiting for vcpu_last_completed_iteration == %lu\n",
> -			iteration);
> +		;
>
>  	ts_diff = timespec_elapsed(start);
>  	pr_info("Populate memory time: %ld.%.9lds\n",
> @@ -171,9 +170,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>
>  		pr_debug("Starting iteration %lu\n", iteration);
>  		for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
> -			while (READ_ONCE(vcpu_last_completed_iteration[vcpu_id]) != iteration)
> -				pr_debug("Waiting for vCPU %d vcpu_last_completed_iteration == %lu\n",
> -					 vcpu_id, iteration);
> +			while (READ_ONCE(vcpu_last_completed_iteration[vcpu_id])
> +			       != iteration)
> +				;
>  		}
>
>  		ts_diff = timespec_elapsed(start);
>

Reviewed-by: Thomas Huth <thuth@redhat.com>
Paolo Bonzini Jan. 18, 2021, 6:18 p.m. UTC | #2
On 12/01/21 22:42, Ben Gardon wrote:
> This series contains a few cleanups that didn't make it into previous
> series, including some cosmetic changes and small bug fixes. The series
> also lays the groundwork for a memslot modification test which stresses
> the memslot update and page fault code paths in an attempt to expose races.
>
> [...]

Queued, thanks.

Paolo