From patchwork Fri Jun 2 16:08:59 2023
X-Patchwork-Submitter: Vipin Sharma <vipinsh@google.com>
X-Patchwork-Id: 688590
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 2 Jun 2023 09:08:59 -0700
Subject: [PATCH v2 01/16] KVM: selftests: Clear dirty logs in user-defined chunk sizes in dirty_log_perf_test
Message-ID: <20230602160914.4011728-2-vipinsh@google.com>
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com,
    tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org,
    paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
    seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com,
    ricarkol@google.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org

In dirty_log_perf_test, provide a new option, -k, to specify the size of
the chunks and clear dirty memory in chunks in each iteration. If the
option is not provided, fall back to the old way of clearing the whole
memslot in one call per iteration.

In production environments, the whole memslot is rarely cleared in a
single call; instead, the clearing operation is split across multiple
calls to reduce the time between clearing dirty logs and sending memory
to a remote host. This change mimics that production use case and makes
it possible to collect performance numbers for it.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 .../selftests/kvm/dirty_log_perf_test.c      | 42 +++++++++++++++----
 1 file changed, 34 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index e9d6d1aecf89..119ddfc7306e 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -134,6 +134,7 @@ struct test_params {
         uint32_t write_percent;
         uint32_t random_seed;
         bool random_access;
+        uint64_t clear_chunk_size;
 };
 
 static void toggle_dirty_logging(struct kvm_vm *vm, int slots, bool enable)
@@ -169,16 +170,28 @@ static void get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots
         }
 }
 
-static void clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
-                            int slots, uint64_t pages_per_slot)
+static void clear_dirty_log_in_chunks(struct kvm_vm *vm,
+                                      unsigned long *bitmaps[], int slots,
+                                      uint64_t pages_per_slot,
+                                      uint64_t pages_per_clear)
 {
-        int i;
+        uint64_t from, clear_pages_count;
+        int i, slot;
 
         for (i = 0; i < slots; i++) {
-                int slot = MEMSTRESS_MEM_SLOT_INDEX + i;
-
-                kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], 0, pages_per_slot);
+                slot = MEMSTRESS_MEM_SLOT_INDEX + i;
+                from = 0;
+                clear_pages_count = pages_per_clear;
+
+                while (from < pages_per_slot) {
+                        if (from + clear_pages_count > pages_per_slot)
+                                clear_pages_count = pages_per_slot - from;
+                        kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], from,
+                                               clear_pages_count);
+                        from += clear_pages_count;
+                }
         }
+
 }
 
 static unsigned long **alloc_bitmaps(int slots, uint64_t pages_per_slot)
@@ -215,6 +228,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
         uint64_t guest_num_pages;
         uint64_t host_num_pages;
         uint64_t pages_per_slot;
+        uint64_t pages_per_clear;
         struct timespec start;
         struct timespec ts_diff;
         struct timespec get_dirty_log_total = (struct timespec){0};
@@ -235,6 +249,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
         guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
         host_num_pages = vm_num_host_pages(mode, guest_num_pages);
         pages_per_slot = host_num_pages / p->slots;
+        pages_per_clear = p->clear_chunk_size / getpagesize();
 
         bitmaps = alloc_bitmaps(p->slots, pages_per_slot);
 
@@ -315,7 +330,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
                 if (dirty_log_manual_caps) {
                         clock_gettime(CLOCK_MONOTONIC, &start);
-                        clear_dirty_log(vm, bitmaps, p->slots, pages_per_slot);
+                        clear_dirty_log_in_chunks(vm, bitmaps, p->slots,
+                                                  pages_per_slot,
+                                                  pages_per_clear);
                         ts_diff = timespec_elapsed(start);
                         clear_dirty_log_total = timespec_add(clear_dirty_log_total,
                                                              ts_diff);
@@ -413,6 +430,11 @@ static void help(char *name)
                " To leave the application task unpinned, drop the final entry:\n\n"
                " ./dirty_log_perf_test -v 3 -c 22,23,24\n\n"
                " (default: no pinning)\n");
+        printf(" -k: Specify the chunk size in which dirty memory gets cleared\n"
+               "     in memslots in each iteration. If the size is bigger than\n"
+               "     the memslot size then whole memslot is cleared in one call.\n"
+               "     Size must be aligned to the host page size. e.g. 10M or 3G\n"
+               "     (default: UINT64_MAX, clears whole memslot in one call)\n");
         puts("");
         exit(0);
 }
@@ -428,6 +450,7 @@ int main(int argc, char *argv[])
                 .slots = 1,
                 .random_seed = 1,
                 .write_percent = 100,
+                .clear_chunk_size = UINT64_MAX,
         };
         int opt;
 
@@ -438,7 +461,7 @@ int main(int argc, char *argv[])
 
         guest_modes_append_default();
 
-        while ((opt = getopt(argc, argv, "ab:c:eghi:m:nop:r:s:v:x:w:")) != -1) {
+        while ((opt = getopt(argc, argv, "ab:c:eghi:k:m:nop:r:s:v:x:w:")) != -1) {
                 switch (opt) {
                 case 'a':
                         p.random_access = true;
                         break;
@@ -462,6 +485,9 @@ int main(int argc, char *argv[])
                 case 'i':
                         p.iterations = atoi_positive("Number of iterations", optarg);
                         break;
+                case 'k':
+                        p.clear_chunk_size = parse_size(optarg);
+                        break;
                 case 'm':
                         guest_modes_cmdline(optarg);
                         break;
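
The clamp-at-the-tail arithmetic above is easy to model in isolation.
Below is a minimal, self-contained sketch of the chunk walk (the sizes
are made up for illustration; only the loop shape mirrors the patch):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t pages_per_slot = 10;   /* hypothetical slot size, in pages */
        uint64_t pages_per_clear = 4;   /* hypothetical -k chunk, in pages */
        uint64_t from = 0, count = pages_per_clear;

        while (from < pages_per_slot) {
                /* Clamp the final chunk at the end of the slot. */
                if (from + count > pages_per_slot)
                        count = pages_per_slot - from;
                printf("clear pages [%" PRIu64 ", %" PRIu64 ")\n",
                       from, from + count);
                from += count;
        }
        return 0;       /* prints [0, 4), [4, 8), [8, 10) */
}

With the real test, a hypothetical run such as
"./dirty_log_perf_test -v 8 -b 1G -k 128M" would clear each memslot's
dirty log in 128M chunks rather than in one call per slot.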
From patchwork Fri Jun 2 16:09:01 2023
X-Patchwork-Submitter: Vipin Sharma <vipinsh@google.com>
X-Patchwork-Id: 688589
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 2 Jun 2023 09:09:01 -0700
Subject: [PATCH v2 03/16] KVM: selftests: Pass the count of read and write accesses from guest to host
Message-ID: <20230602160914.4011728-4-vipinsh@google.com>
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>

Pass the number of read and write accesses done in the memstress guest
code to userspace. These counts provide a way to measure vCPU
performance during memstress and dirty-logging-related tests. For
example, in dirty_log_perf_test they can be used to measure how much
progress vCPUs make while the VMM is getting and clearing dirty logs.

In dirty_log_perf_test, each vCPU currently runs once and then waits
until the iteration value is incremented by the main thread; therefore,
these access counts will not provide much useful information beyond the
read vs. write ratio. However, future commits will change
dirty_log_perf_test to allow vCPUs to execute independently of userspace
iterations. This mimics real-world workloads where the guest keeps
executing while the VMM collects and clears dirty logs separately. With
the read and write accesses known for each vCPU, the impact of the get
and clear dirty log APIs can be quantified.

Note that access counts are not a 100% reliable measure of vCPU
performance. A few things can affect vCPU progress:
1. vCPUs being scheduled less by the host.
2. Userspace operations running longer, which ends up giving vCPUs more
   time to execute.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 tools/testing/selftests/kvm/lib/memstress.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index 5f1d3173c238..ac53cc6e36d7 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -49,6 +49,8 @@ void memstress_guest_code(uint32_t vcpu_idx)
         struct memstress_args *args = &memstress_args;
         struct memstress_vcpu_args *vcpu_args = &args->vcpu_args[vcpu_idx];
         struct guest_random_state rand_state;
+        uint64_t write_access;
+        uint64_t read_access;
         uint64_t gva;
         uint64_t pages;
         uint64_t addr;
@@ -64,6 +66,8 @@ void memstress_guest_code(uint32_t vcpu_idx)
         GUEST_ASSERT(vcpu_args->vcpu_idx == vcpu_idx);
 
         while (true) {
+                write_access = 0;
+                read_access = 0;
                 for (i = 0; i < pages; i++) {
                         if (args->random_access)
                                 page = guest_random_u32(&rand_state) % pages;
@@ -72,13 +76,16 @@ void memstress_guest_code(uint32_t vcpu_idx)
 
                         addr = gva + (page * args->guest_page_size);
 
-                        if (guest_random_u32(&rand_state) % 100 < args->write_percent)
+                        if (guest_random_u32(&rand_state) % 100 < args->write_percent) {
                                 *(uint64_t *)addr = 0x0123456789ABCDEF;
-                        else
+                                write_access++;
+                        } else {
                                 READ_ONCE(*(uint64_t *)addr);
+                                read_access++;
+                        }
                 }
 
-                GUEST_SYNC(1);
+                GUEST_SYNC_ARGS(1, read_access, write_access, 0, 0);
         }
 }
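
The host-side consumer of these counts is not part of this patch. As a
rough sketch of how a test could pick them up (get_ucall() and UCALL_SYNC
are the selftest library's existing facilities; the helper name and the
two counters here are hypothetical):

#include <stdatomic.h>

#include "kvm_util.h"   /* struct ucall, get_ucall(), UCALL_SYNC */

static atomic_ullong total_reads;
static atomic_ullong total_writes;

static void collect_access_counts(struct kvm_vcpu *vcpu)
{
        struct ucall uc;

        /*
         * GUEST_SYNC_ARGS(1, read_access, write_access, 0, 0) places the
         * sync stage in args[1], so the read and write counts arrive in
         * args[2] and args[3] respectively.
         */
        if (get_ucall(vcpu, &uc) == UCALL_SYNC) {
                atomic_fetch_add(&total_reads, uc.args[2]);
                atomic_fetch_add(&total_writes, uc.args[3]);
        }
}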
From patchwork Fri Jun 2 16:09:03 2023
X-Patchwork-Submitter: Vipin Sharma <vipinsh@google.com>
X-Patchwork-Id: 688588
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 2 Jun 2023 09:09:03 -0700
Subject: [PATCH v2 05/16] KVM: selftests: Allow independent execution of vCPUs in dirty_log_perf_test
Message-ID: <20230602160914.4011728-6-vipinsh@google.com>
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>

Give users a command line option (-j) to execute vCPUs independently of
dirty log iterations after initialization is complete. This change makes
dirty_log_perf_test behave like real-world workflows where guest vCPUs
keep executing while the VMM collects and clears dirty logs. The total
pages touched during the execution of the test give a good estimate of
how vCPUs are performing while dirty logging is enabled.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 .../selftests/kvm/dirty_log_perf_test.c      | 64 +++++++++++++------
 1 file changed, 44 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 14b012a0dcb1..fbf973d6cc66 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -69,6 +69,7 @@ static int iteration;
 static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
 static atomic_ullong total_reads;
 static atomic_ullong total_writes;
+static bool lockstep_iterations;
 
 static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
 {
@@ -83,12 +84,16 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
         struct timespec total = (struct timespec){0};
         struct timespec avg;
         struct ucall uc = {};
+        int current_iteration = -1;
         int ret;
 
         run = vcpu->run;
 
         while (!READ_ONCE(host_quit)) {
-                int current_iteration = READ_ONCE(iteration);
+                if (lockstep_iterations)
+                        current_iteration = READ_ONCE(iteration);
+                else
+                        current_iteration++;
 
                 clock_gettime(CLOCK_MONOTONIC, &start);
                 ret = _vcpu_run(vcpu);
@@ -118,13 +123,19 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
                                     ts_diff.tv_nsec);
                 }
 
-                /*
-                 * Keep running the guest while dirty logging is being disabled
-                 * (iteration is negative) so that vCPUs are accessing memory
-                 * for the entire duration of zapping collapsible SPTEs.
-                 */
-                while (current_iteration == READ_ONCE(iteration) &&
-                       READ_ONCE(iteration) >= 0 && !READ_ONCE(host_quit)) {}
+                if (lockstep_iterations) {
+                        /*
+                         * Keep running the guest while dirty logging is being disabled
+                         * (iteration is negative) so that vCPUs are accessing memory
+                         * for the entire duration of zapping collapsible SPTEs.
+                         */
+                        while (current_iteration == READ_ONCE(iteration) &&
+                               READ_ONCE(iteration) >= 0 && !READ_ONCE(host_quit))
+                                ;
+                } else {
+                        while (!READ_ONCE(iteration) && !READ_ONCE(host_quit))
+                                ;
+                }
         }
 
         avg = timespec_div(total, vcpu_last_completed_iteration[vcpu_idx]);
@@ -332,18 +343,20 @@ static void run_test(enum vm_guest_mode mode, void *arg)
                 clock_gettime(CLOCK_MONOTONIC, &start);
                 iteration++;
 
-                pr_debug("Starting iteration %d\n", iteration);
-                for (i = 0; i < nr_vcpus; i++) {
-                        while (READ_ONCE(vcpu_last_completed_iteration[i])
-                               != iteration)
-                                ;
+                if (lockstep_iterations) {
+                        pr_debug("Starting iteration %d\n", iteration);
+                        for (i = 0; i < nr_vcpus; i++) {
+                                while (READ_ONCE(vcpu_last_completed_iteration[i])
+                                       != iteration)
+                                        ;
+                        }
+
+                        ts_diff = timespec_elapsed(start);
+                        vcpu_dirty_total = timespec_add(vcpu_dirty_total, ts_diff);
+                        pr_info("Iteration %d dirty memory time: %ld.%.9lds\n",
+                                iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
                 }
 
-                ts_diff = timespec_elapsed(start);
-                vcpu_dirty_total = timespec_add(vcpu_dirty_total, ts_diff);
-                pr_info("Iteration %d dirty memory time: %ld.%.9lds\n",
-                        iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
-
                 clock_gettime(CLOCK_MONOTONIC, &start);
                 get_dirty_log(vm, bitmaps, p->slots);
                 ts_diff = timespec_elapsed(start);
@@ -365,6 +378,10 @@ static void run_test(enum vm_guest_mode mode, void *arg)
                 }
         }
 
+        /* Block further vCPUs execution */
+        if (!lockstep_iterations)
+                WRITE_ONCE(iteration, 0);
+
         /*
          * Run vCPUs while dirty logging is being disabled to stress disabling
          * in terms of both performance and correctness.  Opt-in via command
@@ -458,6 +475,10 @@ static void help(char *name)
                " To leave the application task unpinned, drop the final entry:\n\n"
                " ./dirty_log_perf_test -v 3 -c 22,23,24\n\n"
                " (default: no pinning)\n");
+        printf(" -j: Execute vCPUs independent of dirty log iterations\n"
+               "     Independent vCPUs execution will allow them to continuously\n"
+               "     dirty memory while main thread is collecting and clearing\n"
+               "     dirty logs in each iteration.\n");
         printf(" -k: Specify the chunk size in which dirty memory gets cleared\n"
                "     in memslots in each iteration. If the size is bigger than\n"
                "     the memslot size then whole memslot is cleared in one call.\n"
@@ -492,10 +513,10 @@ int main(int argc, char *argv[])
                 kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2);
         dirty_log_manual_caps &= (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
                                   KVM_DIRTY_LOG_INITIALLY_SET);
-
+        lockstep_iterations = true;
         guest_modes_append_default();
 
-        while ((opt = getopt(argc, argv, "ab:c:eghi:k:l:m:nop:r:s:v:x:w:")) != -1) {
+        while ((opt = getopt(argc, argv, "ab:c:eghi:jk:l:m:nop:r:s:v:x:w:")) != -1) {
                 switch (opt) {
                 case 'a':
                         p.random_access = true;
                         break;
@@ -519,6 +540,9 @@ int main(int argc, char *argv[])
                 case 'i':
                         p.iterations = atoi_positive("Number of iterations", optarg);
                         break;
+                case 'j':
+                        lockstep_iterations = false;
+                        break;
                 case 'k':
                         p.clear_chunk_size = parse_size(optarg);
                         break;
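
As an illustration, combining -j with the -k option added earlier in the
series (the flag values below are hypothetical, not taken from the
series):

        ./dirty_log_perf_test -j -k 256M -v 8 -b 1G -i 10

would let 8 vCPUs dirty their 1G regions continuously while the main
thread runs 10 get/clear iterations, clearing dirty logs in 256M chunks;
the read and write counts from the earlier memstress change then show
how much guest progress was made in that time.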
From patchwork Fri Jun 2 16:09:04 2023
X-Patchwork-Submitter: Vipin Sharma <vipinsh@google.com>
X-Patchwork-Id: 688587
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 2 Jun 2023 09:09:04 -0700
Subject: [PATCH v2 06/16] KVM: arm64: Correct the kvm_pgtable_stage2_flush() documentation
Message-ID: <20230602160914.4011728-7-vipinsh@google.com>
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>

Remove the _range suffix from kvm_pgtable_stage2_flush_range, which is
used in the documentation of kvm_pgtable_stage2_flush(). There is no
function named kvm_pgtable_stage2_flush_range().

Fixes: 93c66b40d728 ("KVM: arm64: Add support for stage-2 cache flushing in generic page-table")
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 850d65f705fa..d542a671c564 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -657,9 +657,8 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr);
 
 /**
- * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point
- *                                    of Coherency for guest stage-2 address
- *                                    range.
+ * kvm_pgtable_stage2_flush() - Clean and invalidate data cache to Point of
+ *                              Coherency for guest stage-2 address range.
  * @pgt:        Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:       Intermediate physical address from which to flush.
  * @size:       Size of the range.
From patchwork Fri Jun 2 16:09:08 2023
X-Patchwork-Submitter: Vipin Sharma <vipinsh@google.com>
X-Patchwork-Id: 688586
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 2 Jun 2023 09:09:08 -0700
Subject: [PATCH v2 10/16] KVM: arm64: Return -ENOENT if PTE is not valid in stage2_attr_walker
Message-ID: <20230602160914.4011728-11-vipinsh@google.com>
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>

Return -ENOENT from stage2_attr_walker() for an invalid PTE. Continue
the page table walk if a walker callback returns -ENOENT outside of the
fault handler path; otherwise terminate the walk. In the fault handler
path, similar to -EAGAIN in user_mem_abort(), retry guest execution.

stage2_attr_walker() is used from multiple places, such as write
protection, MMU notifier callbacks, and relaxing permissions during vCPU
faults. This function returns -EAGAIN in two different cases:

1. When the PTE is not valid.
2. When cmpxchg() fails while setting the new SPTE.

For non-shared walkers, like write protection and the MMU notifiers,
both cases are simply ignored by the walker and it moves to the next
SPTE. Case 2 will never happen for non-shared walkers, as they don't use
cmpxchg() for updating SPTEs. For shared walkers, like the vCPU fault
handler, both cases result in walk termination.

In future commits, the clear-dirty-log walker will write protect SPTEs
under the MMU read lock and use a shared page table walker. This will
result in two types of shared page table walkers, the vCPU fault handler
and clear-dirty-log, competing with each other and sometimes causing
cmpxchg() failures. So, -EAGAIN in the clear-dirty-log walker due to a
cmpxchg() failure must be retried, whereas -EAGAIN in clear-dirty-log
due to an invalid SPTE must be ignored instead of terminating the walk,
as the current shared page table walker logic would do. This is not
needed for the vCPU fault handler, which also runs via the shared page
table walker and terminates the walk on getting -EAGAIN due to an
invalid SPTE.

To handle all these scenarios, stage2_attr_walker() must return
different error codes for invalid SPTEs and cmpxchg() failures. -ENOENT
is chosen for invalid SPTEs because it is not used by any other shared
walker. When clear-dirty-log is changed to use a shared page table
walker, it will then be possible to differentiate the retrying,
continuing, and terminating cases for the shared fault handler and the
shared clear-dirty-log walker.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h |  1 +
 arch/arm64/kvm/hyp/pgtable.c         | 19 ++++++++++++-------
 arch/arm64/kvm/mmu.c                 |  2 +-
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 957bc20dab00..23e7e7851f1d 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -720,6 +720,7 @@ int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
  * -------------|------------------|--------------
  * Non-Shared   | 0                | Continue
  * Non-Shared   | -EAGAIN          | Continue
+ * Non-Shared   | -ENOENT          | Continue
  * Non-Shared   | Any other        | Exit
  * -------------|------------------|--------------
  * Shared       | 0                | Continue
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index a3a0812b2301..bc8c5c4ac1cf 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -186,14 +186,19 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker,
         /*
          * Visitor callbacks return EAGAIN when the conditions that led to a
          * fault are no longer reflected in the page tables due to a race to
-         * update a PTE. In the context of a fault handler this is interpreted
-         * as a signal to retry guest execution.
+         * update a PTE.
          *
-         * Ignore the return code altogether for walkers outside a fault handler
-         * (e.g. write protecting a range of memory) and chug along with the
-         * page table walk.
+         * Callbacks can also return ENOENT when PTE which is visited is not
+         * valid.
+         *
+         * In the context of a fault handler interpret these as a signal
+         * to retry guest execution.
+         *
+         * Ignore these return codes altogether for walkers outside a fault
+         * handler (e.g. write protecting a range of memory) and chug along
+         * with the page table walk.
          */
-        if (r == -EAGAIN)
+        if (r == -EAGAIN || r == -ENOENT)
                 return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT);
 
         return !r;
@@ -1072,7 +1077,7 @@ static int stage2_attr_walker(const struct kvm_pgtable_visit_ctx *ctx,
         struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
 
         if (!kvm_pte_valid(ctx->old))
-                return -EAGAIN;
+                return -ENOENT;
 
         data->level = ctx->level;
         data->pte = pte;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1030921d89f8..356dc4131023 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1551,7 +1551,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
         read_unlock(&kvm->mmu_lock);
         kvm_set_pfn_accessed(pfn);
         kvm_release_pfn_clean(pfn);
-        return ret != -EAGAIN ? ret : 0;
+        return (ret != -EAGAIN && ret != -ENOENT) ? ret : 0;
 }
 
 /* Resolve the access fault by making the page young again. */
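
To make the resulting decision table concrete, here is a small
standalone model of kvm_pgtable_walk_continue()'s logic after this
change (illustrative only; the real function takes the walker and tests
its flags, and the fault-handler check is reworked later in this
series):

#include <errno.h>
#include <stdbool.h>

/* r is the visitor callback's return code. */
static bool walk_continues(int r, bool fault_handler)
{
        /*
         * -EAGAIN (raced PTE update) and -ENOENT (invalid PTE) are benign
         * for walkers outside the fault handler, which keep walking; the
         * fault handler instead bails out so guest execution is retried.
         */
        if (r == -EAGAIN || r == -ENOENT)
                return !fault_handler;

        return !r;      /* 0 continues the walk, other errors stop it */
}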
From patchwork Fri Jun 2 16:09:09 2023
X-Patchwork-Submitter: Vipin Sharma <vipinsh@google.com>
X-Patchwork-Id: 688585
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 2 Jun 2023 09:09:09 -0700
Subject: [PATCH v2 11/16] KVM: arm64: Use KVM_PGTABLE_WALK_SHARED flag instead of KVM_PGTABLE_WALK_HANDLE_FAULT
Message-ID: <20230602160914.4011728-12-vipinsh@google.com>
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>

Check against the shared page table walker flag instead of the fault
handler flag when determining whether a walk should continue. The vCPU
page fault handler uses the shared page walker, and there are no other
shared page walkers on Arm. This will change in a future commit, when
clear-dirty-log will use the shared page walker and the
continue/retry/terminate logic for a walk will differ between shared
page walkers.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/arm64/kvm/hyp/pgtable.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index bc8c5c4ac1cf..7f80e953b502 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -191,7 +191,7 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker,
          * Callbacks can also return ENOENT when PTE which is visited is not
          * valid.
          *
-         * In the context of a fault handler interpret these as a signal
+         * In the context of a shared walker interpret these as a signal
          * to retry guest execution.
          *
          * Ignore these return codes altogether for walkers outside a fault
@@ -199,7 +199,7 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker,
          * with the page table walk.
          */
         if (r == -EAGAIN || r == -ENOENT)
-                return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT);
+                return !(walker->flags & KVM_PGTABLE_WALK_SHARED);
 
         return !r;
 }
From patchwork Fri Jun 2 16:09:11 2023
X-Patchwork-Submitter: Vipin Sharma <vipinsh@google.com>
X-Patchwork-Id: 688584
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 2 Jun 2023 09:09:11 -0700
Subject: [PATCH v2 13/16] KVM: arm64: Run clear-dirty-log under MMU read lock
Message-ID: <20230602160914.4011728-14-vipinsh@google.com>
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>

Take the MMU read lock for clearing dirty logs and use the shared page
table walker.

Dirty logs are currently cleared under the MMU write lock. This means
vCPU page faults, which take the MMU read lock, will be blocked while
dirty logs are being cleared. This causes guest degradation, which is
especially noticeable on VMs with lots of vCPUs. Taking the MMU read
lock allows vCPUs to execute in parallel and reduces the impact on vCPU
performance.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/arm64/kvm/mmu.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 356dc4131023..7c966f6f1a41 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -74,8 +74,12 @@ static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr,
                 if (ret)
                         break;
 
-                if (resched && next != end)
-                        cond_resched_rwlock_write(&kvm->mmu_lock);
+                if (resched && next != end) {
+                        if (flags & KVM_PGTABLE_WALK_SHARED)
+                                cond_resched_rwlock_read(&kvm->mmu_lock);
+                        else
+                                cond_resched_rwlock_write(&kvm->mmu_lock);
+                }
         } while (addr = next, addr != end);
 
         return ret;
@@ -1131,11 +1135,11 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
         phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
         phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
 
-        write_lock(&kvm->mmu_lock);
-        lockdep_assert_held_write(&kvm->mmu_lock);
-
-        stage2_wp_range(&kvm->arch.mmu, start, end, 0);
+        read_lock(&kvm->mmu_lock);
+        stage2_wp_range(&kvm->arch.mmu, start, end, KVM_PGTABLE_WALK_SHARED);
+        read_unlock(&kvm->mmu_lock);
 
+        write_lock(&kvm->mmu_lock);
         /*
          * Eager-splitting is done when manual-protect is set.  We
          * also check for initially-all-set because we can avoid
From patchwork Fri Jun 2 16:09:13 2023
X-Patchwork-Submitter: Vipin Sharma <vipinsh@google.com>
X-Patchwork-Id: 688583
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 2 Jun 2023 09:09:13 -0700
Subject: [PATCH v2 15/16] KVM: arm64: Provide option to pass page walker flag for huge page splits
Message-ID: <20230602160914.4011728-16-vipinsh@google.com>
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>

Pass enum kvm_pgtable_walk_flags to kvm_mmu_split_huge_pages(). Use 0 as
the flag value to make it a no-op. In a future commit,
kvm_mmu_split_huge_pages() will be used under both the MMU read lock and
the MMU write lock. The flag allows callers to pass their intent to use
a shared or non-shared page walker to split the huge pages.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/arm64/kvm/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 34d2bd03cf5f..6dd964e3682c 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -118,7 +118,8 @@ static bool need_split_memcache_topup_or_resched(struct kvm *kvm)
 }
 
 static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
-                                    phys_addr_t end)
+                                    phys_addr_t end,
+                                    enum kvm_pgtable_walk_flags flags)
 {
         struct kvm_mmu_memory_cache *cache;
         struct kvm_pgtable *pgt;
@@ -153,7 +154,8 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
                         return -EINVAL;
 
                 next = __stage2_range_addr_end(addr, end, chunk_size);
-                ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache, 0);
+                ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache,
+                                               flags);
                 if (ret)
                         break;
         } while (addr = next, addr != end);
@@ -1112,7 +1114,7 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot)
         end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
 
         write_lock(&kvm->mmu_lock);
-        kvm_mmu_split_huge_pages(kvm, start, end);
+        kvm_mmu_split_huge_pages(kvm, start, end, 0);
         write_unlock(&kvm->mmu_lock);
 }
 
@@ -1149,7 +1151,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
          * again.
          */
         if (kvm_dirty_log_manual_protect_and_init_set(kvm))
-                kvm_mmu_split_huge_pages(kvm, start, end);
+                kvm_mmu_split_huge_pages(kvm, start, end, 0);
 
         write_unlock(&kvm->mmu_lock);
 }
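
The new parameter is exercised by the final patch of the series; a
hedged sketch of the intended call pattern (the read-lock caller below
is from that follow-up, not from this commit):

        /* This commit: exclusive walks under the MMU write lock, no flags. */
        write_lock(&kvm->mmu_lock);
        kvm_mmu_split_huge_pages(kvm, start, end, 0);
        write_unlock(&kvm->mmu_lock);

        /* Follow-up commit: shared walk under the MMU read lock. */
        read_lock(&kvm->mmu_lock);
        kvm_mmu_split_huge_pages(kvm, start, end, KVM_PGTABLE_WALK_SHARED);
        read_unlock(&kvm->mmu_lock);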