From patchwork Fri Aug 20 22:49:59 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 500705
Date: Fri, 20 Aug 2021 15:49:59 -0700
Message-Id: <20210820225002.310652-3-seanjc@google.com>
In-Reply-To: <20210820225002.310652-1-seanjc@google.com>
References: <20210820225002.310652-1-seanjc@google.com>
Subject: [PATCH v2 2/5] entry: rseq: Call rseq_handle_notify_resume() in tracehook_notify_resume()
From: Sean Christopherson
To: Russell King, Catalin Marinas, Will Deacon, Guo Ren, Thomas Bogendoerfer,
 Michael Ellerman, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
 Steven Rostedt, Ingo Molnar, Oleg Nesterov, Thomas Gleixner, Peter Zijlstra,
 Andy Lutomirski, Mathieu Desnoyers, "Paul E. McKenney", Boqun Feng,
 Paolo Bonzini, Shuah Khan
Cc: Benjamin Herrenschmidt, Paul Mackerras, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-csky@vger.kernel.org,
 linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Peter Foley, Shakeel Butt,
 Sean Christopherson, Ben Gardon

Invoke rseq_handle_notify_resume() from tracehook_notify_resume() now
that the two functions are always called back-to-back by architectures
that have rseq.  The rseq helper is stubbed out for architectures that
don't support rseq, i.e. this is a nop across the board.

Note, tracehook_notify_resume() is horribly named and arguably does not
belong in tracehook.h, as literally every line of code in it has nothing
to do with tracing.  But that's been true since commit a42c6ded827d
("move key_replace_session_keyring() into tracehook_notify_resume()")
first usurped tracehook_notify_resume() back in 2012.  Punt cleaning up
that mess to future patches.

No functional change intended.

Acked-by: Mathieu Desnoyers
Signed-off-by: Sean Christopherson
---
 arch/arm/kernel/signal.c     | 1 -
 arch/arm64/kernel/signal.c   | 1 -
 arch/csky/kernel/signal.c    | 4 +---
 arch/mips/kernel/signal.c    | 4 +---
 arch/powerpc/kernel/signal.c | 4 +---
 arch/s390/kernel/signal.c    | 1 -
 include/linux/tracehook.h    | 2 ++
 kernel/entry/common.c        | 4 +---
 kernel/entry/kvm.c           | 4 +---
 9 files changed, 7 insertions(+), 18 deletions(-)

diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
index a3a38d0a4c85..9df68d139965 100644
--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -670,7 +670,6 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall)
 				uprobe_notify_resume(regs);
 			} else {
 				tracehook_notify_resume(regs);
-				rseq_handle_notify_resume(NULL, regs);
 			}
 		}
 		local_irq_disable();
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 23036334f4dc..22b55db13da6 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -951,7 +951,6 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 
 		if (thread_flags & _TIF_NOTIFY_RESUME) {
 			tracehook_notify_resume(regs);
-			rseq_handle_notify_resume(NULL, regs);
 
 			/*
 			 * If we reschedule after checking the affinity
diff --git a/arch/csky/kernel/signal.c b/arch/csky/kernel/signal.c
index 312f046d452d..bc4238b9f709 100644
--- a/arch/csky/kernel/signal.c
+++ b/arch/csky/kernel/signal.c
@@ -260,8 +260,6 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
 		do_signal(regs);
 
-	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+	if (thread_info_flags & _TIF_NOTIFY_RESUME)
 		tracehook_notify_resume(regs);
-		rseq_handle_notify_resume(NULL, regs);
-	}
 }
diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c
index f1e985109da0..c9b2a75563e1 100644
--- a/arch/mips/kernel/signal.c
+++ b/arch/mips/kernel/signal.c
@@ -906,10 +906,8 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, void *unused,
 	if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
 		do_signal(regs);
 
-	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+	if (thread_info_flags & _TIF_NOTIFY_RESUME)
 		tracehook_notify_resume(regs);
-		rseq_handle_notify_resume(NULL, regs);
-	}
 
 	user_enter();
 }
diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
index e600764a926c..b93b87df499d 100644
--- a/arch/powerpc/kernel/signal.c
+++ b/arch/powerpc/kernel/signal.c
@@ -293,10 +293,8 @@ void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
 		do_signal(current);
 	}
 
-	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+	if (thread_info_flags & _TIF_NOTIFY_RESUME)
 		tracehook_notify_resume(regs);
-		rseq_handle_notify_resume(NULL, regs);
-	}
 }
 
 static unsigned long get_tm_stackpointer(struct task_struct *tsk)
diff --git a/arch/s390/kernel/signal.c b/arch/s390/kernel/signal.c
index 78ef53b29958..b307db26bf2d 100644
--- a/arch/s390/kernel/signal.c
+++ b/arch/s390/kernel/signal.c
@@ -537,5 +537,4 @@ void arch_do_signal_or_restart(struct pt_regs *regs, bool has_signal)
 void do_notify_resume(struct pt_regs *regs)
 {
 	tracehook_notify_resume(regs);
-	rseq_handle_notify_resume(NULL, regs);
 }
diff --git a/include/linux/tracehook.h b/include/linux/tracehook.h
index 3e80c4bc66f7..2564b7434b4d 100644
--- a/include/linux/tracehook.h
+++ b/include/linux/tracehook.h
@@ -197,6 +197,8 @@ static inline void tracehook_notify_resume(struct pt_regs *regs)
 
 	mem_cgroup_handle_over_high();
 	blkcg_maybe_throttle_current();
+
+	rseq_handle_notify_resume(NULL, regs);
 }
 
 /*
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index bf16395b9e13..d5a61d565ad5 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -171,10 +171,8 @@ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
 		if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
 			handle_signal_work(regs, ti_work);
 
-		if (ti_work & _TIF_NOTIFY_RESUME) {
+		if (ti_work & _TIF_NOTIFY_RESUME)
 			tracehook_notify_resume(regs);
-			rseq_handle_notify_resume(NULL, regs);
-		}
 
 		/* Architecture specific TIF work */
 		arch_exit_to_user_mode_work(regs, ti_work);
diff --git a/kernel/entry/kvm.c b/kernel/entry/kvm.c
index 049fd06b4c3d..49972ee99aff 100644
--- a/kernel/entry/kvm.c
+++ b/kernel/entry/kvm.c
@@ -19,10 +19,8 @@ static int xfer_to_guest_mode_work(struct kvm_vcpu *vcpu, unsigned long ti_work)
 		if (ti_work & _TIF_NEED_RESCHED)
 			schedule();
 
-		if (ti_work & _TIF_NOTIFY_RESUME) {
+		if (ti_work & _TIF_NOTIFY_RESUME)
 			tracehook_notify_resume(NULL);
-			rseq_handle_notify_resume(NULL, NULL);
-		}
 
 		ret = arch_xfer_to_guest_mode_handle_work(vcpu, ti_work);
 		if (ret)
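For context, the "stubbed out" behavior the changelog relies on is the empty
!CONFIG_RSEQ variant of rseq_handle_notify_resume().  A minimal sketch of that
stub (paraphrased from include/linux/sched.h of this era; it is not part of
this patch) looks like:

	#ifndef CONFIG_RSEQ
	/* Empty stub: the call added to tracehook_notify_resume() compiles away. */
	static inline void rseq_handle_notify_resume(struct ksignal *ksig,
						     struct pt_regs *regs)
	{
	}
	#endif

With that stub in place, architectures without rseq support see no behavior
change from the consolidated call site, which is why the patch can claim to be
a nop across the board.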

From patchwork Fri Aug 20 22:50:01 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 500704
Date: Fri, 20 Aug 2021 15:50:01 -0700
Message-Id: <20210820225002.310652-5-seanjc@google.com>
In-Reply-To: <20210820225002.310652-1-seanjc@google.com>
References: <20210820225002.310652-1-seanjc@google.com>
Subject: [PATCH v2 4/5] KVM: selftests: Add a test for KVM_RUN+rseq to detect task migration bugs
From: Sean Christopherson
To: Russell King, Catalin Marinas, Will Deacon, Guo Ren, Thomas Bogendoerfer,
 Michael Ellerman, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
 Steven Rostedt, Ingo Molnar, Oleg Nesterov, Thomas Gleixner, Peter Zijlstra,
 Andy Lutomirski, Mathieu Desnoyers, "Paul E. McKenney", Boqun Feng,
 Paolo Bonzini, Shuah Khan
Cc: Benjamin Herrenschmidt, Paul Mackerras, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-csky@vger.kernel.org,
 linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Peter Foley, Shakeel Butt,
 Sean Christopherson, Ben Gardon

Add a test to verify an rseq's CPU ID is updated correctly if the task is
migrated while the kernel is handling KVM_RUN.  This is a regression test
for a bug introduced by commit 72c3c0fe54a3 ("x86/kvm: Use generic xfer
to guest work function"), where TIF_NOTIFY_RESUME would be cleared by KVM
without updating rseq, leading to a stale CPU ID and other badness.

Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/.gitignore  |   1 +
 tools/testing/selftests/kvm/Makefile    |   3 +
 tools/testing/selftests/kvm/rseq_test.c | 154 ++++++++++++++++++++++++
 3 files changed, 158 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/rseq_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 0709af0144c8..6d031ff6b68e 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -47,6 +47,7 @@
 /kvm_page_table_test
 /memslot_modification_stress_test
 /memslot_perf_test
+/rseq_test
 /set_memory_region_test
 /steal_time
 /kvm_binary_stats_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 5832f510a16c..0756e79cb513 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -80,6 +80,7 @@ TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
 TEST_GEN_PROGS_x86_64 += kvm_page_table_test
 TEST_GEN_PROGS_x86_64 += memslot_modification_stress_test
 TEST_GEN_PROGS_x86_64 += memslot_perf_test
+TEST_GEN_PROGS_x86_64 += rseq_test
 TEST_GEN_PROGS_x86_64 += set_memory_region_test
 TEST_GEN_PROGS_x86_64 += steal_time
 TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
@@ -92,6 +93,7 @@ TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += dirty_log_perf_test
 TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
 TEST_GEN_PROGS_aarch64 += kvm_page_table_test
+TEST_GEN_PROGS_aarch64 += rseq_test
 TEST_GEN_PROGS_aarch64 += set_memory_region_test
 TEST_GEN_PROGS_aarch64 += steal_time
 TEST_GEN_PROGS_aarch64 += kvm_binary_stats_test
@@ -103,6 +105,7 @@ TEST_GEN_PROGS_s390x += demand_paging_test
 TEST_GEN_PROGS_s390x += dirty_log_test
 TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
 TEST_GEN_PROGS_s390x += kvm_page_table_test
+TEST_GEN_PROGS_s390x += rseq_test
 TEST_GEN_PROGS_s390x += set_memory_region_test
 TEST_GEN_PROGS_s390x += kvm_binary_stats_test
diff --git a/tools/testing/selftests/kvm/rseq_test.c b/tools/testing/selftests/kvm/rseq_test.c
new file mode 100644
index 000000000000..d28d7ba1a64a
--- /dev/null
+++ b/tools/testing/selftests/kvm/rseq_test.c
@@ -0,0 +1,154 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "test_util.h"
+
+#define VCPU_ID 0
+
+static __thread volatile struct rseq __rseq = {
+	.cpu_id = RSEQ_CPU_ID_UNINITIALIZED,
+};
+
+#define RSEQ_SIG 0xdeadbeef
+
+static pthread_t migration_thread;
+static cpu_set_t possible_mask;
+static bool done;
+
+static atomic_t seq_cnt;
+
+static void guest_code(void)
+{
+	for (;;)
+		GUEST_SYNC(0);
+}
+
+static void sys_rseq(int flags)
+{
+	int r;
+
+	r = syscall(__NR_rseq, &__rseq, sizeof(__rseq), flags, RSEQ_SIG);
+	TEST_ASSERT(!r, "rseq failed, errno = %d (%s)", errno, strerror(errno));
+}
+
+static void *migration_worker(void *ign)
+{
+	cpu_set_t allowed_mask;
+	int r, i, nr_cpus, cpu;
+
+	CPU_ZERO(&allowed_mask);
+
+	nr_cpus = CPU_COUNT(&possible_mask);
+
+	for (i = 0; i < 20000; i++) {
+		cpu = i % nr_cpus;
+		if (!CPU_ISSET(cpu, &possible_mask))
+			continue;
+
+		CPU_SET(cpu, &allowed_mask);
+
+		/*
+		 * Bump the sequence count twice to allow the reader to detect
+		 * that a migration may have occurred in between rseq and sched
+		 * CPU ID reads.  An odd sequence count indicates a migration
+		 * is in-progress, while a completely different count indicates
+		 * a migration occurred since the count was last read.
+		 */
+		atomic_inc(&seq_cnt);
+		r = sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);
+		TEST_ASSERT(!r, "sched_setaffinity failed, errno = %d (%s)",
+			    errno, strerror(errno));
+		atomic_inc(&seq_cnt);
+
+		CPU_CLR(cpu, &allowed_mask);
+
+		/*
+		 * Let the read-side get back into KVM_RUN to improve the odds
+		 * of task migration coinciding with KVM's run loop.
+		 */
+		usleep(1);
+	}
+	done = true;
+	return NULL;
+}
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vm *vm;
+	u32 cpu, rseq_cpu;
+	int r, snapshot;
+
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	r = sched_getaffinity(0, sizeof(possible_mask), &possible_mask);
+	TEST_ASSERT(!r, "sched_getaffinity failed, errno = %d (%s)", errno,
+		    strerror(errno));
+
+	if (CPU_COUNT(&possible_mask) < 2) {
+		print_skip("Only one CPU, task migration not possible\n");
+		exit(KSFT_SKIP);
+	}
+
+	sys_rseq(0);
+
+	/*
+	 * Create and run a dummy VM that immediately exits to userspace via
+	 * GUEST_SYNC, while concurrently migrating the process by setting its
+	 * CPU affinity.
+	 */
+	vm = vm_create_default(VCPU_ID, 0, guest_code);
+
+	pthread_create(&migration_thread, NULL, migration_worker, 0);
+
+	while (!done) {
+		vcpu_run(vm, VCPU_ID);
+		TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC,
+			    "Guest failed?");
+
+		/*
+		 * Verify rseq's CPU matches sched's CPU.  Ensure migration
+		 * doesn't occur between sched_getcpu() and reading the rseq
+		 * cpu_id by rereading both if the sequence count changes, or
+		 * if the count is odd (migration in-progress).
+		 */
+		do {
+			/*
+			 * Drop bit 0 to force a mismatch if the count is odd,
+			 * i.e. if a migration is in-progress.
+			 */
+			snapshot = atomic_read(&seq_cnt) & ~1;
+			smp_rmb();
+			cpu = sched_getcpu();
+			rseq_cpu = READ_ONCE(__rseq.cpu_id);
+			smp_rmb();
+		} while (snapshot != atomic_read(&seq_cnt));
+
+		TEST_ASSERT(rseq_cpu == cpu,
+			    "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu);
+	}
+
+	pthread_join(migration_thread, NULL);
+
+	kvm_vm_free(vm);
+
+	sys_rseq(RSEQ_FLAG_UNREGISTER);
+
+	return 0;
+}
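The non-obvious part of the test is the even/odd handshake on seq_cnt between
migration_worker() and the KVM_RUN loop.  The following is a stripped-down,
self-contained illustration of the same seqcount-style retry pattern in plain
C11; the names (writer, state_a, state_b) are hypothetical and only for
illustration, whereas the kernel selftest uses atomic_t, smp_rmb() and
sched_setaffinity() from the tools headers instead:

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_int seq;
	static atomic_int state_a, state_b;	/* stand-ins for the two CPU ID reads */

	static void *writer(void *arg)
	{
		for (int i = 0; i < 100000; i++) {
			atomic_fetch_add(&seq, 1);	/* odd: update in progress */
			atomic_store_explicit(&state_a, i, memory_order_relaxed);
			atomic_store_explicit(&state_b, i, memory_order_relaxed);
			atomic_fetch_add(&seq, 1);	/* even: update complete */
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t;
		int snap, a, b;

		pthread_create(&t, NULL, writer, NULL);

		for (int i = 0; i < 100000; i++) {
			do {
				/* Drop bit 0 so an odd (in-progress) count never matches. */
				snap = atomic_load_explicit(&seq, memory_order_acquire) & ~1;
				a = atomic_load_explicit(&state_a, memory_order_relaxed);
				b = atomic_load_explicit(&state_b, memory_order_relaxed);
				atomic_thread_fence(memory_order_acquire);
			} while (snap != atomic_load_explicit(&seq, memory_order_relaxed));

			/* A mismatch here would mean the retry protocol failed. */
			if (a != b)
				printf("torn snapshot: %d vs %d\n", a, b);
		}

		pthread_join(t, NULL);
		return 0;
	}

The sketch builds with "cc -pthread".  Assuming the usual kselftest flow, the
new selftest itself should build and run like the other KVM selftests, e.g.
via "make -C tools/testing/selftests/kvm" and then executing the generated
rseq_test binary (exact invocation depends on the tree).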