From patchwork Thu May 23 10:34:54 2019
X-Patchwork-Submitter: Sudeep Holla
X-Patchwork-Id: 164967
From: Sudeep Holla <sudeep.holla@arm.com>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Sudeep Holla, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Christoffer Dall, Marc Zyngier, James Morse, Suzuki K Pouloze,
    Catalin Marinas, Will Deacon, Julien Thierry
Subject: [PATCH v2 07/15] arm64: KVM: split debug save restore across vm/traps activation
Date: Thu, 23 May 2019 11:34:54 +0100
Message-Id: <20190523103502.25925-8-sudeep.holla@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190523103502.25925-1-sudeep.holla@arm.com>
References: <20190523103502.25925-1-sudeep.holla@arm.com>

Enabling the profiling buffer controls at EL1 generates a trap exception
to EL2; it also changes the profiling buffer to use the EL1&0 stage 1
translation regime in the case of VHE. To support SPE in both the guest
and the host, we need to first stop profiling and flush the profiling
buffers before we activate/switch the VM or enable/disable the traps.

In preparation for that, let's split the debug save/restore
functionality into 4 steps:

1. debug_save_host_context - saves the host context
2. debug_restore_guest_context - restores the guest context
3. debug_save_guest_context - saves the guest context
4. debug_restore_host_context - restores the host context

Let's rename the existing __debug_switch_to_{host,guest} helpers to
align with the above, and add placeholders for the new ones introduced
here, as we will need them to support SPE in guests.
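For reference, here is a small, self-contained C sketch of the call
ordering this split is building towards. It is not kernel code: struct
kvm_vcpu is left opaque, the hyp helpers are replaced by stubs that only
print what the real functions (in the diff below) do, and the run-loop
plumbing around them is elided.

/*
 * Minimal sketch of the debug save/restore ordering after this patch.
 * Assumption: plain userspace C with stubbed hyp helpers; only the
 * four function names come from the patch itself.
 */
#include <stdio.h>

struct kvm_vcpu;			/* opaque stand-in for the real vcpu */

static void __debug_save_host_context(struct kvm_vcpu *vcpu)
{
	(void)vcpu;
	/* nVHE: disable and flush SPE data generation */
	puts("1. save host debug context");
}

static void __debug_restore_guest_context(struct kvm_vcpu *vcpu)
{
	(void)vcpu;
	/* only does real work when KVM_ARM64_DEBUG_DIRTY is set */
	puts("2. restore guest debug context");
}

static void __debug_save_guest_context(struct kvm_vcpu *vcpu)
{
	(void)vcpu;
	/* placeholder for now, to be used for SPE in guests */
	puts("3. save guest debug context");
}

static void __debug_restore_host_context(struct kvm_vcpu *vcpu)
{
	(void)vcpu;
	puts("4. restore host debug context");
}

int main(void)
{
	struct kvm_vcpu *vcpu = NULL;	/* never dereferenced by the stubs */

	__debug_save_host_context(vcpu);	/* before the VM/traps are activated */
	__debug_restore_guest_context(vcpu);	/* after the guest sysregs are restored */
	/* ... guest runs ... */
	__debug_save_guest_context(vcpu);	/* right after the guest sysregs are saved */
	__debug_restore_host_context(vcpu);	/* last, once the host sysregs are back */
	return 0;
}

The point of the ordering is that the host debug/SPE state is saved
before the VM or the traps are touched and restored only after the host
sysregs are back, so that a later patch can stop and flush profiling
before the translation regime changes.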
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
---
 arch/arm64/include/asm/kvm_hyp.h |  6 ++++--
 arch/arm64/kvm/hyp/debug-sr.c    | 25 ++++++++++++++++---------
 arch/arm64/kvm/hyp/switch.c      | 12 ++++++++----
 3 files changed, 28 insertions(+), 15 deletions(-)

--
2.17.1

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 782955db61dd..1c5ed80fcbda 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -164,8 +164,10 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt);
 void __sysreg32_save_state(struct kvm_vcpu *vcpu);
 void __sysreg32_restore_state(struct kvm_vcpu *vcpu);
 
-void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
-void __debug_switch_to_host(struct kvm_vcpu *vcpu);
+void __debug_save_host_context(struct kvm_vcpu *vcpu);
+void __debug_restore_guest_context(struct kvm_vcpu *vcpu);
+void __debug_save_guest_context(struct kvm_vcpu *vcpu);
+void __debug_restore_host_context(struct kvm_vcpu *vcpu);
 
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index fa51236ebcb3..618884df1dc4 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -149,20 +149,13 @@ static void __hyp_text __debug_restore_state(struct kvm_vcpu *vcpu,
 	write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1);
 }
 
-void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+void __hyp_text __debug_restore_guest_context(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 	struct kvm_guest_debug_arch *host_dbg;
 	struct kvm_guest_debug_arch *guest_dbg;
 
-	/*
-	 * Non-VHE: Disable and flush SPE data generation
-	 * VHE: The vcpu can run, but it can't hide.
-	 */
-	if (!has_vhe())
-		__debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1);
-
 	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
 		return;
 
@@ -175,7 +168,7 @@ void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 	__debug_restore_state(vcpu, guest_dbg, guest_ctxt);
 }
 
-void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
+void __hyp_text __debug_restore_host_context(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
@@ -199,6 +192,20 @@ void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
 }
 
+void __hyp_text __debug_save_host_context(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Non-VHE: Disable and flush SPE data generation
+	 * VHE: The vcpu can run, but it can't hide.
+	 */
+	if (!has_vhe())
+		__debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1);
+}
+
+void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
+{
+}
+
 u32 __hyp_text __kvm_get_mdcr_el2(void)
 {
 	return read_sysreg(mdcr_el2);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 9b2461138ddc..844f0dd7a7f0 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -515,6 +515,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	guest_ctxt = &vcpu->arch.ctxt;
 
 	sysreg_save_host_state_vhe(host_ctxt);
+	__debug_save_host_context(vcpu);
 
 	/*
 	 * ARM erratum 1165522 requires us to configure both stage 1 and
@@ -531,7 +532,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	__activate_traps(vcpu);
 
 	sysreg_restore_guest_state_vhe(guest_ctxt);
-	__debug_switch_to_guest(vcpu);
+	__debug_restore_guest_context(vcpu);
 
 	__set_guest_arch_workaround_state(vcpu);
 
@@ -545,6 +546,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	__set_host_arch_workaround_state(vcpu);
 
 	sysreg_save_guest_state_vhe(guest_ctxt);
+	__debug_save_guest_context(vcpu);
 
 	__deactivate_traps(vcpu);
 
@@ -553,7 +555,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
 		__fpsimd_save_fpexc32(vcpu);
 
-	__debug_switch_to_host(vcpu);
+	__debug_restore_host_context(vcpu);
 
 	return exit_code;
 }
@@ -587,6 +589,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
 
 	__sysreg_save_state_nvhe(host_ctxt);
+	__debug_save_host_context(vcpu);
 
 	__activate_vm(kern_hyp_va(vcpu->kvm));
 	__activate_traps(vcpu);
@@ -600,7 +603,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	 */
 	__sysreg32_restore_state(vcpu);
 	__sysreg_restore_state_nvhe(guest_ctxt);
-	__debug_switch_to_guest(vcpu);
+	__debug_restore_guest_context(vcpu);
 
 	__set_guest_arch_workaround_state(vcpu);
 
@@ -614,6 +617,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	__set_host_arch_workaround_state(vcpu);
 
 	__sysreg_save_state_nvhe(guest_ctxt);
+	__debug_save_guest_context(vcpu);
 	__sysreg32_save_state(vcpu);
 	__timer_disable_traps(vcpu);
 	__hyp_vgic_save_state(vcpu);
@@ -630,7 +634,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	 * This must come after restoring the host sysregs, since a non-VHE
 	 * system may enable SPE here and make use of the TTBRs.
 	 */
-	__debug_switch_to_host(vcpu);
+	__debug_restore_host_context(vcpu);
 
 	if (pmu_switch_needed)
 		__pmu_switch_to_host(host_ctxt);