From patchwork Thu Aug 20 16:28:44 2015
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 52589
From: Marc Zyngier
To: Paolo Bonzini, Gleb Natapov
Cc: Vladimir Murzin, kvm@vger.kernel.org, "Suzuki K. Poulose",
 linux-arm-kernel@lists.infradead.org, Alex Bennée,
 kvmarm@lists.cs.columbia.edu, Christoffer Dall, Mario Smarduch
Subject: [PATCH 06/25] KVM: arm64: guest debug, add support for single-step
Date: Thu, 20 Aug 2015 17:28:44 +0100
Message-Id: <1440088143-4722-7-git-send-email-marc.zyngier@arm.com>
In-Reply-To: <1440088143-4722-1-git-send-email-marc.zyngier@arm.com>
References: <1440088143-4722-1-git-send-email-marc.zyngier@arm.com>
X-Mailer: git-send-email 2.1.4

From: Alex Bennée

This adds support for single-stepping the guest. To do this we need to
manipulate the guest's PSTATE.SS and MDSCR_EL1.SS bits to trigger
stepping. We take care to preserve MDSCR_EL1 and to trap accesses to it
so that the apparent state of the guest is unaffected.
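The trapped MDSCR_EL1 accesses are handled by the existing sys-register
emulation rather than by this patch. A minimal sketch of that handler,
assuming the usual sys_regs.c trap_debug_regs() shape (the names below are
illustrative and not part of this diff):

static bool trap_debug_regs(struct kvm_vcpu *vcpu,
                            const struct sys_reg_params *p,
                            const struct sys_reg_desc *r)
{
        /*
         * Sketch only: redirect the guest's MDSCR_EL1 reads and writes
         * to the shadow copy in the vcpu context. By the time this
         * runs, kvm_arm_clear_debug() has already put the preserved
         * value back, so the guest never observes the SS bit the host
         * sets further down in this patch.
         */
        if (p->is_write)
                vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
        else
                *vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, r->reg);

        return true;
}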
As we have to enable trapping of all software debug exceptions, we
suppress the guest's ability to single-step itself. If we didn't, we
would have to deal with a step exception arriving while the guest is in
kernel space when it is actually trying to single-step userspace, which
is something we don't want to have to unwind in the kernel. Once the
host is no longer debugging the guest, its ability to single-step
userspace is restored.

Signed-off-by: Alex Bennée
Reviewed-by: Christoffer Dall
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_host.h | 11 +++++++
 arch/arm64/kvm/debug.c            | 68 ++++++++++++++++++++++++++++++++++++---
 arch/arm64/kvm/guest.c            |  4 ++-
 arch/arm64/kvm/handle_exit.c      |  2 ++
 4 files changed, 80 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c90c6a4..cfb6754 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -123,6 +123,17 @@ struct kvm_vcpu_arch {
 	 * here.
 	 */
 
+	/*
+	 * Guest registers we preserve during guest debugging.
+	 *
+	 * These shadow registers are updated by the kvm_handle_sys_reg
+	 * trap handler if the guest accesses or updates them while we
+	 * are using guest debug.
+	 */
+	struct {
+		u32 mdscr_el1;
+	} guest_debug_preserved;
+
 	/* Don't run the guest */
 	bool pause;
 
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 8d1bfa4..d439eb8 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -19,11 +19,39 @@
 
 #include
 
+#include
+#include
 #include
+#include
+
+/* These are the bits of MDSCR_EL1 we may manipulate */
+#define MDSCR_EL1_DEBUG_MASK	(DBG_MDSCR_SS | \
+				DBG_MDSCR_KDE | \
+				DBG_MDSCR_MDE)
 
 static DEFINE_PER_CPU(u32, mdcr_el2);
 
 /**
+ * save/restore_guest_debug_regs
+ *
+ * For some debug operations we need to tweak some guest registers. As
+ * a result we need to save the state of those registers before we
+ * make those modifications.
+ *
+ * Guest access to MDSCR_EL1 is trapped by the hypervisor and handled
+ * after we have restored the preserved value to the main context.
+ */
+static void save_guest_debug_regs(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.guest_debug_preserved.mdscr_el1 = vcpu_sys_reg(vcpu, MDSCR_EL1);
+}
+
+static void restore_guest_debug_regs(struct kvm_vcpu *vcpu)
+{
+	vcpu_sys_reg(vcpu, MDSCR_EL1) = vcpu->arch.guest_debug_preserved.mdscr_el1;
+}
+
+/**
  * kvm_arm_init_debug - grab what we need for debug
  *
  * Currently the sole task of this function is to retrieve the initial
@@ -38,7 +66,6 @@ void kvm_arm_init_debug(void)
 	__this_cpu_write(mdcr_el2, kvm_call_hyp(__kvm_get_mdcr_el2));
 }
 
-
 /**
  * kvm_arm_setup_debug - set up debug related stuff
  *
@@ -73,12 +100,45 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 	if (trap_debug)
 		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
 
-	/* Trap breakpoints? */
-	if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP)
+	/* Is Guest debugging in effect? */
+	if (vcpu->guest_debug) {
+		/* Route all software debug exceptions to EL2 */
 		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
+
+		/* Save guest debug state */
+		save_guest_debug_regs(vcpu);
+
+		/*
+		 * Single Step (ARM ARM D2.12.3 The software step state
+		 * machine)
+		 *
+		 * If we are doing Single Step we need to manipulate
+		 * the guest's MDSCR_EL1.SS and PSTATE.SS. Once the
+		 * step has occurred the hypervisor will trap the
+		 * debug exception and we return to userspace.
+		 *
+		 * If the guest attempts to single step its userspace
+		 * we would have to deal with a trapped exception
+		 * while in the guest kernel. Because this would be
+		 * hard to unwind we suppress the guest's ability to
+		 * do so by masking MDSCR_EL1.SS.
+		 *
+		 * This confuses guest debuggers which use
+		 * single-step behind the scenes but everything
+		 * returns to normal once the host is no longer
+		 * debugging the system.
+		 */
+		if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
+			*vcpu_cpsr(vcpu) |= DBG_SPSR_SS;
+			vcpu_sys_reg(vcpu, MDSCR_EL1) |= DBG_MDSCR_SS;
+		} else {
+			vcpu_sys_reg(vcpu, MDSCR_EL1) &= ~DBG_MDSCR_SS;
+		}
+	}
 }
 
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu)
 {
-	/* Nothing to do yet */
+	if (vcpu->guest_debug)
+		restore_guest_debug_regs(vcpu);
 }
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 22d22c5..48de4f4 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -332,7 +332,9 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
 	return -EINVAL;
 }
 
-#define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP)
+#define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE |    \
+			    KVM_GUESTDBG_USE_SW_BP | \
+			    KVM_GUESTDBG_SINGLESTEP)
 
 /**
  * kvm_arch_vcpu_ioctl_set_guest_debug - set up guest debugging
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 27f38a9..e9de13e 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -103,6 +103,7 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	run->debug.arch.hsr = hsr;
 
 	switch (hsr >> ESR_ELx_EC_SHIFT) {
+	case ESR_ELx_EC_SOFTSTP_LOW:
 	case ESR_ELx_EC_BKPT32:
 	case ESR_ELx_EC_BRK64:
 		break;
@@ -130,6 +131,7 @@ static exit_handle_fn arm_exit_handlers[] = {
 	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
 	[ESR_ELx_EC_IABT_LOW]	= kvm_handle_guest_abort,
 	[ESR_ELx_EC_DABT_LOW]	= kvm_handle_guest_abort,
+	[ESR_ELx_EC_SOFTSTP_LOW]= kvm_handle_guest_debug,
 	[ESR_ELx_EC_BKPT32]	= kvm_handle_guest_debug,
 	[ESR_ELx_EC_BRK64]	= kvm_handle_guest_debug,
 };
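
For completeness, a hypothetical sketch of how a VMM consumes this from
userspace. None of it is part of the patch: single_step_once(), vcpu_fd and
the mmap'ed kvm_run structure are assumed to come from the usual
KVM_CREATE_VM / KVM_CREATE_VCPU / mmap sequence.

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdio.h>

/* Hypothetical helper: advance the guest vcpu by exactly one instruction. */
static int single_step_once(int vcpu_fd, struct kvm_run *run)
{
        struct kvm_guest_debug dbg = {
                .control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP,
        };

        /* KVM sets PSTATE.SS and MDSCR_EL1.SS before the next guest entry */
        if (ioctl(vcpu_fd, KVM_SET_GUEST_DEBUG, &dbg) < 0)
                return -1;

        /* Run the guest; the software step exception is routed back to us */
        if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
                return -1;

        if (run->exit_reason == KVM_EXIT_DEBUG)
                printf("stepped one instruction, hsr=%#x\n",
                       run->debug.arch.hsr);

        return 0;
}

A later KVM_SET_GUEST_DEBUG call with control set to 0 drops the traps
again, and since kvm_arm_clear_debug() has restored the preserved
MDSCR_EL1, the guest can resume single-stepping itself as described in the
commit message.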