From patchwork Tue Nov 25 16:10:03 2014
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 41482
From: Alex Bennée <alex.bennee@linaro.org>
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org, marc.zyngier@arm.com, peter.maydell@linaro.org, agraf@suse.de
Subject: [PATCH 5/7] KVM: arm64: guest debug, add support for single-step
Date: Tue, 25 Nov 2014 16:10:03 +0000
Message-Id: <1416931805-23223-6-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1416931805-23223-1-git-send-email-alex.bennee@linaro.org>
References: <1416931805-23223-1-git-send-email-alex.bennee@linaro.org>
Cc: Lorenzo Pieralisi, Russell King, Gleb Natapov, jan.kiszka@siemens.com, Will Deacon, open list, "open list:ABI/API", dahi@linux.vnet.ibm.com, Catalin Marinas, r65777@freescale.com, pbonzini@redhat.com, bp@suse.de, Alex Bennée
This adds support for single-stepping the guest. As userspace can and
will manipulate guest registers before restarting, any tweaking of the
registers has to occur just before control is passed back to the
guest. Furthermore, while guest debugging is in effect we need to
squash the ability of the guest to single-step itself, as we have no
easy way of re-entering the guest after the exception has been
delivered to the hypervisor.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 48d26bb..a76daae 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -300,6 +301,17 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_arm_set_running_vcpu(NULL);
 }
 
+/**
+ * kvm_arch_vcpu_ioctl_set_guest_debug - Setup guest debugging
+ * @vcpu:	the vcpu pointer
+ * @dbg:	the ioctl data buffer
+ *
+ * This sets up the VM for guest debugging. Care has to be taken when
+ * manipulating guest registers as these will be set/cleared by the
+ * hypervisor controller, typically before each kvm_run event. As a
+ * result modification of the guest registers needs to take place
+ * after they have been restored in the hyp.S trampoline code.
+ */
 int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 					struct kvm_guest_debug *dbg)
 {
@@ -317,8 +329,8 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 
 	/* Single Step */
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
-		kvm_info("SS requested, not yet implemented\n");
-		return -EINVAL;
+		kvm_info("SS requested\n");
+		route_el2 = true;
 	}
 
 	/* Software Break Points */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 8da1043..78e5ae1 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -121,6 +121,7 @@ int main(void)
 	DEFINE(VCPU_FAR_EL2, offsetof(struct kvm_vcpu, arch.fault.far_el2));
 	DEFINE(VCPU_HPFAR_EL2, offsetof(struct kvm_vcpu, arch.fault.hpfar_el2));
 	DEFINE(VCPU_DEBUG_FLAGS, offsetof(struct kvm_vcpu, arch.debug_flags));
+	DEFINE(GUEST_DEBUG, offsetof(struct kvm_vcpu, guest_debug));
 	DEFINE(VCPU_HCR_EL2, offsetof(struct kvm_vcpu, arch.hcr_el2));
 	DEFINE(VCPU_MDCR_EL2, offsetof(struct kvm_vcpu, arch.mdcr_el2));
 	DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 28dc92b..6def054 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -91,6 +91,25 @@ static int kvm_handle_bkpt(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return 0;
 }
 
+/**
+ * kvm_handle_ss - handle single step exceptions
+ *
+ * @vcpu:	the vcpu pointer
+ *
+ * See: ARM ARM D2.12 for the details. While the host is routing debug
+ * exceptions to its handlers we have to suppress the ability of the
+ * guest to trigger exceptions.
+ */
+static int kvm_handle_ss(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	WARN_ON(!(vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP));
+
+	run->exit_reason = KVM_EXIT_DEBUG;
+	run->debug.arch.exit_type = KVM_DEBUG_EXIT_SINGLE_STEP;
+	run->debug.arch.address = *vcpu_pc(vcpu);
+	return 0;
+}
+
 static exit_handle_fn arm_exit_handlers[] = {
 	[ESR_EL2_EC_WFI]	= kvm_handle_wfx,
 	[ESR_EL2_EC_CP15_32]	= kvm_handle_cp15_32,
@@ -105,6 +124,7 @@ static exit_handle_fn arm_exit_handlers[] = {
 	[ESR_EL2_EC_SYS64]	= kvm_handle_sys_reg,
 	[ESR_EL2_EC_IABT]	= kvm_handle_guest_abort,
 	[ESR_EL2_EC_DABT]	= kvm_handle_guest_abort,
+	[ESR_EL2_EC_SOFTSTP]	= kvm_handle_ss,
 	[ESR_EL2_EC_BKPT32]	= kvm_handle_bkpt,
 	[ESR_EL2_EC_BRK64]	= kvm_handle_bkpt,
 };
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 3c733ea..c0bc218 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -16,6 +16,7 @@
  */
 
 #include
+#include
 #include
 #include
 
@@ -168,6 +169,31 @@
 	// x19-x29, lr, sp*, elr*, spsr*
 	restore_common_regs
 
+	// After restoring the guest registers but before we return to the guest
+	// we may want to make some final tweaks to support guest debugging.
+	ldr	x3, [x0, #GUEST_DEBUG]
+	tbz	x3, #KVM_GUESTDBG_ENABLE_SHIFT, 2f	// No guest debug
+
+	// x0 - preserved as VCPU ptr
+	// x1 - spsr
+	// x2 - mdscr
+	mrs	x1, spsr_el2
+	mrs	x2, mdscr_el1
+
+	// See ARM ARM D2.12.3 The software step state machine
+	// If we are doing Single Step - set MDSCR_EL1.SS and PSTATE.SS
+	orr	x1, x1, #DBG_SPSR_SS
+	orr	x2, x2, #DBG_MDSCR_SS
+	tbnz	x3, #KVM_GUESTDBG_SINGLESTEP_SHIFT, 1f
+	// If we are not doing Single Step we want to prevent the guest doing so
+	// as otherwise we will have to deal with the re-routed exceptions as we
+	// are doing other guest debug related things
+	eor	x1, x1, #DBG_SPSR_SS
+	eor	x2, x2, #DBG_MDSCR_SS
+1:
+	msr	spsr_el2, x1
+	msr	mdscr_el1, x2
+2:
 	// Last bits of the 64bit state
 	pop	x2, x3
 	pop	x0, x1
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 523f476..347e5b0 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -7,6 +7,8 @@
  * Note: you must update KVM_API_VERSION if you change this interface.
  */
 
+#ifndef __ASSEMBLY__
+
 #include
 #include
 #include
@@ -515,11 +517,6 @@ struct kvm_s390_irq {
 	} u;
 };
 
-/* for KVM_SET_GUEST_DEBUG */
-
-#define KVM_GUESTDBG_ENABLE		0x00000001
-#define KVM_GUESTDBG_SINGLESTEP	0x00000002
-
 struct kvm_guest_debug {
 	__u32 control;
 	__u32 pad;
@@ -1189,4 +1186,15 @@ struct kvm_assigned_msix_entry {
 	__u16 padding[3];
 };
 
+#endif /* __ASSEMBLY__ */
+
+/* for KVM_SET_GUEST_DEBUG */
+
+#define KVM_GUESTDBG_ENABLE_SHIFT	0
+#define KVM_GUESTDBG_ENABLE		(1 << KVM_GUESTDBG_ENABLE_SHIFT)
+#define KVM_GUESTDBG_SINGLESTEP_SHIFT	1
+#define KVM_GUESTDBG_SINGLESTEP	(1 << KVM_GUESTDBG_SINGLESTEP_SHIFT)
+
+
 #endif /* __LINUX_KVM_H */