From patchwork Mon Oct 12 13:17:33 2015
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 54755
From: James Morse
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/6] arm64: kvm: add a cpu tear-down function
Date: Mon, 12 Oct 2015 14:17:33 +0100
Message-Id: <1444655858-26083-2-git-send-email-james.morse@arm.com>
In-Reply-To: <1444655858-26083-1-git-send-email-james.morse@arm.com>
References: <1444655858-26083-1-git-send-email-james.morse@arm.com>
Cc: Mark Rutland, "Rafael J. Wysocki", Lorenzo Pieralisi, Geoff Levand,
 Catalin Marinas, Will Deacon, AKASHI Takahiro, James Morse, Pavel Machek,
 Sudeep Holla, Marc Zyngier, Kevin Kang

From: AKASHI Takahiro

The CPU must be put back into its initial state, at least, in the
following cases in order to shut down the system and/or re-initialize
CPUs later on:

1) kexec/kdump
2) cpu hotplug (offline)
3) removing kvm as a module
4) resume from hibernate (pgd+stack moved)

To address those cases in later patches, this patch adds a tear-down
function, kvm_reset_cpu(), that disables the MMU and restores the vector
table to the initial stub at EL2.

Signed-off-by: AKASHI Takahiro
[use kvm_call_hyp(), simplified mmu-off code]
Signed-off-by: James Morse
---
This is based on v4 from
http://lists.infradead.org/pipermail/kexec/2015-May/013709.html.
This patch is superseded by a v5 [0], but its changes to the cpu hotplug
hook are causing a problem.
[0] https://lists.linaro.org/pipermail/linaro-kernel/2015-May/021575.html

 arch/arm/include/asm/kvm_asm.h    |  1 +
 arch/arm/include/asm/kvm_host.h   |  7 +++++++
 arch/arm/include/asm/kvm_mmu.h    |  7 +++++++
 arch/arm/kvm/arm.c                | 18 ++++++++++++++++++
 arch/arm/kvm/init.S               |  5 +++++
 arch/arm/kvm/mmu.c                |  7 +++++--
 arch/arm64/include/asm/kvm_asm.h  |  1 +
 arch/arm64/include/asm/kvm_host.h |  8 ++++++++
 arch/arm64/include/asm/kvm_mmu.h  |  7 +++++++
 arch/arm64/kvm/hyp-init.S         | 37 +++++++++++++++++++++++++++++++++++++
 10 files changed, 96 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 194c91b610ff..6ecd59127f3f 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -85,6 +85,7 @@ struct kvm_vcpu;
 
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
+extern char __kvm_hyp_reset[];
 
 extern char __kvm_hyp_exit[];
 extern char __kvm_hyp_exit_end[];
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index c4072d9f32c7..f27d45f9e346 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -44,6 +44,7 @@ u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
 
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
+void kvm_reset_cpu(void);
 void kvm_reset_coprocs(struct kvm_vcpu *vcpu);
 
 struct kvm_arch {
@@ -211,6 +212,12 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 	kvm_call_hyp((void*)hyp_stack_ptr, vector_ptr, pgd_ptr);
 }
 
+static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
+					phys_addr_t phys_idmap_start,
+					unsigned long reset_func)
+{
+}
+
 static inline int kvm_arch_dev_ioctl_check_extension(long ext)
 {
 	return 0;
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 405aa1883307..64201f4f2de8 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -66,6 +66,8 @@
 void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_mmu_get_boot_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
+phys_addr_t kvm_get_idmap_start(void);
+extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
 int kvm_mmu_init(void);
 void kvm_clear_hyp_idmap(void);
 
@@ -269,6 +271,11 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
 void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
 
+#define kvm_virt_to_trampoline(x)					\
+	(TRAMPOLINE_VA							\
+	 + ((unsigned long)(x)						\
+	 - ((unsigned long)__hyp_idmap_text_start & PAGE_MASK)))
+
 static inline bool __kvm_cpu_uses_extended_idmap(void)
 {
 	return false;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dc017adfddc8..f145c4453893 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -937,6 +937,24 @@ static void cpu_init_hyp_mode(void *dummy)
 	kvm_arm_init_debug();
 }
 
+void kvm_reset_cpu(void)
+{
+	phys_addr_t boot_pgd_ptr = kvm_mmu_get_boot_httbr();
+	phys_addr_t phys_idmap_start = kvm_get_idmap_start();
+
+	/* Is KVM initialised? */
+	if (boot_pgd_ptr == virt_to_phys(NULL) ||
+	    phys_idmap_start == virt_to_phys(NULL))
+		return;
+
+	/* Do we need to return the vectors to hyp_default_vectors? */
+	if (__hyp_get_vectors() == hyp_default_vectors)
+		return;
+
+	__cpu_reset_hyp_mode(boot_pgd_ptr, phys_idmap_start,
+			     kvm_virt_to_trampoline(__kvm_hyp_reset));
+}
+
 static int hyp_init_cpu_notify(struct notifier_block *self,
 			       unsigned long action, void *cpu)
 {
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index 3988e72d16ff..23bdeac287da 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -151,6 +151,11 @@ target:	@ We're now in the trampoline code, switch page tables
 
 	eret
 
+	.globl __kvm_hyp_reset
+__kvm_hyp_reset:
+	/* not yet implemented */
+	ret	lr
+
 	.ltorg
 
 	.globl __kvm_hyp_init_end
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 6984342da13d..88e7d29d8da8 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -31,8 +31,6 @@
 
 #include "trace.h"
 
-extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
-
 static pgd_t *boot_hyp_pgd;
 static pgd_t *hyp_pgd;
 static pgd_t *merged_hyp_pgd;
@@ -1644,6 +1642,11 @@ phys_addr_t kvm_get_idmap_vector(void)
 	return hyp_idmap_vector;
 }
 
+phys_addr_t kvm_get_idmap_start(void)
+{
+	return hyp_idmap_start;
+}
+
 int kvm_mmu_init(void)
 {
 	int err;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 5e377101f919..fae48c9584c3 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -108,6 +108,7 @@ struct kvm_vcpu;
 
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
+extern char __kvm_hyp_reset[];
 
 extern char __kvm_hyp_vector[];
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ed039688c221..91157de8a30a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -44,6 +44,7 @@
 
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
+void kvm_reset_cpu(void);
 int kvm_arch_dev_ioctl_check_extension(long ext);
 
 struct kvm_arch {
@@ -244,6 +245,13 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 		       hyp_stack_ptr, vector_ptr);
 }
 
+static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
+					phys_addr_t phys_idmap_start,
+					unsigned long reset_func)
+{
+	kvm_call_hyp((void *)reset_func, boot_pgd_ptr, phys_idmap_start);
+}
+
 static inline void kvm_arch_hardware_disable(void) {}
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 61505676d085..31c52e3bc518 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -98,6 +98,8 @@
 void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_mmu_get_boot_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
+phys_addr_t kvm_get_idmap_start(void);
+extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
 int kvm_mmu_init(void);
 void kvm_clear_hyp_idmap(void);
 
@@ -271,6 +273,11 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
 void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
 
+#define kvm_virt_to_trampoline(x)					\
+	(TRAMPOLINE_VA							\
+	 + ((unsigned long)(x)						\
+	 - ((unsigned long)__hyp_idmap_text_start & PAGE_MASK)))
+
 static inline bool __kvm_cpu_uses_extended_idmap(void)
 {
 	return __cpu_uses_extended_idmap();
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 178ba2248a98..009a9ffdfca3 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -140,6 +140,43 @@ merged:
 	eret
 ENDPROC(__kvm_hyp_init)
 
+	/*
+	 * x0: HYP boot pgd
+	 * x1: HYP phys_idmap_start
+	 */
+ENTRY(__kvm_hyp_reset)
+	/*
+	 * Restore el1's lr so we can eret from here. The stack is inaccessible
+	 * after we turn the mmu off. This value was pushed in el1_sync().
+	 */
+	pop	lr, xzr
+
+	/* We're in trampoline code in VA, switch back to boot page tables */
+	msr	ttbr0_el2, x0
+	isb
+
+	/* Invalidate the old TLBs */
+	tlbi	alle2
+	dsb	sy
+
+	/* Branch into PA space */
+	adr	x0, 1f
+	bfi	x1, x0, #0, #PAGE_SHIFT
+	br	x1
+
+	/* We're now in idmap, disable MMU */
+1:	mrs	x0, sctlr_el2
+	bic	x0, x0, #SCTLR_EL2_M
+	msr	sctlr_el2, x0
+	isb
+
+	/* Install stub vectors */
+	adr_l	x2, __hyp_stub_vectors
+	msr	vbar_el2, x2
+
+	eret
+ENDPROC(__kvm_hyp_reset)
+
 	.ltorg
 
 .popsection