From patchwork Wed Feb 15 18:16:03 2012
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 6804
From: Dave Martin <dave.martin@linaro.org>
To: Nicolas Pitre, Christoffer Dall, Marc Zyngier, Will Deacon
Cc: patches@linaro.org, Dave Martin <dave.martin@linaro.org>
Subject: [PATCH 10/10] ARM: virt: Make hypervisor stub installation work
 without RAM
Date: Wed, 15 Feb 2012 18:16:03 +0000
Message-Id: <1329329763-31508-11-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.5.4
In-Reply-To: <1329329763-31508-1-git-send-email-dave.martin@linaro.org>
References: <1329329763-31508-1-git-send-email-dave.martin@linaro.org>

The zImage loader doesn't have any usable RAM until the kernel is
relocated.  To support this, the hypervisor stub is abstracted to allow
the boot CPU mode to be stored into an SPSR when building the zImage
loader.  When building the kernel proper, we store the value into .data
(moved from .bss, because the zeroing out of .bss at boot time would
likely clobber it).

A helper function __get_boot_cpu_mode() is provided to read back the
stored value.  This mostly helps keep the zImage loader code clean: in
the kernel proper, the __boot_cpu_mode variable can straightforwardly
be relied upon instead, although __get_boot_cpu_mode() still works
there too.

The safe_svcmode_maskall macro is modified to transfer the old mode's
SPSR to the new mode.  For the main kernel entry point this results in
a couple of redundant instructions, since the SPSR is not significant
there -- however, it allows us to continue to use the same macro in the
kernel and in the zImage loader.
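As a reviewing aid, the mode-recording protocol described above can be
sketched in host-side C.  This is an illustrative model only: the mode
constants mirror the architectural encodings, but the helper names and
the plain global standing in for __boot_cpu_mode (or the stashed SPSR)
are invented for this sketch.

```c
#include <stdint.h>

#define MODE_MASK              0x1fU
#define SVC_MODE               0x13U
#define HYP_MODE               0x1aU
#define BOOT_CPU_MODE_MISMATCH (1U << 31)  /* maps onto the N flag in an SPSR */

/* Stands in for __boot_cpu_mode (kernel proper) or SPSR_svc (zImage). */
static uint32_t boot_cpu_mode;

/* Primary CPU: record the mode field of the CPSR we were entered in. */
static void record_primary_boot_mode(uint32_t cpsr)
{
    boot_cpu_mode = cpsr & MODE_MASK;
}

/* Secondary CPU: flag a mismatch if it booted in a different mode.
 * Once the flag is set, no later CPU's mode field can compare equal
 * either, which is why the stub gives up on all remaining CPUs after
 * the first mismatch. */
static void record_secondary_boot_mode(uint32_t cpsr)
{
    if ((cpsr & MODE_MASK) != boot_cpu_mode)
        boot_cpu_mode |= BOOT_CPU_MODE_MISMATCH;
}

/* What the zImage loader checks before bouncing into HYP mode. */
static int booted_in_hyp_everywhere(void)
{
    return boot_cpu_mode == HYP_MODE;
}
```

Note how the mismatch flag doubles as the "give up" latch: it lives
outside MODE_MASK, so a single OR both records the event and poisons
all later comparisons.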
Signed-off-by: Dave Martin <dave.martin@linaro.org>
---
 arch/arm/boot/compressed/Makefile |    2 +-
 arch/arm/boot/compressed/head.S   |    6 +-
 arch/arm/include/asm/assembler.h  |   15 +++++--
 arch/arm/include/asm/virt.h       |    7 +++
 arch/arm/kernel/head.S            |    4 +-
 arch/arm/kernel/hyp-stub.S        |   86 ++++++++++++++++++++++++++++--------
 6 files changed, 91 insertions(+), 29 deletions(-)

diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index 732dd44..5974255 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -128,7 +128,7 @@ KBUILD_CFLAGS = $(subst -pg, , $(ORIG_CFLAGS))
 endif
 
 ccflags-y := -fpic -fno-builtin -I$(obj)
-asflags-y := -Wa,-march=all
+asflags-y := -Wa,-march=all -DZIMAGE
 
 # Supply kernel BSS size to the decompressor via a linker symbol.
 KBSS_SZ = $(shell $(CROSS_COMPILE)size $(obj)/../../../../vmlinux | \
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 1f9b498..aabb713 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -137,7 +137,7 @@ start:
 #ifdef CONFIG_ARM_VIRT
 		bl	__hyp_stub_install	@ get into SVC mode, reversibly
 #endif
-		safe_svcmode_maskall r4
+		safe_svcmode_maskall r7, r8
 
 		mov	r7, r1			@ save architecture ID
 		mov	r8, r2			@ save atags pointer
@@ -469,7 +469,7 @@ not_relocated:	mov	r0, #0
 		mov	r2, r8			@ restore atags pointer
 
 #ifdef CONFIG_ARM_VIRT
-		ldr	r0, =__boot_cpu_mode
+		bl	__get_boot_cpu_mode
 		and	r0, r0, #MODE_MASK
 		cmp	r0, #HYP_MODE		@ if not booted in HYP mode...
 		bne	__enter_kernel		@ boot kernel directly
@@ -479,7 +479,7 @@ not_relocated:	mov	r0, #0
 		add	r0, r0, r12
 		bl	__hyp_set_vectors
-		hvc	#0			@ otherwise bounce via HVC call
+		hvc	0			@ otherwise bounce via HVC call
 
 		b	.			@ should never be reached
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index fcdc332..1f2b312 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -233,10 +233,17 @@
 #endif
 
 /*
- * Helper macro to enter SVC mode cleanly and mask interrupts. reg is a
- * scratch register available for the macro to overwrite.
+ * Helper macro to enter SVC mode cleanly and mask interrupts. reg and reg2
+ * are scratch registers for the macro to overwrite.
+ *
+ * This macro is intended for forcing the CPU into SVC mode at boot time.
+ * You cannot return to the original mode.
+ *
+ * The old mode's SPSR is transferred to SPSR_svc, in case it is important
+ * (the zImage loader currently uses this).
  */
-.macro safe_svcmode_maskall reg:req
+.macro safe_svcmode_maskall reg:req, reg2:req
+	mrs	\reg2 , spsr
 	mrs	\reg , cpsr
 	orr	\reg , \reg , #PSR_A_BIT | PSR_I_BIT | PSR_F_BIT
 	bic	\reg , \reg , #MODE_MASK
@@ -244,7 +251,7 @@
 	msr	spsr_cxsf, \reg
 	adr	\reg , BSYM(1f)
 	movs	pc, \reg
-1:
+1:	msr	spsr_cxsf, \reg2
 .endm
 
 /*
diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
index 3659eb5..6aba724 100644
--- a/arch/arm/include/asm/virt.h
+++ b/arch/arm/include/asm/virt.h
@@ -45,6 +45,7 @@ extern int __boot_cpu_mode;
 
 void __hyp_set_vectors(unsigned long phys_vector_base);
+unsigned long __get_boot_cpu_mode(void);
 
 #endif /* __ASSEMBLY__ */
 
@@ -53,6 +54,12 @@ void __hyp_set_vectors(unsigned long phys_vector_base);
  * __boot_cpu_mode:
  */
 #define BOOT_CPU_MODE_PRIMARY(x)	(x & MODE_MASK)
+
+/*
+ * Flag indicating that the kernel was not entered in the same mode on every
+ * CPU.  The zImage loader stashes this value in an SPSR, so we need an
+ * architecturally defined flag bit here (the N flag, as it happens).
+ */
 #define BOOT_CPU_MODE_MISMATCH	(1<<31)
 
 #define BOOT_CPU_MODE_HAVE_HYP(x)	\
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index ab1a941..9255381 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -95,7 +95,7 @@ ENTRY(stext)
 	bl	__hyp_stub_install
 #endif
 	@ ensure svc mode and all interrupts masked
-	safe_svcmode_maskall r4
+	safe_svcmode_maskall r9, r10
 
 	mrc	p15, 0, r9, c0, c0		@ get processor id
 	bl	__lookup_processor_type		@ r5=procinfo r9=cpuid
@@ -353,7 +353,7 @@ ENTRY(secondary_startup)
 #ifdef CONFIG_ARM_VIRT
 	bl	__hyp_stub_install
 #endif
-	safe_svcmode_maskall r4
+	safe_svcmode_maskall r9, r10
 
 	mrc	p15, 0, r9, c0, c0	@ get processor id
 	bl	__lookup_processor_type
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index 8de03fb..6a2393c 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -32,6 +32,48 @@
 #include
 #include
 
+#ifdef ZIMAGE
+
+/*
+ * For the zImage loader, we have no writable storage until the kernel has
+ * been relocated.  However, we never change mode or take exceptions, so we
+ * can save the initial mode in SPSR_svc.
+ */
+.macro save_boot_mode Rvalue:req, Rtemp:req, Rtemp2
+	msr	SPSR_cxsf, \Rvalue
+.endm
+
+.macro retrieve_boot_mode Rd:req, Rtemp:req
+	mrs	\Rd , SPSR
+.endm
+
+#else /* ! ZIMAGE */
+
+/*
+ * For the kernel proper, we need to find out the CPU boot mode long after
+ * boot, so we need to store it in a writable variable.
+ *
+ * This is not in .bss, because we set it sufficiently early that the
+ * boot-time zeroing of .bss would clobber it.
+ */
+.data
+ENTRY(__boot_cpu_mode)
+	.long	0
+.text
+
+.macro save_boot_mode Rvalue:req, Rtemp:req, Rtemp2
+	adr	\Rtemp , .L__boot_cpu_mode_offset
+	ldr	\Rtemp2 , [ \Rtemp ]
+	str	\Rvalue , [ \Rtemp , \Rtemp2 ]
+.endm
+
+.macro retrieve_boot_mode Rd:req, Rtemp:req
+	adr	\Rd , .L__boot_cpu_mode_offset
+	ldr	\Rtemp , [ \Rd ]
+	ldr	\Rd , [ \Rd , \Rtemp ]
+.endm
+
+#endif /* ! ZIMAGE */
+
 /*
  * Hypervisor stub installation functions.
  *
@@ -41,32 +83,35 @@
  */
 @ Call this from the primary CPU
 ENTRY(__hyp_stub_install)
-	adr	r4, 1f
-	ldr	r5, .L__boot_cpu_mode_offset
 	mrs	r6, cpsr
 	and	r6, r6, #MODE_MASK
-	str	r6, [r4, r5]	@ record the CPU mode we were booted in
+	save_boot_mode r6, r4, r5
 
 ENDPROC(__hyp_stub_install)
 @ fall through...
 
 @ Secondary CPUs should call here
 ENTRY(__hyp_stub_install_secondary)
-	adr	r4, 1f
-	ldr	r5, .L__boot_cpu_mode_offset
 	mrs	r6, cpsr
 	and	r6, r6, #MODE_MASK
-	ldr	r7, [r4, r5]
+	retrieve_boot_mode r7, r4
 	cmp	r6, r7
-	beq	2f			@ matches primary CPU boot mode?
+	beq	1f			@ matches primary CPU boot mode?
 	orr	r7, r7, #BOOT_CPU_MODE_MISMATCH
-	str	r7, [r4, r5]
+	save_boot_mode r7, r4, r5
 	bx	lr			@ record what happened and give up
 
-	@ otherwise ...
+	/*
+	 * Once we have given up on one CPU, we do not try to install the
+	 * stub hypervisor on the remaining ones: because the saved boot mode
+	 * is modified, it can't compare equal to the CPSR mode field any
+	 * more.
+	 *
+	 * Otherwise...
+	 */
 
-2:	cmp	r6, #HYP_MODE
+1:	cmp	r6, #HYP_MODE
 	bxne	lr			@ give up if the CPU is not in HYP mode
 
 /*
@@ -85,12 +130,12 @@ ENTRY(__hyp_stub_install_secondary)
 	adr	r7, __hyp_stub_vectors
 	mcr	p15, 4, r7, c12, c0, 0	@ set hypervisor vector base (HVBAR)
 
-	adr	r4, 1f
-	ldr	r5, .L__boot_cpu_mode_offset
-1:	str	r6, [r4, r5]		@ Store the boot mode
 	bic	r7, r6, #MODE_MASK
 	orr	r7, r7, #SVC_MODE
-	msr	spsr_cxsf, r7
+	msr	spsr_cxsf, r7		@ This is SPSR_hyp.
+
+	save_boot_mode r6, r4, r5	@ Store the boot mode
+
 	msr_elr_hyp 14			@ msr elr_hyp, lr
 	eret				@ return, switching to SVC mode
 ENDPROC(__hyp_stub_install_secondary)
@@ -127,9 +172,16 @@ ENTRY(__hyp_set_vectors)
 	bx	lr
 ENDPROC(__hyp_set_vectors)
 
+ENTRY(__get_boot_cpu_mode)
+	retrieve_boot_mode r0, r1
+	bx	lr
+ENDPROC(__get_boot_cpu_mode)
+
+#ifndef ZIMAGE
 .align	2
 .L__boot_cpu_mode_offset:
-	.long	__boot_cpu_mode - 1b
+	.long	__boot_cpu_mode - .
+#endif
 
 .align	5
 __hyp_stub_vectors:
@@ -143,7 +195,3 @@ __hyp_stub_irq:
 	W(b)	.
 __hyp_stub_fiq:
 	W(b)	.
 ENDPROC(__hyp_stub_vectors)
-
-.bss
-
-ENTRY(__boot_cpu_mode)
-	.long	0
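The `.long __boot_cpu_mode - .` change is what makes the kernel-proper
accessor macros position-independent: `adr` yields the run-time address
of the offset word, and adding the link-time difference stored in that
word lands on __boot_cpu_mode wherever the image actually executes.
A minimal arithmetic model of that addressing trick (the addresses
below are made up purely for illustration):

```c
#include <stdint.h>

/* Link-time picture: the offset word lives at LINK_OFFSET_LOC and the
 * variable at LINK_TARGET; the assembler stores their difference in
 * the word, exactly as ".long __boot_cpu_mode - ." does. */
#define LINK_OFFSET_LOC 0x00008100U
#define LINK_TARGET     0x00009000U  /* __boot_cpu_mode's link address */

static const uint32_t offset_word = LINK_TARGET - LINK_OFFSET_LOC;

/* Run time: the whole image may have moved by reloc_delta.  adr
 * computes the *current* address of the offset word relative to the
 * PC, so the sum tracks the relocation automatically. */
static uint32_t boot_cpu_mode_addr(uint32_t reloc_delta)
{
    uint32_t runtime_offset_loc = LINK_OFFSET_LOC + reloc_delta; /* adr */
    return runtime_offset_loc + offset_word;   /* ldr, then base+index */
}
```

The same sum works whether the image runs at its link address
(reloc_delta of zero) or anywhere else, which is why the macros need
no writable relocation fixups.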