From patchwork Mon Aug 15 13:42:18 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 597691
From: Ard Biesheuvel
To: linux-kernel@vger.kernel.org
Cc: 
linux-efi@vger.kernel.org, Ard Biesheuvel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, Michael Roth
Subject: [PATCH 1/6] x86/head_64: clean up mixed mode 32-bit entry code
Date: Mon, 15 Aug 2022 15:42:18 +0200
Message-Id: <20220815134223.740112-2-ardb@kernel.org>
In-Reply-To: <20220815134223.740112-1-ardb@kernel.org>
References: <20220815134223.740112-1-ardb@kernel.org>
X-Mailing-List: linux-efi@vger.kernel.org

The x86_64 32-bit entry code is a jumble of EFI and SEV routines, which
is not good for maintainability.
Let's isolate the EFI mixed mode code and combine it with the boot
service thunk that lives in another .S file, so that we can remove it
from head_64.S.

Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/compressed/Makefile       |   6 +-
 arch/x86/boot/compressed/efi_mixed.S    | 358 ++++++++++++++++++++
 arch/x86/boot/compressed/efi_thunk_64.S | 195 -----------
 arch/x86/boot/compressed/head_32.S      |   4 -
 arch/x86/boot/compressed/head_64.S      | 149 +-------
 drivers/firmware/efi/libstub/x86-stub.c |   3 +-
 6 files changed, 370 insertions(+), 345 deletions(-)

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 35ce1a64068b..d6dbb46696a2 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -107,11 +107,11 @@ endif
 vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 vmlinux-objs-$(CONFIG_INTEL_TDX_GUEST) += $(obj)/tdx.o $(obj)/tdcall.o
 
-vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
 vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
-efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
+vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
+vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
 
-$(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
+$(obj)/vmlinux: $(vmlinux-objs-y) FORCE
 	$(call if_changed,ld)
 
 OBJCOPYFLAGS_vmlinux.bin := -R .comment -S

diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
new file mode 100644
index 000000000000..6fd7ac517c53
--- /dev/null
+++ b/arch/x86/boot/compressed/efi_mixed.S
@@ -0,0 +1,358 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2014, 2015 Intel Corporation; author Matt Fleming
+ *
+ * Early support for invoking 32-bit EFI services from a 64-bit kernel.
+ *
+ * Because this thunking occurs before ExitBootServices() we have to
+ * restore the firmware's 32-bit GDT and IDT before we make EFI service
+ * calls.
+ *
+ * On the plus side, we don't have to worry about mangling 64-bit
+ * addresses into 32-bits because we're executing with an identity
+ * mapped pagetable and haven't transitioned to 64-bit virtual addresses
+ * yet.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+	.text
+	.code64
+/*
+ * This is the first thing that runs after switching to long mode. Depending on
+ * whether we are using the EFI handover protocol or the compat entry point, we
+ * will either branch to the 64-bit EFI handover entrypoint at offset 0x390 in
+ * the image, or to the 64-bit EFI PE/COFF entrypoint efi_pe_entry(). In the
+ * former case, the bootloader must provide a struct bootparams pointer as the
+ * third argument, so we use that to disambiguate.
+ *
+ *                                              +--------------+
+ *  +----------------+   +------------+   +--->| efi_pe_entry |
+ *  | efi32_pe_entry |-->|            |   |    +-----------+--+
+ *  +----------------+   |            |  +--------+-------------+  |
+ *  | startup_32     |-->| startup_64_mixedmode  |                 |
+ *  +----------------+   |            |  +--------+-------------+  V
+ *  | efi32_handover |-->|            |   |    +-----------+------+
+ *  +----------------+   +------------+   +--->| efi64_stub_entry |
+ *                                             +-----------+------+
+ *   +------------+   +----------+                         |
+ *   | startup_64 |<--| efi_main |<------------------------+
+ *   +------------+   +----------+
+ */
+SYM_FUNC_START(startup_64_mixedmode)
+	lea	efi32_boot_args(%rip), %rdx
+	mov	0(%rdx), %edi
+	mov	4(%rdx), %esi
+	mov	8(%rdx), %edx		// saved bootparams pointer
+	test	%edx, %edx
+	jnz	efi64_stub_entry
+	/*
+	 * efi_pe_entry uses MS calling convention, which requires 32 bytes of
+	 * shadow space on the stack even if all arguments are passed in
+	 * registers. We also need an additional 8 bytes for the space that
+	 * would be occupied by the return address, and this also results in
+	 * the correct stack alignment for entry.
+	 */
+	sub	$40, %rsp
+	mov	%rdi, %rcx		// MS calling convention
+	mov	%rsi, %rdx
+	jmp	efi_pe_entry
+SYM_FUNC_END(startup_64_mixedmode)
+
+SYM_FUNC_START(__efi64_thunk)
+	push	%rbp
+	push	%rbx
+
+	movl	%ds, %eax
+	push	%rax
+	movl	%es, %eax
+	push	%rax
+	movl	%ss, %eax
+	push	%rax
+
+	/* Copy args passed on stack */
+	movq	0x30(%rsp), %rbp
+	movq	0x38(%rsp), %rbx
+	movq	0x40(%rsp), %rax
+
+	/*
+	 * Convert x86-64 ABI params to i386 ABI
+	 */
+	subq	$64, %rsp
+	movl	%esi, 0x0(%rsp)
+	movl	%edx, 0x4(%rsp)
+	movl	%ecx, 0x8(%rsp)
+	movl	%r8d, 0xc(%rsp)
+	movl	%r9d, 0x10(%rsp)
+	movl	%ebp, 0x14(%rsp)
+	movl	%ebx, 0x18(%rsp)
+	movl	%eax, 0x1c(%rsp)
+
+	leaq	0x20(%rsp), %rbx
+	sgdt	(%rbx)
+
+	addq	$16, %rbx
+	sidt	(%rbx)
+
+	leaq	1f(%rip), %rbp
+
+	/*
+	 * Switch to IDT and GDT with 32-bit segments. This is the firmware GDT
+	 * and IDT that was installed when the kernel started executing. The
+	 * pointers were saved at the efi32_stub_handover entry point below.
+	 *
+	 * Pass the saved DS selector to the 32-bit code, and use far return to
+	 * restore the saved CS selector.
+	 */
+	leaq	efi32_boot_idt(%rip), %rax
+	lidt	(%rax)
+	leaq	efi32_boot_gdt(%rip), %rax
+	lgdt	(%rax)
+
+	movzwl	efi32_boot_ds(%rip), %edx
+	movzwq	efi32_boot_cs(%rip), %rax
+	pushq	%rax
+	leaq	efi_enter32(%rip), %rax
+	pushq	%rax
+	lretq
+
+1:	addq	$64, %rsp
+	movq	%rdi, %rax
+
+	pop	%rbx
+	movl	%ebx, %ss
+	pop	%rbx
+	movl	%ebx, %es
+	pop	%rbx
+	movl	%ebx, %ds
+	/* Clear out 32-bit selector from FS and GS */
+	xorl	%ebx, %ebx
+	movl	%ebx, %fs
+	movl	%ebx, %gs
+
+	/*
+	 * Convert 32-bit status code into 64-bit.
+	 */
+	roll	$1, %eax
+	rorq	$1, %rax
+
+	pop	%rbx
+	pop	%rbp
+	RET
+SYM_FUNC_END(__efi64_thunk)
+
+	.code32
+/*
+ * EFI service pointer must be in %edi.
+ *
+ * The stack should represent the 32-bit calling convention.
+ */
+SYM_FUNC_START_LOCAL(efi_enter32)
+	/* Load firmware selector into data and stack segment registers */
+	movl	%edx, %ds
+	movl	%edx, %es
+	movl	%edx, %fs
+	movl	%edx, %gs
+	movl	%edx, %ss
+
+	/* Reload pgtables */
+	movl	%cr3, %eax
+	movl	%eax, %cr3
+
+	/* Disable paging */
+	movl	%cr0, %eax
+	btrl	$X86_CR0_PG_BIT, %eax
+	movl	%eax, %cr0
+
+	/* Disable long mode via EFER */
+	movl	$MSR_EFER, %ecx
+	rdmsr
+	btrl	$_EFER_LME, %eax
+	wrmsr
+
+	call	*%edi
+
+	/* We must preserve return value */
+	movl	%eax, %edi
+
+	/*
+	 * Some firmware will return with interrupts enabled. Be sure to
+	 * disable them before we switch GDTs and IDTs.
+	 */
+	cli
+
+	lidtl	(%ebx)
+	subl	$16, %ebx
+
+	lgdtl	(%ebx)
+
+	movl	%cr4, %eax
+	btsl	$(X86_CR4_PAE_BIT), %eax
+	movl	%eax, %cr4
+
+	movl	%cr3, %eax
+	movl	%eax, %cr3
+
+	movl	$MSR_EFER, %ecx
+	rdmsr
+	btsl	$_EFER_LME, %eax
+	wrmsr
+
+	xorl	%eax, %eax
+	lldt	%ax
+
+	pushl	$__KERNEL_CS
+	pushl	%ebp
+
+	/* Enable paging */
+	movl	%cr0, %eax
+	btsl	$X86_CR0_PG_BIT, %eax
+	movl	%eax, %cr0
+	lret
+SYM_FUNC_END(efi_enter32)
+
+#define ST32_boottime		60 // offsetof(efi_system_table_32_t, boottime)
+#define BS32_handle_protocol	88 // offsetof(efi_boot_services_32_t, handle_protocol)
+#define LI32_image_base		32 // offsetof(efi_loaded_image_32_t, image_base)
+
+/*
+ * efi_status_t efi32_pe_entry(efi_handle_t image_handle,
+ *			       efi_system_table_32_t *sys_table)
+ *
+ * This is the EFI compat entrypoint that will be invoked by the 32-bit EFI
+ * firmware directly (provided that it implements support for compat
+ * entrypoints).
+ */
+SYM_FUNC_START(efi32_pe_entry)
+	pushl	%ebp
+	movl	%esp, %ebp
+	pushl	%eax			// dummy push to allocate loaded_image
+
+	pushl	%ebx			// save callee-save registers
+	pushl	%edi
+
+	call	verify_cpu		// check for long mode support
+	testl	%eax, %eax
+	movl	$0x80000003, %eax	// EFI_UNSUPPORTED
+	jnz	2f
+
+	call	1f
+1:	pop	%ebx
+
+	/* Get the loaded image protocol pointer from the image handle */
+	leal	-4(%ebp), %eax
+	pushl	%eax			// &loaded_image
+	leal	(loaded_image_proto - 1b)(%ebx), %eax
+	pushl	%eax			// pass the GUID address
+	pushl	8(%ebp)			// pass the image handle
+
+	/*
+	 * Note the alignment of the stack frame.
+	 *   sys_table
+	 *   handle		<-- 16-byte aligned on entry by ABI
+	 *   return address
+	 *   frame pointer
+	 *   loaded_image	<-- local variable
+	 *   saved %ebx		<-- 16-byte aligned here
+	 *   saved %edi
+	 *   &loaded_image
+	 *   &loaded_image_proto
+	 *   handle		<-- 16-byte aligned for call to handle_protocol
+	 */
+
+	movl	12(%ebp), %eax			// sys_table
+	movl	ST32_boottime(%eax), %eax	// sys_table->boottime
+	call	*BS32_handle_protocol(%eax)	// sys_table->boottime->handle_protocol
+	addl	$12, %esp			// restore argument space
+	testl	%eax, %eax
+	jnz	2f
+
+	movl	8(%ebp), %ecx			// image_handle
+	movl	12(%ebp), %edx			// sys_table
+	movl	-4(%ebp), %esi			// loaded_image
+	movl	LI32_image_base(%esi), %esi	// loaded_image->image_base
+	leal	(startup_32 - 1b)(%ebx), %ebp	// runtime address of startup_32
+	/*
+	 * We need to set the image_offset variable here since startup_32() will
+	 * use it before we get to the 64-bit efi_pe_entry() in C code.
+	 */
+	subl	%esi, %ebp
+	movl	%ebp, (image_offset - 1b)(%ebx)	// save image_offset
+	xorl	%esi, %esi
+	jmp	efi32_entry
+
+2:	popl	%edi			// restore callee-save registers
+	popl	%ebx
+	leave
+	RET
+SYM_FUNC_END(efi32_pe_entry)
+
+/*
+ * This is the EFI handover protocol entrypoint, which is invoked by jumping to
+ * an offset of 0x190 bytes into the kernel image after it has been loaded into
+ * memory by the EFI firmware or the bootloader.
+ */
+SYM_FUNC_START(efi32_handover)
+	add	$0x4, %esp		// Discard return address
+	popl	%ecx
+	popl	%edx
+	popl	%esi
+	jmp	efi32_entry
+SYM_FUNC_END(efi32_handover)
+
+SYM_FUNC_START_LOCAL(efi32_entry)
+	call	3f
+3:	pop	%ebx
+
+	/* Save firmware GDTR and code/data selectors */
+	sgdtl	(efi32_boot_gdt - 3b)(%ebx)
+	movw	%cs, (efi32_boot_cs - 3b)(%ebx)
+	movw	%ds, (efi32_boot_ds - 3b)(%ebx)
+
+	/* Store firmware IDT descriptor */
+	sidtl	(efi32_boot_idt - 3b)(%ebx)
+
+	leal	(efi32_boot_args - 3b)(%ebx), %ebx
+	movl	%ecx, 0(%ebx)
+	movl	%edx, 4(%ebx)
+	movl	%esi, 8(%ebx)
+	movb	$0x0, 12(%ebx)		// efi_is64
+
+	/* Disable paging */
+	movl	%cr0, %eax
+	btrl	$X86_CR0_PG_BIT, %eax
+	movl	%eax, %cr0
+
+	jmp	startup_32
+SYM_FUNC_END(efi32_entry)
+
+	.section ".rodata"
+	/* EFI loaded image protocol GUID */
+	.balign 4
+SYM_DATA_START_LOCAL(loaded_image_proto)
+	.long	0x5b1b31a1
+	.word	0x9562, 0x11d2
+	.byte	0x8e, 0x3f, 0x00, 0xa0, 0xc9, 0x69, 0x72, 0x3b
+SYM_DATA_END(loaded_image_proto)
+
+	.data
+	.balign 8
+SYM_DATA_START_LOCAL(efi32_boot_gdt)
+	.word	0
+	.quad	0
+SYM_DATA_END(efi32_boot_gdt)
+
+SYM_DATA_START_LOCAL(efi32_boot_idt)
+	.word	0
+	.quad	0
+SYM_DATA_END(efi32_boot_idt)
+
+SYM_DATA_LOCAL(efi32_boot_cs, .word 0)
+SYM_DATA_LOCAL(efi32_boot_ds, .word 0)
+SYM_DATA_LOCAL(efi32_boot_args, .long 0, 0, 0)
+SYM_DATA(efi_is64, .byte 1)

diff --git a/arch/x86/boot/compressed/efi_thunk_64.S b/arch/x86/boot/compressed/efi_thunk_64.S
deleted file mode 100644
index 67e7edcdfea8..000000000000
--- a/arch/x86/boot/compressed/efi_thunk_64.S
+++ /dev/null
@@ -1,195 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright (C) 2014, 2015 Intel Corporation; author Matt Fleming
- *
- * Early support for invoking 32-bit EFI services from a 64-bit kernel.
- *
- * Because this thunking occurs before ExitBootServices() we have to
- * restore the firmware's 32-bit GDT and IDT before we make EFI service
- * calls.
- *
- * On the plus side, we don't have to worry about mangling 64-bit
- * addresses into 32-bits because we're executing with an identity
- * mapped pagetable and haven't transitioned to 64-bit virtual addresses
- * yet.
- */
-
-#include
-#include
-#include
-#include
-#include
-
-	.code64
-	.text
-SYM_FUNC_START(__efi64_thunk)
-	push	%rbp
-	push	%rbx
-
-	movl	%ds, %eax
-	push	%rax
-	movl	%es, %eax
-	push	%rax
-	movl	%ss, %eax
-	push	%rax
-
-	/* Copy args passed on stack */
-	movq	0x30(%rsp), %rbp
-	movq	0x38(%rsp), %rbx
-	movq	0x40(%rsp), %rax
-
-	/*
-	 * Convert x86-64 ABI params to i386 ABI
-	 */
-	subq	$64, %rsp
-	movl	%esi, 0x0(%rsp)
-	movl	%edx, 0x4(%rsp)
-	movl	%ecx, 0x8(%rsp)
-	movl	%r8d, 0xc(%rsp)
-	movl	%r9d, 0x10(%rsp)
-	movl	%ebp, 0x14(%rsp)
-	movl	%ebx, 0x18(%rsp)
-	movl	%eax, 0x1c(%rsp)
-
-	leaq	0x20(%rsp), %rbx
-	sgdt	(%rbx)
-
-	addq	$16, %rbx
-	sidt	(%rbx)
-
-	leaq	1f(%rip), %rbp
-
-	/*
-	 * Switch to IDT and GDT with 32-bit segments. This is the firmware GDT
-	 * and IDT that was installed when the kernel started executing. The
-	 * pointers were saved at the EFI stub entry point in head_64.S.
-	 *
-	 * Pass the saved DS selector to the 32-bit code, and use far return to
-	 * restore the saved CS selector.
-	 */
-	leaq	efi32_boot_idt(%rip), %rax
-	lidt	(%rax)
-	leaq	efi32_boot_gdt(%rip), %rax
-	lgdt	(%rax)
-
-	movzwl	efi32_boot_ds(%rip), %edx
-	movzwq	efi32_boot_cs(%rip), %rax
-	pushq	%rax
-	leaq	efi_enter32(%rip), %rax
-	pushq	%rax
-	lretq
-
-1:	addq	$64, %rsp
-	movq	%rdi, %rax
-
-	pop	%rbx
-	movl	%ebx, %ss
-	pop	%rbx
-	movl	%ebx, %es
-	pop	%rbx
-	movl	%ebx, %ds
-	/* Clear out 32-bit selector from FS and GS */
-	xorl	%ebx, %ebx
-	movl	%ebx, %fs
-	movl	%ebx, %gs
-
-	/*
-	 * Convert 32-bit status code into 64-bit.
-	 */
-	roll	$1, %eax
-	rorq	$1, %rax
-
-	pop	%rbx
-	pop	%rbp
-	RET
-SYM_FUNC_END(__efi64_thunk)
-
-	.code32
-/*
- * EFI service pointer must be in %edi.
- *
- * The stack should represent the 32-bit calling convention.
- */
-SYM_FUNC_START_LOCAL(efi_enter32)
-	/* Load firmware selector into data and stack segment registers */
-	movl	%edx, %ds
-	movl	%edx, %es
-	movl	%edx, %fs
-	movl	%edx, %gs
-	movl	%edx, %ss
-
-	/* Reload pgtables */
-	movl	%cr3, %eax
-	movl	%eax, %cr3
-
-	/* Disable paging */
-	movl	%cr0, %eax
-	btrl	$X86_CR0_PG_BIT, %eax
-	movl	%eax, %cr0
-
-	/* Disable long mode via EFER */
-	movl	$MSR_EFER, %ecx
-	rdmsr
-	btrl	$_EFER_LME, %eax
-	wrmsr
-
-	call	*%edi
-
-	/* We must preserve return value */
-	movl	%eax, %edi
-
-	/*
-	 * Some firmware will return with interrupts enabled. Be sure to
-	 * disable them before we switch GDTs and IDTs.
-	 */
-	cli
-
-	lidtl	(%ebx)
-	subl	$16, %ebx
-
-	lgdtl	(%ebx)
-
-	movl	%cr4, %eax
-	btsl	$(X86_CR4_PAE_BIT), %eax
-	movl	%eax, %cr4
-
-	movl	%cr3, %eax
-	movl	%eax, %cr3
-
-	movl	$MSR_EFER, %ecx
-	rdmsr
-	btsl	$_EFER_LME, %eax
-	wrmsr
-
-	xorl	%eax, %eax
-	lldt	%ax
-
-	pushl	$__KERNEL_CS
-	pushl	%ebp
-
-	/* Enable paging */
-	movl	%cr0, %eax
-	btsl	$X86_CR0_PG_BIT, %eax
-	movl	%eax, %cr0
-	lret
-SYM_FUNC_END(efi_enter32)
-
-	.data
-	.balign	8
-SYM_DATA_START(efi32_boot_gdt)
-	.word	0
-	.quad	0
-SYM_DATA_END(efi32_boot_gdt)
-
-SYM_DATA_START(efi32_boot_idt)
-	.word	0
-	.quad	0
-SYM_DATA_END(efi32_boot_idt)
-
-SYM_DATA_START(efi32_boot_cs)
-	.word	0
-SYM_DATA_END(efi32_boot_cs)
-
-SYM_DATA_START(efi32_boot_ds)
-	.word	0
-SYM_DATA_END(efi32_boot_ds)

diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index 3b354eb9516d..6589ddd4cfaf 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -208,10 +208,6 @@ SYM_DATA_START_LOCAL(gdt)
 	.quad 0x00cf92000000ffff	/* __KERNEL_DS */
 SYM_DATA_END_LABEL(gdt, SYM_L_LOCAL, gdt_end)
 
-#ifdef CONFIG_EFI_STUB
-SYM_DATA(image_offset, .long 0)
-#endif
-
 /*
  * Stack and heap for uncompression
  */

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index d33f060900d2..c2cdbd8a3375 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -259,32 +259,16 @@ SYM_FUNC_START(startup_32)
 	 * We place all of the values on our mini stack so lret can
 	 * used to perform that far jump.
 	 */
-	leal	rva(startup_64)(%ebp), %eax
 #ifdef CONFIG_EFI_MIXED
-	movl	rva(efi32_boot_args)(%ebp), %edi
-	testl	%edi, %edi
-	jz	1f
-	leal	rva(efi64_stub_entry)(%ebp), %eax
-	movl	rva(efi32_boot_args+4)(%ebp), %esi
-	movl	rva(efi32_boot_args+8)(%ebp), %edx	// saved bootparams pointer
-	testl	%edx, %edx
-	jnz	1f
-	/*
-	 * efi_pe_entry uses MS calling convention, which requires 32 bytes of
-	 * shadow space on the stack even if all arguments are passed in
-	 * registers. We also need an additional 8 bytes for the space that
-	 * would be occupied by the return address, and this also results in
-	 * the correct stack alignment for entry.
-	 */
-	subl	$40, %esp
-	leal	rva(efi_pe_entry)(%ebp), %eax
-	movl	%edi, %ecx			// MS calling convention
-	movl	%esi, %edx
-1:
+	cmpb	$1, rva(efi_is64)(%ebp)
+	leal	rva(startup_64_mixedmode)(%ebp), %eax
+	jne	1f
 #endif
 	/* Check if the C-bit position is correct when SEV is active */
 	call	startup32_check_sev_cbit
 
+	leal	rva(startup_64)(%ebp), %eax
+1:
 	pushl	$__KERNEL_CS
 	pushl	%eax
 
@@ -299,35 +283,7 @@ SYM_FUNC_END(startup_32)
 #ifdef CONFIG_EFI_MIXED
 	.org 0x190
 SYM_FUNC_START(efi32_stub_entry)
-	add	$0x4, %esp		/* Discard return address */
-	popl	%ecx
-	popl	%edx
-	popl	%esi
-
-	call	1f
-1:	pop	%ebp
-	subl	$ rva(1b), %ebp
-
-	movl	%esi, rva(efi32_boot_args+8)(%ebp)
-SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
-	movl	%ecx, rva(efi32_boot_args)(%ebp)
-	movl	%edx, rva(efi32_boot_args+4)(%ebp)
-	movb	$0, rva(efi_is64)(%ebp)
-
-	/* Save firmware GDTR and code/data selectors */
-	sgdtl	rva(efi32_boot_gdt)(%ebp)
-	movw	%cs, rva(efi32_boot_cs)(%ebp)
-	movw	%ds, rva(efi32_boot_ds)(%ebp)
-
-	/* Store firmware IDT descriptor */
-	sidtl	rva(efi32_boot_idt)(%ebp)
-
-	/* Disable paging */
-	movl	%cr0, %eax
-	btrl	$X86_CR0_PG_BIT, %eax
-	movl	%eax, %cr0
-
-	jmp	startup_32
+	jmp	efi32_handover
 SYM_FUNC_END(efi32_stub_entry)
 #endif
 
@@ -713,6 +669,7 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lno_longmode)
 	jmp	1b
 SYM_FUNC_END(.Lno_longmode)
 
+	.globl	verify_cpu
 #include "../../kernel/verify_cpu.S"
 
 	.data
@@ -757,98 +714,6 @@ SYM_DATA_START(boot32_idt)
 SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
 #endif
 
-#ifdef CONFIG_EFI_STUB
-SYM_DATA(image_offset, .long 0)
-#endif
-#ifdef CONFIG_EFI_MIXED
-SYM_DATA_LOCAL(efi32_boot_args, .long 0, 0, 0)
-SYM_DATA(efi_is64, .byte 1)
-
-#define ST32_boottime		60 // offsetof(efi_system_table_32_t, boottime)
-#define BS32_handle_protocol	88 // offsetof(efi_boot_services_32_t, handle_protocol)
-#define LI32_image_base		32 // offsetof(efi_loaded_image_32_t, image_base)
-
-	__HEAD
-	.code32
-SYM_FUNC_START(efi32_pe_entry)
-/*
- * efi_status_t efi32_pe_entry(efi_handle_t image_handle,
- *			       efi_system_table_32_t *sys_table)
- */
-
-	pushl	%ebp
-	movl	%esp, %ebp
-	pushl	%eax			// dummy push to allocate loaded_image
-
-	pushl	%ebx			// save callee-save registers
-	pushl	%edi
-
-	call	verify_cpu		// check for long mode support
-	testl	%eax, %eax
-	movl	$0x80000003, %eax	// EFI_UNSUPPORTED
-	jnz	2f
-
-	call	1f
-1:	pop	%ebx
-	subl	$ rva(1b), %ebx
-
-	/* Get the loaded image protocol pointer from the image handle */
-	leal	-4(%ebp), %eax
-	pushl	%eax			// &loaded_image
-	leal	rva(loaded_image_proto)(%ebx), %eax
-	pushl	%eax			// pass the GUID address
-	pushl	8(%ebp)			// pass the image handle
-
-	/*
-	 * Note the alignment of the stack frame.
-	 *   sys_table
-	 *   handle		<-- 16-byte aligned on entry by ABI
-	 *   return address
-	 *   frame pointer
-	 *   loaded_image	<-- local variable
-	 *   saved %ebx		<-- 16-byte aligned here
-	 *   saved %edi
-	 *   &loaded_image
-	 *   &loaded_image_proto
-	 *   handle		<-- 16-byte aligned for call to handle_protocol
-	 */
-
-	movl	12(%ebp), %eax			// sys_table
-	movl	ST32_boottime(%eax), %eax	// sys_table->boottime
-	call	*BS32_handle_protocol(%eax)	// sys_table->boottime->handle_protocol
-	addl	$12, %esp			// restore argument space
-	testl	%eax, %eax
-	jnz	2f
-
-	movl	8(%ebp), %ecx			// image_handle
-	movl	12(%ebp), %edx			// sys_table
-	movl	-4(%ebp), %esi			// loaded_image
-	movl	LI32_image_base(%esi), %esi	// loaded_image->image_base
-	movl	%ebx, %ebp			// startup_32 for efi32_pe_stub_entry
-	/*
-	 * We need to set the image_offset variable here since startup_32() will
-	 * use it before we get to the 64-bit efi_pe_entry() in C code.
-	 */
-	subl	%esi, %ebx
-	movl	%ebx, rva(image_offset)(%ebp)	// save image_offset
-	jmp	efi32_pe_stub_entry
-
-2:	popl	%edi			// restore callee-save registers
-	popl	%ebx
-	leave
-	RET
-SYM_FUNC_END(efi32_pe_entry)
-
-	.section ".rodata"
-	/* EFI loaded image protocol GUID */
-	.balign 4
-SYM_DATA_START_LOCAL(loaded_image_proto)
-	.long	0x5b1b31a1
-	.word	0x9562, 0x11d2
-	.byte	0x8e, 0x3f, 0x00, 0xa0, 0xc9, 0x69, 0x72, 0x3b
-SYM_DATA_END(loaded_image_proto)
-#endif
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	__HEAD
 	.code32

diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index 05ae8bcc9d67..d7b53dec01be 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -23,7 +23,8 @@
 
 const efi_system_table_t *efi_system_table;
 const efi_dxe_services_table_t *efi_dxe_table;
-extern u32 image_offset;
+
+u32 image_offset;
 static efi_loaded_image_t *image = NULL;
 
 static efi_status_t

From patchwork Mon Aug 15 13:42:19 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 597369
From: Ard Biesheuvel
To: linux-kernel@vger.kernel.org
Cc: linux-efi@vger.kernel.org, Ard Biesheuvel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, Michael Roth
Subject: [PATCH 2/6] efi/x86: simplify IDT/GDT preserve/restore
Date: Mon, 15 Aug 2022 15:42:19 +0200
Message-Id: <20220815134223.740112-3-ardb@kernel.org>
In-Reply-To: <20220815134223.740112-1-ardb@kernel.org>
References: <20220815134223.740112-1-ardb@kernel.org>
X-Mailing-List: linux-efi@vger.kernel.org

Tweak the asm and remove some redundant instructions. While at it, fix
the associated comment for style and correctness.

Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/compressed/efi_mixed.S | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
index 6fd7ac517c53..a0f217c8ab38 100644
--- a/arch/x86/boot/compressed/efi_mixed.S
+++ b/arch/x86/boot/compressed/efi_mixed.S
@@ -94,24 +94,20 @@ SYM_FUNC_START(__efi64_thunk)
 
 	leaq	0x20(%rsp), %rbx
 	sgdt	(%rbx)
-
-	addq	$16, %rbx
-	sidt	(%rbx)
+	sidt	16(%rbx)
 
 	leaq	1f(%rip), %rbp
 
 	/*
-	 * Switch to IDT and GDT with 32-bit segments. This is the firmware GDT
-	 * and IDT that was installed when the kernel started executing. The
-	 * pointers were saved at the efi32_stub_handover entry point below.
+	 * Switch to IDT and GDT with 32-bit segments. These are the firmware
+	 * GDT and IDT that were installed when the kernel started executing.
+	 * The pointers were saved at the efi32_entry entry point below.
 	 *
 	 * Pass the saved DS selector to the 32-bit code, and use far return to
 	 * restore the saved CS selector.
 	 */
-	leaq	efi32_boot_idt(%rip), %rax
-	lidt	(%rax)
-	leaq	efi32_boot_gdt(%rip), %rax
-	lgdt	(%rax)
+	lidt	efi32_boot_idt(%rip)
+	lgdt	efi32_boot_gdt(%rip)
 
 	movzwl	efi32_boot_ds(%rip), %edx
 	movzwq	efi32_boot_cs(%rip), %rax
@@ -185,9 +181,7 @@ SYM_FUNC_START_LOCAL(efi_enter32)
 	 */
 	cli
 
-	lidtl	(%ebx)
-	subl	$16, %ebx
-
+	lidtl	16(%ebx)
 	lgdtl	(%ebx)
 
 	movl	%cr4, %eax

From patchwork Mon Aug 15 13:42:20 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 597368
certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id E5C82B80EBC; Mon, 15 Aug 2022 13:42:37 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E0304C433D7; Mon, 15 Aug 2022 13:42:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1660570956; bh=offx7NkKGZBzdriaIeydphHo5vIpisW1NvkleSw7ccw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Ohf9Zs5m8Qt7F1daFud/tIcfEeshb0oIpPFlhiFQ+7nlTkbmdvH2GX2mydBQsPeIk qUfQUWzaS0qSbgZPKI3xpj1g1eGzVJfJXs/z/QtiCcNp8eliU/DCCC86zMORJjkJqL k5FJm1It+ny+n409YEt6n1u4nypaWlI8jZqhyZ6eRdCgD2d5H9KHFMUnez2iRzFvQd wU68pJjcmWQFvo2eohNJcs6eqMEsA40Yzt63kGTy3E5EtN1d8oCL8/yJwMC9SrtLr+ xSFOXGSjbFZ5kQ9FcxKBNkjyiPkQ7NRCXX8/7l36U3r42+r/0CSH+Fhod2Aw2dXqB3 1IYUrrYw1bQIw== From: Ard Biesheuvel To: linux-kernel@vger.kernel.org Cc: linux-efi@vger.kernel.org, Ard Biesheuvel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , Michael Roth Subject: [PATCH 3/6] x86/compressed: move startup32_load_idt() out of startup code Date: Mon, 15 Aug 2022 15:42:20 +0200 Message-Id: <20220815134223.740112-4-ardb@kernel.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220815134223.740112-1-ardb@kernel.org> References: <20220815134223.740112-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=5692; i=ardb@kernel.org; h=from:subject; bh=offx7NkKGZBzdriaIeydphHo5vIpisW1NvkleSw7ccw=; b=owEB7QES/pANAwAKAcNPIjmS2Y8kAcsmYgBi+k03/dhSw+jga7OIa+6N+ECIys8qujkg8w9nFzz/ ppAdM8mJAbMEAAEKAB0WIQT72WJ8QGnJQhU3VynDTyI5ktmPJAUCYvpNNwAKCRDDTyI5ktmPJDSxDA Cr+BEyES73damuvsUYyk81mavJ7fiZX92lvWIjd+0a7+1n+LwnUThC13KmzDku/SzV3PnTVklzurgk ivsDVfuXhbAnGJn+g/mmFkoqW2fkaqhmJFZf0/+nWiHoVE/ugAkXpzHoUzsILbsZ2cW44SwQjVeSRl 2BlNg1cQZolALQECBj4XnbBcUO4bwC+IsGFBqmZxOF9EFVgwT8W/RioDhCpfmDBCDdHy0wzWy6YMbC cpA54WyJZmleQLhisHugGMFYWD4Rx5qxpPmWQH7J1XCIwz/PoKnHVm+2ttW0MkJvYdLbVRJPoq4Dyo qyBB6oLZauMBzzbxbklN6BqnWZb6od/g9Nv2rp/e6oLSmwknGnlmiNW1LGQwjz5Vc9M+DXFLW9U0Vj 
Move the function startup32_load_idt() into mem_encrypt.S, and turn it
into an ordinary function, instead of relying on hidden register
arguments and non-standard calling conventions. While at it, simplify
the arithmetic involved in populating an IDT entry.

Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/compressed/head_64.S     | 71 +-------------------
 arch/x86/boot/compressed/mem_encrypt.S | 72 ++++++++++++++++++--
 2 files changed, 69 insertions(+), 74 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index c2cdbd8a3375..1ca2ed52f93c 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -118,7 +118,9 @@ SYM_FUNC_START(startup_32)
 1:

 	/* Setup Exception handling for SEV-ES */
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 	call	startup32_load_idt
+#endif

 	/* Make sure cpu supports long mode.
 	 */
 	call	verify_cpu

@@ -701,80 +703,11 @@ SYM_DATA_START(boot_idt)
 	.endr
 SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end)

-#ifdef CONFIG_AMD_MEM_ENCRYPT
-SYM_DATA_START(boot32_idt_desc)
-	.word	boot32_idt_end - boot32_idt - 1
-	.long	0
-SYM_DATA_END(boot32_idt_desc)
-	.balign 8
-SYM_DATA_START(boot32_idt)
-	.rept	32
-	.quad	0
-	.endr
-SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
-#endif
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	__HEAD
 	.code32
-/*
- * Write an IDT entry into boot32_idt
- *
- * Parameters:
- *
- * %eax:	Handler address
- * %edx:	Vector number
- *
- * Physical offset is expected in %ebp
- */
-SYM_FUNC_START(startup32_set_idt_entry)
-	push	%ebx
-	push	%ecx
-
-	/* IDT entry address to %ebx */
-	leal	rva(boot32_idt)(%ebp), %ebx
-	shl	$3, %edx
-	addl	%edx, %ebx
-
-	/* Build IDT entry, lower 4 bytes */
-	movl	%eax, %edx
-	andl	$0x0000ffff, %edx	# Target code segment offset [15:0]
-	movl	$__KERNEL32_CS, %ecx	# Target code segment selector
-	shl	$16, %ecx
-	orl	%ecx, %edx
-
-	/* Store lower 4 bytes to IDT */
-	movl	%edx, (%ebx)
-
-	/* Build IDT entry, upper 4 bytes */
-	movl	%eax, %edx
-	andl	$0xffff0000, %edx	# Target code segment offset [31:16]
-	orl	$0x00008e00, %edx	# Present, Type 32-bit Interrupt Gate
-
-	/* Store upper 4 bytes to IDT */
-	movl	%edx, 4(%ebx)
-
-	pop	%ecx
-	pop	%ebx
-	RET
-SYM_FUNC_END(startup32_set_idt_entry)
 #endif

-SYM_FUNC_START(startup32_load_idt)
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-	/* #VC handler */
-	leal	rva(startup32_vc_handler)(%ebp), %eax
-	movl	$X86_TRAP_VC, %edx
-	call	startup32_set_idt_entry
-
-	/* Load IDT */
-	leal	rva(boot32_idt)(%ebp), %eax
-	movl	%eax, rva(boot32_idt_desc+2)(%ebp)
-	lidt	rva(boot32_idt_desc)(%ebp)
-#endif
-	RET
-SYM_FUNC_END(startup32_load_idt)
-
 /*
  * Check for the correct C-bit position when the startup_32 boot-path is used.
  *

diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index a73e4d783cae..889450d073ea 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -11,9 +11,14 @@
 #include
 #include
+#include
+#include
 #include

 	.text
+	.code64
+#include "../../kernel/sev_verify_cbit.S"
+	.code32

 SYM_FUNC_START(get_sev_encryption_bit)
 	xor	%eax, %eax
@@ -61,6 +66,8 @@ SYM_FUNC_START(get_sev_encryption_bit)
 	RET
 SYM_FUNC_END(get_sev_encryption_bit)

+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
 /**
  * sev_es_req_cpuid - Request a CPUID value from the Hypervisor using
  *		      the GHCB MSR protocol
@@ -98,7 +105,7 @@ SYM_CODE_START_LOCAL(sev_es_req_cpuid)
 	jmp	1b
 SYM_CODE_END(sev_es_req_cpuid)

-SYM_CODE_START(startup32_vc_handler)
+SYM_CODE_START_LOCAL(startup32_vc_handler)
 	pushl	%eax
 	pushl	%ebx
 	pushl	%ecx
@@ -184,15 +191,70 @@ SYM_CODE_START(startup32_vc_handler)
 	jmp	.Lfail
 SYM_CODE_END(startup32_vc_handler)

-	.code64
+/*
+ * Write an IDT entry
+ *
+ * Parameters:
+ *
+ * %eax:	Handler address
+ * %edx:	Vector number
+ * %ecx:	IDT address
+ */
+SYM_FUNC_START_LOCAL(startup32_set_idt_entry)
+	/* IDT entry address to %ecx */
+	leal	(%ecx, %edx, 8), %ecx
+
+	/* Build IDT entry */
+	movl	%eax, %edx
+	andl	$0x0000ffff, %edx	# Target code segment offset [15:0]
+	andl	$0xffff0000, %eax	# Target code segment offset [31:16]
+	orl	$(__KERNEL32_CS << 16), %edx	# Target code segment selector
+	orl	$0x00008e00, %eax	# Present, Type 32-bit Interrupt Gate
+
+	/* Store entry to IDT */
+	movl	%edx, (%ecx)
+	movl	%eax, 4(%ecx)
+	RET
+SYM_FUNC_END(startup32_set_idt_entry)

-#include "../../kernel/sev_verify_cbit.S"
+SYM_FUNC_START(startup32_load_idt)
+	push	%ebp
+	push	%ebx

-	.data
+	call	1f
+1:	pop	%ebp
+	leal	(boot32_idt - 1b)(%ebp), %ebx

-#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* #VC handler */
+	leal	(startup32_vc_handler - 1b)(%ebp), %eax
+	movl	$X86_TRAP_VC, %edx
+	movl	%ebx, %ecx
+	call	startup32_set_idt_entry
+
+	/* Load IDT */
+	leal	(boot32_idt_desc - 1b)(%ebp), %ecx
+	movl	%ebx, 2(%ecx)
+	lidt	(%ecx)
+
+	pop	%ebx
+	pop	%ebp
+	RET
+SYM_FUNC_END(startup32_load_idt)
+
+	.data
 	.balign	8
 SYM_DATA(sme_me_mask,		.quad 0)
 SYM_DATA(sev_status,		.quad 0)
 SYM_DATA(sev_check_data,	.quad 0)
+
+SYM_DATA_START_LOCAL(boot32_idt)
+	.rept	32
+	.quad	0
+	.endr
+SYM_DATA_END(boot32_idt)
+
+SYM_DATA_START_LOCAL(boot32_idt_desc)
+	.word	. - boot32_idt - 1
+	.long	0
+SYM_DATA_END(boot32_idt_desc)
 #endif

From patchwork Mon Aug 15 13:42:21 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 597690
From: Ard Biesheuvel
Subject: [PATCH 4/6] x86/compressed: move startup32_check_sev_cbit out of startup code
Date: Mon, 15 Aug 2022 15:42:21 +0200
Message-Id: <20220815134223.740112-5-ardb@kernel.org>

Move startup32_check_sev_cbit() out of head_64.S and turn it into an
ordinary function using the
ordinary calling conventions, rather than preserving and restoring the
registers that are known to be live at the call site. Also reorder the
call with the EFI mixed mode check, so that the call to
startup32_check_sev_cbit() is not implicitly omitted on mixed mode
systems, even if the set of machines supporting both features is empty.

Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/compressed/head_64.S     | 83 ++------------------
 arch/x86/boot/compressed/mem_encrypt.S | 65 +++++++++++++++
 2 files changed, 70 insertions(+), 78 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 1ca2ed52f93c..382ed3d8b26a 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -251,6 +251,11 @@ SYM_FUNC_START(startup_32)
 	movl	$__BOOT_TSS, %eax
 	ltr	%ax

+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* Check if the C-bit position is correct when SEV is active */
+	call	startup32_check_sev_cbit
+#endif
+
 	/*
 	 * Setup for the jump to 64bit mode
 	 *
@@ -266,9 +271,6 @@ SYM_FUNC_START(startup_32)
 	leal	rva(startup_64_mixedmode)(%ebp), %eax
 	jne	1f
 #endif
-	/* Check if the C-bit position is correct when SEV is active */
-	call	startup32_check_sev_cbit
-
 	leal	rva(startup_64)(%ebp), %eax
 1:
 	pushl	$__KERNEL_CS
@@ -703,81 +705,6 @@ SYM_DATA_START(boot_idt)
 	.endr
 SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end)

-#ifdef CONFIG_AMD_MEM_ENCRYPT
-	__HEAD
-	.code32
-#endif
-
-/*
- * Check for the correct C-bit position when the startup_32 boot-path is used.
- *
- * The check makes use of the fact that all memory is encrypted when paging is
- * disabled. The function creates 64 bits of random data using the RDRAND
- * instruction. RDRAND is mandatory for SEV guests, so always available. If the
- * hypervisor violates that the kernel will crash right here.
- *
- * The 64 bits of random data are stored to a memory location and at the same
- * time kept in the %eax and %ebx registers. Since encryption is always active
- * when paging is off the random data will be stored encrypted in main memory.
- *
- * Then paging is enabled. When the C-bit position is correct all memory is
- * still mapped encrypted and comparing the register values with memory will
- * succeed. An incorrect C-bit position will map all memory unencrypted, so that
- * the compare will use the encrypted random data and fail.
- */
-SYM_FUNC_START(startup32_check_sev_cbit)
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-	pushl	%eax
-	pushl	%ebx
-	pushl	%ecx
-	pushl	%edx
-
-	/* Check for non-zero sev_status */
-	movl	rva(sev_status)(%ebp), %eax
-	testl	%eax, %eax
-	jz	4f
-
-	/*
-	 * Get two 32-bit random values - Don't bail out if RDRAND fails
-	 * because it is better to prevent forward progress if no random value
-	 * can be gathered.
-	 */
-1:	rdrand	%eax
-	jnc	1b
-2:	rdrand	%ebx
-	jnc	2b
-
-	/* Store to memory and keep it in the registers */
-	movl	%eax, rva(sev_check_data)(%ebp)
-	movl	%ebx, rva(sev_check_data+4)(%ebp)
-
-	/* Enable paging to see if encryption is active */
-	movl	%cr0, %edx			/* Backup %cr0 in %edx */
-	movl	$(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
-	movl	%ecx, %cr0
-
-	cmpl	%eax, rva(sev_check_data)(%ebp)
-	jne	3f
-	cmpl	%ebx, rva(sev_check_data+4)(%ebp)
-	jne	3f
-
-	movl	%edx, %cr0	/* Restore previous %cr0 */
-
-	jmp	4f
-
-3:	/* Check failed - hlt the machine */
-	hlt
-	jmp	3b
-
-4:
-	popl	%edx
-	popl	%ecx
-	popl	%ebx
-	popl	%eax
-#endif
-	RET
-SYM_FUNC_END(startup32_check_sev_cbit)
-
 /*
  * Stack and heap for uncompression
  */
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 889450d073ea..3cd3db0da49d 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -241,6 +241,71 @@ SYM_FUNC_START(startup32_load_idt)
 	RET
 SYM_FUNC_END(startup32_load_idt)

+/*
+ * Check for the correct C-bit position when the startup_32 boot-path is used.
+ *
+ * The check makes use of the fact that all memory is encrypted when paging is
+ * disabled. The function creates 64 bits of random data using the RDRAND
+ * instruction. RDRAND is mandatory for SEV guests, so always available. If the
+ * hypervisor violates that the kernel will crash right here.
+ *
+ * The 64 bits of random data are stored to a memory location and at the same
+ * time kept in the %eax and %ebx registers. Since encryption is always active
+ * when paging is off the random data will be stored encrypted in main memory.
+ *
+ * Then paging is enabled. When the C-bit position is correct all memory is
+ * still mapped encrypted and comparing the register values with memory will
+ * succeed. An incorrect C-bit position will map all memory unencrypted, so that
+ * the compare will use the encrypted random data and fail.
+ */
+SYM_FUNC_START(startup32_check_sev_cbit)
+	push	%ebp
+	push	%ebx
+
+	call	0f
+0:	pop	%ebp
+
+	/* Check for non-zero sev_status */
+	movl	(sev_status - 0b)(%ebp), %eax
+	testl	%eax, %eax
+	jz	3f
+
+	/*
+	 * Get two 32-bit random values - Don't bail out if RDRAND fails
+	 * because it is better to prevent forward progress if no random value
+	 * can be gathered.
+	 */
+1:	rdrand	%eax
+	jnc	1b
+2:	rdrand	%ebx
+	jnc	2b
+
+	/* Store to memory and keep it in the registers */
+	leal	(sev_check_data - 0b)(%ebp), %ebp
+	movl	%eax, 0(%ebp)
+	movl	%ebx, 4(%ebp)
+
+	/* Enable paging to see if encryption is active */
+	movl	%cr0, %edx			/* Backup %cr0 in %edx */
+	movl	$(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
+	movl	%ecx, %cr0
+
+	cmpl	%eax, 0(%ebp)
+	jne	4f
+	cmpl	%ebx, 4(%ebp)
+	jne	4f
+
+	movl	%edx, %cr0	/* Restore previous %cr0 */
+
+3:	pop	%ebx
+	pop	%ebp
+	RET
+
+4:	/* Check failed - hlt the machine */
+	hlt
+	jmp	4b
+SYM_FUNC_END(startup32_check_sev_cbit)
+
 	.data
 	.balign	8
 SYM_DATA(sme_me_mask,		.quad 0)

From patchwork Mon Aug 15 13:42:22 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 597689
From: Ard Biesheuvel
Subject: [PATCH 5/6] x86/compressed: adhere to calling convention in get_sev_encryption_bit()
Date: Mon, 15 Aug 2022 15:42:22 +0200
Message-Id: <20220815134223.740112-6-ardb@kernel.org>
Make get_sev_encryption_bit() follow the ordinary x86 calling
convention, and only call it if CONFIG_AMD_MEM_ENCRYPT is actually
enabled. This clarifies the calling code, and makes it more
maintainable.

Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/compressed/head_64.S     |  6 ++++--
 arch/x86/boot/compressed/mem_encrypt.S | 10 ----------
 2 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 382ed3d8b26a..4539e7c6d4c3 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -180,12 +180,12 @@ SYM_FUNC_START(startup_32)
 	 */
 	/*
 	 * If SEV is active then set the encryption mask in the page tables.
-	 * This will insure that when the kernel is copied and decompressed
+	 * This will ensure that when the kernel is copied and decompressed
 	 * it will be done so encrypted.
 	 */
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 	call	get_sev_encryption_bit
 	xorl	%edx, %edx
-#ifdef CONFIG_AMD_MEM_ENCRYPT
 	testl	%eax, %eax
 	jz	1f
 	subl	$32, %eax	/* Encryption bit is always above bit 31 */
@@ -199,6 +199,8 @@ SYM_FUNC_START(startup_32)
 	 */
 	movl	$1, rva(sev_status)(%ebp)
 1:
+#else
+	xorl	%edx, %edx
 #endif

 	/* Initialize Page tables to 0 */
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 3cd3db0da49d..b4a116283bd9 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -21,12 +21,7 @@
 	.code32
 SYM_FUNC_START(get_sev_encryption_bit)
-	xor	%eax, %eax
-
-#ifdef CONFIG_AMD_MEM_ENCRYPT
 	push	%ebx
-	push	%ecx
-	push	%edx

 	movl	$0x80000000, %eax	/* CPUID to check the highest leaf */
 	cpuid
@@ -57,12 +52,7 @@ SYM_FUNC_START(get_sev_encryption_bit)
 	xor	%eax, %eax

 .Lsev_exit:
-	pop	%edx
-	pop	%ecx
 	pop	%ebx
-
-#endif /* CONFIG_AMD_MEM_ENCRYPT */
-
 	RET
 SYM_FUNC_END(get_sev_encryption_bit)
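As context for the `subl $32, %eax` step above: the comment notes that the SEV encryption bit is always above bit 31, so subtracting 32 converts the C-bit position into a bit index within the high 32-bit word of the 64-bit page-table mask (held in %edx). A minimal sketch of that arithmetic, in hypothetical Python rather than kernel code; the example position 47 is an assumption based on common SEV hardware, not something this patch states:

```python
# Sketch of the mask arithmetic behind `subl $32, %eax`: the C-bit index
# is > 31, so after subtracting 32 it names a bit in the high 32-bit
# half of the 64-bit page-table entry.
def cbit_high_dword(cbit_pos: int) -> int:
    assert cbit_pos > 31, "Encryption bit is always above bit 31"
    return 1 << (cbit_pos - 32)   # bit within the high dword (%edx)

# A C-bit at position 47 (assumed example) lands on bit 15 of the high dword.
assert cbit_high_dword(47) == 0x8000
assert cbit_high_dword(47) << 32 == 1 << 47
```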
From patchwork Mon Aug 15 13:42:23 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 597367
From: Ard Biesheuvel
Subject: [PATCH 6/6] x86/compressed: only build mem_encrypt.S if AMD_MEM_ENCRYPT=y
Date: Mon, 15 Aug 2022 15:42:23 +0200
Message-Id: <20220815134223.740112-7-ardb@kernel.org>

Avoid building the mem_encrypt.o object if memory encryption support is
not enabled to begin with.
Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/compressed/Makefile      | 2 +-
 arch/x86/boot/compressed/mem_encrypt.S | 3 ---
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index d6dbb46696a2..9aad9ddcf3b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -99,7 +99,7 @@ vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
 ifdef CONFIG_X86_64
 	vmlinux-objs-y += $(obj)/ident_map_64.o
 	vmlinux-objs-y += $(obj)/idt_64.o $(obj)/idt_handlers_64.o
-	vmlinux-objs-y += $(obj)/mem_encrypt.o
+	vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/mem_encrypt.o
 	vmlinux-objs-y += $(obj)/pgtable_64.o
 	vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/sev.o
 endif

diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index b4a116283bd9..fad0978d6799 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -56,8 +56,6 @@ SYM_FUNC_START(get_sev_encryption_bit)
 	RET
 SYM_FUNC_END(get_sev_encryption_bit)

-#ifdef CONFIG_AMD_MEM_ENCRYPT
-
 /**
  * sev_es_req_cpuid - Request a CPUID value from the Hypervisor using
  *		      the GHCB MSR protocol
@@ -312,4 +310,3 @@ SYM_DATA_START_LOCAL(boot32_idt_desc)
 	.word	. - boot32_idt - 1
 	.long	0
 SYM_DATA_END(boot32_idt_desc)
-#endif
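The simplified IDT-entry arithmetic introduced in patch 3 (startup32_set_idt_entry) packs a handler offset, a code-segment selector, and the gate-type bits into two 32-bit words. A sketch of that packing, in hypothetical Python rather than kernel code; the selector value 0x18 is an assumption for illustration (the real value comes from `__KERNEL32_CS`), while the mask constants mirror the patch:

```python
# Build the two 32-bit words of an IA-32 interrupt-gate descriptor the
# way the reworked startup32_set_idt_entry does.
KERNEL32_CS = 0x18  # assumed selector value, for illustration only

def idt_entry_words(handler: int, selector: int = KERNEL32_CS):
    low = (handler & 0x0000FFFF) | (selector << 16)   # offset[15:0] | selector
    high = (handler & 0xFFFF0000) | 0x00008E00        # offset[31:16] | P, 32-bit int gate
    return low, high

low, high = idt_entry_words(0x12345678)
assert low == 0x00185678
assert high == 0x12348E00
```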