From patchwork Tue Apr 1 13:34:18 2025
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 877810
Date: Tue, 1 Apr 2025 15:34:18 +0200
In-Reply-To: <20250401133416.1436741-8-ardb+git@google.com>
References: <20250401133416.1436741-8-ardb+git@google.com>
Message-ID: <20250401133416.1436741-9-ardb+git@google.com>
Subject: [RFC PATCH 1/6] x86/boot/compressed: Merge local pgtable.h include into asm/boot.h
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Ard Biesheuvel,
 Tom Lendacky, Dionna Amalie Glaze, Kevin Loughlin

Merge the local include "pgtable.h", which declares the API of the 5-level
paging trampoline, into <asm/boot.h>, so that its implementation in
la57toggle.S as well as the calling code can be decoupled from the
traditional decompressor.

Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/compressed/head_64.S    |  1 -
 arch/x86/boot/compressed/la57toggle.S |  1 -
 arch/x86/boot/compressed/misc.c       |  1 -
 arch/x86/boot/compressed/pgtable.h    | 18 ------------------
 arch/x86/boot/compressed/pgtable_64.c |  1 -
 arch/x86/include/asm/boot.h           | 10 ++++++++++
 6 files changed, 10 insertions(+), 22 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index eafd4f185e77..d9dab940ff62 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -35,7 +35,6 @@
 #include
 #include
 #include
-#include "pgtable.h"

 /*
  * Fix alignment at 16 bytes. Following CONFIG_FUNCTION_ALIGNMENT will result
diff --git a/arch/x86/boot/compressed/la57toggle.S b/arch/x86/boot/compressed/la57toggle.S
index 9ee002387eb1..370075b4d95b 100644
--- a/arch/x86/boot/compressed/la57toggle.S
+++ b/arch/x86/boot/compressed/la57toggle.S
@@ -5,7 +5,6 @@
 #include
 #include
 #include
-#include "pgtable.h"

 /*
  * This is the 32-bit trampoline that will be copied over to low memory. It
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 1cdcd4aaf395..94b5991da001 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -14,7 +14,6 @@
 #include "misc.h"
 #include "error.h"
-#include "pgtable.h"
 #include "../string.h"
 #include "../voffset.h"
 #include
diff --git a/arch/x86/boot/compressed/pgtable.h b/arch/x86/boot/compressed/pgtable.h
deleted file mode 100644
index 6d595abe06b3..000000000000
--- a/arch/x86/boot/compressed/pgtable.h
+++ /dev/null
@@ -1,18 +0,0 @@
-#ifndef BOOT_COMPRESSED_PAGETABLE_H
-#define BOOT_COMPRESSED_PAGETABLE_H
-
-#define TRAMPOLINE_32BIT_SIZE		(2 * PAGE_SIZE)
-
-#define TRAMPOLINE_32BIT_CODE_OFFSET	PAGE_SIZE
-#define TRAMPOLINE_32BIT_CODE_SIZE	0xA0
-
-#ifndef __ASSEMBLER__
-
-extern unsigned long *trampoline_32bit;
-
-extern void trampoline_32bit_src(void *trampoline, bool enable_5lvl);
-
-extern const u16 trampoline_ljmp_imm_offset;
-
-#endif /* __ASSEMBLER__ */
-#endif /* BOOT_COMPRESSED_PAGETABLE_H */
diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
index d8c5de40669d..5a6c7a190e5b 100644
--- a/arch/x86/boot/compressed/pgtable_64.c
+++ b/arch/x86/boot/compressed/pgtable_64.c
@@ -4,7 +4,6 @@
 #include
 #include
 #include
-#include "pgtable.h"
 #include "../string.h"
 #include "efi.h"
diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
index 3f02ff6d333d..02b23aa78955 100644
--- a/arch/x86/include/asm/boot.h
+++ b/arch/x86/include/asm/boot.h
@@ -74,6 +74,11 @@
 # define BOOT_STACK_SIZE	0x1000
 #endif

+#define TRAMPOLINE_32BIT_SIZE		(2 * PAGE_SIZE)
+
+#define TRAMPOLINE_32BIT_CODE_OFFSET	PAGE_SIZE
+#define TRAMPOLINE_32BIT_CODE_SIZE	0xA0
+
 #ifndef __ASSEMBLER__
 extern unsigned int output_len;
 extern const unsigned long kernel_text_size;
@@ -83,6 +88,11 @@ unsigned long decompress_kernel(unsigned char *outbuf, unsigned long virt_addr,
 			       void (*error)(char *x));
 extern struct boot_params *boot_params_ptr;
+extern unsigned long *trampoline_32bit;
+extern const u16 trampoline_ljmp_imm_offset;
+
+void trampoline_32bit_src(void *trampoline, bool enable_5lvl);
+
 #endif

 #endif /* _ASM_X86_BOOT_H */
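For context, the trampoline interface that becomes visible through <asm/boot.h>
is consumed roughly like this; a minimal sketch with a hypothetical helper and
low-memory buffer, not taken from the series (the decompressor's real staging
code lives in pgtable_64.c and is not shown here):

    #include <asm/boot.h>		/* TRAMPOLINE_32BIT_*, trampoline_32bit, trampoline_32bit_src() */
    #include <linux/string.h>	/* memcpy() */

    /* hypothetical helper: stage the 32-bit trampoline in a low-memory buffer */
    static void stage_la57_trampoline(void *lowmem)
    {
    	/* the code sits TRAMPOLINE_32BIT_CODE_OFFSET bytes into the 2-page area */
    	memcpy(lowmem + TRAMPOLINE_32BIT_CODE_OFFSET, &trampoline_32bit_src,
    	       TRAMPOLINE_32BIT_CODE_SIZE);
    	trampoline_32bit = lowmem;
    }

With the declarations in <asm/boot.h>, such a caller no longer needs the
decompressor-private "pgtable.h".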
From patchwork Tue Apr 1 13:34:19 2025
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 877519
Date: Tue, 1 Apr 2025 15:34:19 +0200
In-Reply-To: <20250401133416.1436741-8-ardb+git@google.com>
References: <20250401133416.1436741-8-ardb+git@google.com>
Message-ID: <20250401133416.1436741-10-ardb+git@google.com>
Subject: [RFC PATCH 2/6] x86/boot: Move 5-level paging trampoline into startup code
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Ard Biesheuvel,
 Tom Lendacky, Dionna Amalie Glaze, Kevin Loughlin

The 5-level paging trampoline is used by both the EFI stub and the
traditional decompressor. Move it out of the decompressor sources and into
the newly minted arch/x86/boot/startup/ sub-directory, which will hold
startup code that may be shared between the decompressor, the EFI stub and
the kernel proper, and that needs to tolerate being called during early
boot, before the kernel virtual mapping has been created.

This will allow the 5-level paging trampoline to be used by EFI boot images
such as zboot, which omit the traditional decompressor entirely.

Signed-off-by: Ard Biesheuvel
---
 arch/x86/Makefile                                  | 1 +
 arch/x86/boot/compressed/Makefile                  | 2 +-
 arch/x86/boot/startup/Makefile                     | 3 +++
 arch/x86/boot/{compressed => startup}/la57toggle.S | 0
 4 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 27efe2dc2aa8..c8703276e3e7 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -287,6 +287,7 @@ archprepare: $(cpufeaturemasks.hdr)

 ###
 # Kernel objects
+core-y += arch/x86/boot/startup/
 libs-y += arch/x86/lib/

 # drivers-y are linked after core-y
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 2eb63536c5d0..468e135de88e 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -98,7 +98,6 @@ ifdef CONFIG_X86_64
 	vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/mem_encrypt.o
 	vmlinux-objs-y += $(obj)/pgtable_64.o
 	vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/sev.o
-	vmlinux-objs-y += $(obj)/la57toggle.o
 endif

 vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
@@ -107,6 +106,7 @@ vmlinux-objs-$(CONFIG_UNACCEPTED_MEMORY) += $(obj)/mem.o
 vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o

 vmlinux-libs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
+vmlinux-libs-$(CONFIG_X86_64) += $(objtree)/arch/x86/boot/startup/lib.a

 $(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE
 	$(call if_changed,ld)
diff --git a/arch/x86/boot/startup/Makefile b/arch/x86/boot/startup/Makefile
new file mode 100644
index 000000000000..03519ef4869d
--- /dev/null
+++ b/arch/x86/boot/startup/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0
+
+lib-$(CONFIG_X86_64) += la57toggle.o
diff --git a/arch/x86/boot/compressed/la57toggle.S b/arch/x86/boot/startup/la57toggle.S
similarity index 100%
rename from arch/x86/boot/compressed/la57toggle.S
rename to arch/x86/boot/startup/la57toggle.S
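A brief note on the build mechanics relied on here (my summary, not taken
from the commit message): objects listed in lib-y are collected into
arch/x86/boot/startup/lib.a, and archive members are only linked into an
image that actually references them. The decompressor picks the archive up
via the line added above,

    vmlinux-libs-$(CONFIG_X86_64) += $(objtree)/arch/x86/boot/startup/lib.a

so other boot images, such as the EFI zboot path mentioned in the commit
message, can link the same archive later without dragging in objects they do
not use.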
From patchwork Tue Apr 1 13:34:20 2025
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 877809
Date: Tue, 1 Apr 2025 15:34:20 +0200
In-Reply-To: <20250401133416.1436741-8-ardb+git@google.com>
References: <20250401133416.1436741-8-ardb+git@google.com>
Message-ID: <20250401133416.1436741-11-ardb+git@google.com>
Subject: [RFC PATCH 3/6] x86/boot: Move EFI mixed mode startup code back under arch/x86
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Ard Biesheuvel,
 Tom Lendacky, Dionna Amalie Glaze, Kevin Loughlin

Linus expressed a strong preference for arch-specific asm code (i.e.,
virtually all of it) to reside under arch/ rather than anywhere else. So
move the EFI mixed mode startup code back, and put it under
arch/x86/boot/startup/, where all shared x86 startup code is going to live.

Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/startup/Makefile                                                | 3 +++
 drivers/firmware/efi/libstub/x86-mixed.S => arch/x86/boot/startup/efi-mixed.S | 0
 drivers/firmware/efi/libstub/Makefile                                         | 1 -
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/startup/Makefile b/arch/x86/boot/startup/Makefile
index 03519ef4869d..73946a3f6b3b 100644
--- a/arch/x86/boot/startup/Makefile
+++ b/arch/x86/boot/startup/Makefile
@@ -1,3 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0

+KBUILD_AFLAGS += -D__DISABLE_EXPORTS
+
 lib-$(CONFIG_X86_64) += la57toggle.o
+lib-$(CONFIG_EFI_MIXED) += efi-mixed.o
diff --git a/drivers/firmware/efi/libstub/x86-mixed.S b/arch/x86/boot/startup/efi-mixed.S
similarity index 100%
rename from drivers/firmware/efi/libstub/x86-mixed.S
rename to arch/x86/boot/startup/efi-mixed.S
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index d23a1b9fed75..2f173391b63d 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -85,7 +85,6 @@ lib-$(CONFIG_EFI_GENERIC_STUB) += efi-stub.o string.o intrinsics.o systable.o \
 lib-$(CONFIG_ARM)		+= arm32-stub.o
 lib-$(CONFIG_ARM64)		+= kaslr.o arm64.o arm64-stub.o smbios.o
 lib-$(CONFIG_X86)		+= x86-stub.o smbios.o
-lib-$(CONFIG_EFI_MIXED)		+= x86-mixed.o
 lib-$(CONFIG_X86_64)		+= x86-5lvl.o
 lib-$(CONFIG_RISCV)		+= kaslr.o riscv.o riscv-stub.o
 lib-$(CONFIG_LOONGARCH)		+= loongarch.o loongarch-stub.o
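In short, paraphrasing the commit messages of the remaining patches: code
that executes from the 1:1 mapping, before the kernel virtual mapping is
live, must not use absolute symbol addresses, which is why head64.c
currently wraps global accesses in RIP_REL_REF(). Building the files under
arch/x86/boot/startup/ with -fPIC, -mcmodel=small and hidden symbol
visibility (see the Makefile hunk in the next patch) makes the compiler emit
position-independent accesses on its own, so the wrappers and the special
.head.text placement become unnecessary. Compare, taken from the hunks
further down:

    RIP_REL_REF(next_early_pgt) = 2;	/* before: explicit RIP-relative access */
    next_early_pgt = 2;			/* after: plain C access, built -fPIC */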
From patchwork Tue Apr 1 13:34:21 2025
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 877518
Date: Tue, 1 Apr 2025 15:34:21 +0200
In-Reply-To: <20250401133416.1436741-8-ardb+git@google.com>
References: <20250401133416.1436741-8-ardb+git@google.com>
YjCn2DvGlkkRuyR2btO1Nw99bLdoyQ6Gf1oK/bUCaofNbidkMIkt6b2Wtmy5vc02t19T3QVfyxy +wgEA X-Mailer: git-send-email 2.49.0.472.ge94155a9ec-goog Message-ID: <20250401133416.1436741-12-ardb+git@google.com> Subject: [RFC PATCH 4/6] x86/boot: Move early GDT/IDT setup code into startup/ From: Ard Biesheuvel To: linux-efi@vger.kernel.org Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , Tom Lendacky , Dionna Amalie Glaze , Kevin Loughlin From: Ard Biesheuvel Move the early GDT/IDT setup code that runs long before the kernel virtual mapping is up into arch/x86/boot/startup/, and build it in a way that ensures that the code tolerates being called from the 1:1 mapping of memory. This allows the RIP_REL_REF() macro uses to be dropped, and removes the need for emitting the code into the special .head.text section. Also tweak the sed symbol matching pattern in the decompressor to match on lower case 't' or 'b', as these will be emitted by Clang for symbols with hidden linkage. Signed-off-by: Ard Biesheuvel --- arch/x86/boot/compressed/Makefile | 2 +- arch/x86/boot/startup/Makefile | 15 ++++ arch/x86/boot/startup/gdt_idt.c | 82 ++++++++++++++++++++ arch/x86/kernel/head64.c | 74 ------------------ 4 files changed, 98 insertions(+), 75 deletions(-) diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index 468e135de88e..48541cf54790 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -74,7 +74,7 @@ LDFLAGS_vmlinux += -T hostprogs := mkpiggy HOST_EXTRACFLAGS += -I$(srctree)/tools/include -sed-voffset := -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] \(_text\|__start_rodata\|__bss_start\|_end\)$$/\#define VO_\2 _AC(0x\1,UL)/p' +sed-voffset := -e 's/^\([0-9a-fA-F]*\) [ABbCDGRSTtVW] \(_text\|__start_rodata\|__bss_start\|_end\)$$/\#define VO_\2 _AC(0x\1,UL)/p' quiet_cmd_voffset = VOFFSET $@ cmd_voffset = $(NM) $< | sed -n $(sed-voffset) > $@ diff --git a/arch/x86/boot/startup/Makefile b/arch/x86/boot/startup/Makefile index 73946a3f6b3b..34b324cbd5a4 100644 --- a/arch/x86/boot/startup/Makefile +++ b/arch/x86/boot/startup/Makefile @@ -1,6 +1,21 @@ # SPDX-License-Identifier: GPL-2.0 KBUILD_AFLAGS += -D__DISABLE_EXPORTS +KBUILD_CFLAGS += -D__DISABLE_EXPORTS -mcmodel=small -fPIC \ + -Os -DDISABLE_BRANCH_PROFILING \ + $(DISABLE_STACKLEAK_PLUGIN) \ + -fno-stack-protector -D__NO_FORTIFY \ + -include $(srctree)/include/linux/hidden.h + +# disable ftrace hooks +KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) +KASAN_SANITIZE := n +KCSAN_SANITIZE := n +KMSAN_SANITIZE := n +UBSAN_SANITIZE := n +KCOV_INSTRUMENT := n + +obj-$(CONFIG_X86_64) += gdt_idt.o lib-$(CONFIG_X86_64) += la57toggle.o lib-$(CONFIG_EFI_MIXED) += efi-mixed.o diff --git a/arch/x86/boot/startup/gdt_idt.c b/arch/x86/boot/startup/gdt_idt.c new file mode 100644 index 000000000000..b382d5db2586 --- /dev/null +++ b/arch/x86/boot/startup/gdt_idt.c @@ -0,0 +1,82 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include + +#include +#include +#include +#include + +/* + * Data structures and code used for IDT setup in head_64.S. The bringup-IDT is + * used until the idt_table takes over. On the boot CPU this happens in + * x86_64_start_kernel(), on secondary CPUs in start_secondary(). In both cases + * this happens in the functions called from head_64.S. + * + * The idt_table can't be used that early because all the code modifying it is + * in idt.c and can be instrumented by tracing or KASAN, which both don't work + * during early CPU bringup. 
Also the idt_table has the runtime vectors + * configured which require certain CPU state to be setup already (like TSS), + * which also hasn't happened yet in early CPU bringup. + */ +static gate_desc bringup_idt_table[NUM_EXCEPTION_VECTORS] __page_aligned_data; + +/* This may run while still in the direct mapping */ +static void startup_64_load_idt(void *vc_handler) +{ + struct desc_ptr desc = { + .address = (unsigned long)bringup_idt_table, + .size = sizeof(bringup_idt_table) - 1, + }; + struct idt_data data; + gate_desc idt_desc; + + /* @vc_handler is set only for a VMM Communication Exception */ + if (vc_handler) { + init_idt_data(&data, X86_TRAP_VC, vc_handler); + idt_init_desc(&idt_desc, &data); + native_write_idt_entry((gate_desc *)desc.address, X86_TRAP_VC, &idt_desc); + } + + native_load_idt(&desc); +} + +/* This is used when running on kernel addresses */ +void early_setup_idt(void) +{ + void *handler = NULL; + + if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) { + setup_ghcb(); + handler = vc_boot_ghcb; + } + + startup_64_load_idt(handler); +} + +/* + * Setup boot CPU state needed before kernel switches to virtual addresses. + */ +void __init startup_64_setup_gdt_idt(void) +{ + void *handler = NULL; + + struct desc_ptr startup_gdt_descr = { + .address = (__force unsigned long)gdt_page.gdt, + .size = GDT_SIZE - 1, + }; + + /* Load GDT */ + native_load_gdt(&startup_gdt_descr); + + /* New GDT is live - reload data segment registers */ + asm volatile("movl %%eax, %%ds\n" + "movl %%eax, %%ss\n" + "movl %%eax, %%es\n" : : "a"(__KERNEL_DS) : "memory"); + + if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) + handler = vc_no_ghcb; + + startup_64_load_idt(handler); +} diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index fa9b6339975f..5b993b545c7e 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -512,77 +512,3 @@ void __init __noreturn x86_64_start_reservations(char *real_mode_data) start_kernel(); } - -/* - * Data structures and code used for IDT setup in head_64.S. The bringup-IDT is - * used until the idt_table takes over. On the boot CPU this happens in - * x86_64_start_kernel(), on secondary CPUs in start_secondary(). In both cases - * this happens in the functions called from head_64.S. - * - * The idt_table can't be used that early because all the code modifying it is - * in idt.c and can be instrumented by tracing or KASAN, which both don't work - * during early CPU bringup. Also the idt_table has the runtime vectors - * configured which require certain CPU state to be setup already (like TSS), - * which also hasn't happened yet in early CPU bringup. 
- */ -static gate_desc bringup_idt_table[NUM_EXCEPTION_VECTORS] __page_aligned_data; - -/* This may run while still in the direct mapping */ -static void __head startup_64_load_idt(void *vc_handler) -{ - struct desc_ptr desc = { - .address = (unsigned long)&RIP_REL_REF(bringup_idt_table), - .size = sizeof(bringup_idt_table) - 1, - }; - struct idt_data data; - gate_desc idt_desc; - - /* @vc_handler is set only for a VMM Communication Exception */ - if (vc_handler) { - init_idt_data(&data, X86_TRAP_VC, vc_handler); - idt_init_desc(&idt_desc, &data); - native_write_idt_entry((gate_desc *)desc.address, X86_TRAP_VC, &idt_desc); - } - - native_load_idt(&desc); -} - -/* This is used when running on kernel addresses */ -void early_setup_idt(void) -{ - void *handler = NULL; - - if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) { - setup_ghcb(); - handler = vc_boot_ghcb; - } - - startup_64_load_idt(handler); -} - -/* - * Setup boot CPU state needed before kernel switches to virtual addresses. - */ -void __head startup_64_setup_gdt_idt(void) -{ - struct desc_struct *gdt = (void *)(__force unsigned long)gdt_page.gdt; - void *handler = NULL; - - struct desc_ptr startup_gdt_descr = { - .address = (unsigned long)&RIP_REL_REF(*gdt), - .size = GDT_SIZE - 1, - }; - - /* Load GDT */ - native_load_gdt(&startup_gdt_descr); - - /* New GDT is live - reload data segment registers */ - asm volatile("movl %%eax, %%ds\n" - "movl %%eax, %%ss\n" - "movl %%eax, %%es\n" : : "a"(__KERNEL_DS) : "memory"); - - if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) - handler = &RIP_REL_REF(vc_no_ghcb); - - startup_64_load_idt(handler); -} From patchwork Tue Apr 1 13:34:22 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 877808 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D052920485D for ; Tue, 1 Apr 2025 13:34:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743514485; cv=none; b=cJj3YCcobaBudSjNdrrINdYOvdsF07H7XJIksRVeJmygRxr1EYlgaGzUM5cAbIjaYF4Fs4kHqQm9BgCHF/DAbaCLHYvHIj36/FXeDg4HMQRaeaffNt4Lyo+TltfD54JmYXYZyvjN24sFEZpbeT71n707iKEAcLdPP1mGJU0+IEU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743514485; c=relaxed/simple; bh=N9vA4EpIqk8loSj9DhnOdzc82juHV1lnon569PilmDg=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=KioqS2Te5Ee9HDwicZPqVC7RMP0ck5fE1hPYo7clSnmb0QfuZQe8eF+E4voXQVrQd2tHKcHcfWH9AEGcEinmYtqBamYZ5xZqd80WIdDGI65zZJAZAniTQDDIy8Q3w21PQiinbfrXGRkak9anamNbo+CxSxlLpD33btbopIMx2Pk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--ardb.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=Sv+bkqfM; arc=none smtp.client-ip=209.85.128.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--ardb.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Sv+bkqfM" Received: by mail-wm1-f74.google.com with SMTP 
Date: Tue, 1 Apr 2025 15:34:22 +0200
In-Reply-To: <20250401133416.1436741-8-ardb+git@google.com>
References: <20250401133416.1436741-8-ardb+git@google.com>
Message-ID: <20250401133416.1436741-13-ardb+git@google.com>
Subject: [RFC PATCH 5/6] x86/boot: Move early kernel mapping code into startup/
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Ard Biesheuvel,
 Tom Lendacky, Dionna Amalie Glaze, Kevin Loughlin

The startup code that constructs the kernel virtual mapping runs from the
1:1 mapping of memory itself, and therefore cannot use absolute symbol
references. Move this code into a separate source file under
arch/x86/boot/startup/, where all such code will be kept from now on.
Since all code here is constructed in a manner that ensures that it tolerates running from the 1:1 mapping of memory, any uses of the RIP_REL_REF() macro can be dropped, along with __head annotations for placing this code in a dedicated startup section. Signed-off-by: Ard Biesheuvel --- arch/x86/boot/startup/Makefile | 2 +- arch/x86/boot/startup/map_kernel.c | 232 ++++++++++++++++++++ arch/x86/kernel/head64.c | 228 +------------------ 3 files changed, 234 insertions(+), 228 deletions(-) diff --git a/arch/x86/boot/startup/Makefile b/arch/x86/boot/startup/Makefile index 34b324cbd5a4..01423063fec2 100644 --- a/arch/x86/boot/startup/Makefile +++ b/arch/x86/boot/startup/Makefile @@ -15,7 +15,7 @@ KMSAN_SANITIZE := n UBSAN_SANITIZE := n KCOV_INSTRUMENT := n -obj-$(CONFIG_X86_64) += gdt_idt.o +obj-$(CONFIG_X86_64) += gdt_idt.o map_kernel.o lib-$(CONFIG_X86_64) += la57toggle.o lib-$(CONFIG_EFI_MIXED) += efi-mixed.o diff --git a/arch/x86/boot/startup/map_kernel.c b/arch/x86/boot/startup/map_kernel.c new file mode 100644 index 000000000000..ba856be92d10 --- /dev/null +++ b/arch/x86/boot/startup/map_kernel.c @@ -0,0 +1,232 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include + +#include +#include +#include + +extern pmd_t early_dynamic_pgts[EARLY_DYNAMIC_PAGE_TABLES][PTRS_PER_PMD]; +extern unsigned int next_early_pgt; + +#ifdef CONFIG_X86_5LEVEL +unsigned int __pgtable_l5_enabled __ro_after_init; +unsigned int pgdir_shift __ro_after_init = 39; +EXPORT_SYMBOL(pgdir_shift); +unsigned int ptrs_per_p4d __ro_after_init = 1; +EXPORT_SYMBOL(ptrs_per_p4d); +#endif + +#ifdef CONFIG_DYNAMIC_MEMORY_LAYOUT +unsigned long page_offset_base __ro_after_init = __PAGE_OFFSET_BASE_L4; +EXPORT_SYMBOL(page_offset_base); +unsigned long vmalloc_base __ro_after_init = __VMALLOC_BASE_L4; +EXPORT_SYMBOL(vmalloc_base); +unsigned long vmemmap_base __ro_after_init = __VMEMMAP_BASE_L4; +EXPORT_SYMBOL(vmemmap_base); +#endif + +static inline bool check_la57_support(void) +{ + if (!IS_ENABLED(CONFIG_X86_5LEVEL)) + return false; + + /* + * 5-level paging is detected and enabled at kernel decompression + * stage. Only check if it has been enabled there. + */ + if (!(native_read_cr4() & X86_CR4_LA57)) + return false; + + __pgtable_l5_enabled = 1; + pgdir_shift = 48; + ptrs_per_p4d = 512; + page_offset_base = __PAGE_OFFSET_BASE_L5; + vmalloc_base = __VMALLOC_BASE_L5; + vmemmap_base = __VMEMMAP_BASE_L5; + + return true; +} + +static unsigned long sme_postprocess_startup(struct boot_params *bp, + pmdval_t *pmd, + unsigned long p2v_offset) +{ + unsigned long paddr, paddr_end; + int i; + + /* Encrypt the kernel and related (if SME is active) */ + sme_encrypt_kernel(bp); + + /* + * Clear the memory encryption mask from the .bss..decrypted section. + * The bss section will be memset to zero later in the initialization so + * there is no need to zero it after changing the memory encryption + * attribute. + */ + if (sme_get_me_mask()) { + paddr = (unsigned long)__start_bss_decrypted; + paddr_end = (unsigned long)__end_bss_decrypted; + + for (; paddr < paddr_end; paddr += PMD_SIZE) { + /* + * On SNP, transition the page to shared in the RMP table so that + * it is consistent with the page table attribute change. + * + * __start_bss_decrypted has a virtual address in the high range + * mapping (kernel .text). 
PVALIDATE, by way of + * early_snp_set_memory_shared(), requires a valid virtual + * address but the kernel is currently running off of the identity + * mapping so use the PA to get a *currently* valid virtual address. + */ + early_snp_set_memory_shared(paddr, paddr, PTRS_PER_PMD); + + i = pmd_index(paddr - p2v_offset); + pmd[i] -= sme_get_me_mask(); + } + } + + /* + * Return the SME encryption mask (if SME is active) to be used as a + * modifier for the initial pgdir entry programmed into CR3. + */ + return sme_get_me_mask(); +} + +unsigned long __init __startup_64(unsigned long p2v_offset, + struct boot_params *bp) +{ + pmd_t (*early_pgts)[PTRS_PER_PMD] = early_dynamic_pgts; + unsigned long physaddr = (unsigned long)_text; + unsigned long va_text, va_end; + unsigned long pgtable_flags; + unsigned long load_delta; + pgdval_t *pgd; + p4dval_t *p4d; + pudval_t *pud; + pmdval_t *pmd, pmd_entry; + bool la57; + int i; + + la57 = check_la57_support(); + + /* Is the address too large? */ + if (physaddr >> MAX_PHYSMEM_BITS) + for (;;); + + /* + * Compute the delta between the address I am compiled to run at + * and the address I am actually running at. + */ + phys_base = load_delta = __START_KERNEL_map + p2v_offset; + + /* Is the address not 2M aligned? */ + if (load_delta & ~PMD_MASK) + for (;;); + + va_text = physaddr - p2v_offset; + va_end = (unsigned long)_end - p2v_offset; + + /* Include the SME encryption mask in the fixup value */ + load_delta += sme_get_me_mask(); + + /* Fixup the physical addresses in the page table */ + + pgd = &early_top_pgt[0].pgd; + pgd[pgd_index(__START_KERNEL_map)] += load_delta; + + if (IS_ENABLED(CONFIG_X86_5LEVEL) && la57) { + p4d = (p4dval_t *)level4_kernel_pgt; + p4d[MAX_PTRS_PER_P4D - 1] += load_delta; + + pgd[pgd_index(__START_KERNEL_map)] = (pgdval_t)p4d | _PAGE_TABLE; + } + + level3_kernel_pgt[PTRS_PER_PUD - 2].pud += load_delta; + level3_kernel_pgt[PTRS_PER_PUD - 1].pud += load_delta; + + for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--) + level2_fixmap_pgt[i].pmd += load_delta; + + /* + * Set up the identity mapping for the switchover. These + * entries should *NOT* have the global bit set! This also + * creates a bunch of nonsense entries but that is fine -- + * it avoids problems around wraparound. + */ + + pud = &early_pgts[0]->pmd; + pmd = &early_pgts[1]->pmd; + next_early_pgt = 2; + + pgtable_flags = _KERNPG_TABLE_NOENC + sme_get_me_mask(); + + if (la57) { + p4d = &early_pgts[next_early_pgt++]->pmd; + + i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD; + pgd[i + 0] = (pgdval_t)p4d + pgtable_flags; + pgd[i + 1] = (pgdval_t)p4d + pgtable_flags; + + i = physaddr >> P4D_SHIFT; + p4d[(i + 0) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags; + p4d[(i + 1) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags; + } else { + i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD; + pgd[i + 0] = (pgdval_t)pud + pgtable_flags; + pgd[i + 1] = (pgdval_t)pud + pgtable_flags; + } + + i = physaddr >> PUD_SHIFT; + pud[(i + 0) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags; + pud[(i + 1) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags; + + pmd_entry = __PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL; + /* Filter out unsupported __PAGE_KERNEL_* bits: */ + pmd_entry &= __supported_pte_mask; + pmd_entry += sme_get_me_mask(); + pmd_entry += physaddr; + + for (i = 0; i < DIV_ROUND_UP(va_end - va_text, PMD_SIZE); i++) { + int idx = i + (physaddr >> PMD_SHIFT); + + pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE; + } + + /* + * Fixup the kernel text+data virtual addresses. 
Note that + * we might write invalid pmds, when the kernel is relocated + * cleanup_highmap() fixes this up along with the mappings + * beyond _end. + * + * Only the region occupied by the kernel image has so far + * been checked against the table of usable memory regions + * provided by the firmware, so invalidate pages outside that + * region. A page table entry that maps to a reserved area of + * memory would allow processor speculation into that area, + * and on some hardware (particularly the UV platform) even + * speculative access to some reserved areas is caught as an + * error, causing the BIOS to halt the system. + */ + + pmd = &level2_kernel_pgt[0].pmd; + + /* invalidate pages before the kernel image */ + for (i = 0; i < pmd_index(va_text); i++) + pmd[i] &= ~_PAGE_PRESENT; + + /* fixup pages that are part of the kernel image */ + for (; i <= pmd_index(va_end); i++) + if (pmd[i] & _PAGE_PRESENT) + pmd[i] += load_delta; + + /* invalidate pages after the kernel image */ + for (; i < PTRS_PER_PMD; i++) + pmd[i] &= ~_PAGE_PRESENT; + + return sme_postprocess_startup(bp, pmd, p2v_offset); +} diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index 5b993b545c7e..9afb123a8676 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -47,235 +47,9 @@ * Manage page tables very early on. */ extern pmd_t early_dynamic_pgts[EARLY_DYNAMIC_PAGE_TABLES][PTRS_PER_PMD]; -static unsigned int __initdata next_early_pgt; +unsigned int __initdata next_early_pgt; pmdval_t early_pmd_flags = __PAGE_KERNEL_LARGE & ~(_PAGE_GLOBAL | _PAGE_NX); -#ifdef CONFIG_X86_5LEVEL -unsigned int __pgtable_l5_enabled __ro_after_init; -unsigned int pgdir_shift __ro_after_init = 39; -EXPORT_SYMBOL(pgdir_shift); -unsigned int ptrs_per_p4d __ro_after_init = 1; -EXPORT_SYMBOL(ptrs_per_p4d); -#endif - -#ifdef CONFIG_DYNAMIC_MEMORY_LAYOUT -unsigned long page_offset_base __ro_after_init = __PAGE_OFFSET_BASE_L4; -EXPORT_SYMBOL(page_offset_base); -unsigned long vmalloc_base __ro_after_init = __VMALLOC_BASE_L4; -EXPORT_SYMBOL(vmalloc_base); -unsigned long vmemmap_base __ro_after_init = __VMEMMAP_BASE_L4; -EXPORT_SYMBOL(vmemmap_base); -#endif - -static inline bool check_la57_support(void) -{ - if (!IS_ENABLED(CONFIG_X86_5LEVEL)) - return false; - - /* - * 5-level paging is detected and enabled at kernel decompression - * stage. Only check if it has been enabled there. - */ - if (!(native_read_cr4() & X86_CR4_LA57)) - return false; - - RIP_REL_REF(__pgtable_l5_enabled) = 1; - RIP_REL_REF(pgdir_shift) = 48; - RIP_REL_REF(ptrs_per_p4d) = 512; - RIP_REL_REF(page_offset_base) = __PAGE_OFFSET_BASE_L5; - RIP_REL_REF(vmalloc_base) = __VMALLOC_BASE_L5; - RIP_REL_REF(vmemmap_base) = __VMEMMAP_BASE_L5; - - return true; -} - -static unsigned long __head sme_postprocess_startup(struct boot_params *bp, - pmdval_t *pmd, - unsigned long p2v_offset) -{ - unsigned long paddr, paddr_end; - int i; - - /* Encrypt the kernel and related (if SME is active) */ - sme_encrypt_kernel(bp); - - /* - * Clear the memory encryption mask from the .bss..decrypted section. - * The bss section will be memset to zero later in the initialization so - * there is no need to zero it after changing the memory encryption - * attribute. 
- */ - if (sme_get_me_mask()) { - paddr = (unsigned long)&RIP_REL_REF(__start_bss_decrypted); - paddr_end = (unsigned long)&RIP_REL_REF(__end_bss_decrypted); - - for (; paddr < paddr_end; paddr += PMD_SIZE) { - /* - * On SNP, transition the page to shared in the RMP table so that - * it is consistent with the page table attribute change. - * - * __start_bss_decrypted has a virtual address in the high range - * mapping (kernel .text). PVALIDATE, by way of - * early_snp_set_memory_shared(), requires a valid virtual - * address but the kernel is currently running off of the identity - * mapping so use the PA to get a *currently* valid virtual address. - */ - early_snp_set_memory_shared(paddr, paddr, PTRS_PER_PMD); - - i = pmd_index(paddr - p2v_offset); - pmd[i] -= sme_get_me_mask(); - } - } - - /* - * Return the SME encryption mask (if SME is active) to be used as a - * modifier for the initial pgdir entry programmed into CR3. - */ - return sme_get_me_mask(); -} - -/* Code in __startup_64() can be relocated during execution, but the compiler - * doesn't have to generate PC-relative relocations when accessing globals from - * that function. Clang actually does not generate them, which leads to - * boot-time crashes. To work around this problem, every global pointer must - * be accessed using RIP_REL_REF(). Kernel virtual addresses can be determined - * by subtracting p2v_offset from the RIP-relative address. - */ -unsigned long __head __startup_64(unsigned long p2v_offset, - struct boot_params *bp) -{ - pmd_t (*early_pgts)[PTRS_PER_PMD] = RIP_REL_REF(early_dynamic_pgts); - unsigned long physaddr = (unsigned long)&RIP_REL_REF(_text); - unsigned long va_text, va_end; - unsigned long pgtable_flags; - unsigned long load_delta; - pgdval_t *pgd; - p4dval_t *p4d; - pudval_t *pud; - pmdval_t *pmd, pmd_entry; - bool la57; - int i; - - la57 = check_la57_support(); - - /* Is the address too large? */ - if (physaddr >> MAX_PHYSMEM_BITS) - for (;;); - - /* - * Compute the delta between the address I am compiled to run at - * and the address I am actually running at. - */ - load_delta = __START_KERNEL_map + p2v_offset; - RIP_REL_REF(phys_base) = load_delta; - - /* Is the address not 2M aligned? */ - if (load_delta & ~PMD_MASK) - for (;;); - - va_text = physaddr - p2v_offset; - va_end = (unsigned long)&RIP_REL_REF(_end) - p2v_offset; - - /* Include the SME encryption mask in the fixup value */ - load_delta += sme_get_me_mask(); - - /* Fixup the physical addresses in the page table */ - - pgd = &RIP_REL_REF(early_top_pgt)->pgd; - pgd[pgd_index(__START_KERNEL_map)] += load_delta; - - if (IS_ENABLED(CONFIG_X86_5LEVEL) && la57) { - p4d = (p4dval_t *)&RIP_REL_REF(level4_kernel_pgt); - p4d[MAX_PTRS_PER_P4D - 1] += load_delta; - - pgd[pgd_index(__START_KERNEL_map)] = (pgdval_t)p4d | _PAGE_TABLE; - } - - RIP_REL_REF(level3_kernel_pgt)[PTRS_PER_PUD - 2].pud += load_delta; - RIP_REL_REF(level3_kernel_pgt)[PTRS_PER_PUD - 1].pud += load_delta; - - for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--) - RIP_REL_REF(level2_fixmap_pgt)[i].pmd += load_delta; - - /* - * Set up the identity mapping for the switchover. These - * entries should *NOT* have the global bit set! This also - * creates a bunch of nonsense entries but that is fine -- - * it avoids problems around wraparound. 
- */ - - pud = &early_pgts[0]->pmd; - pmd = &early_pgts[1]->pmd; - RIP_REL_REF(next_early_pgt) = 2; - - pgtable_flags = _KERNPG_TABLE_NOENC + sme_get_me_mask(); - - if (la57) { - p4d = &early_pgts[RIP_REL_REF(next_early_pgt)++]->pmd; - - i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD; - pgd[i + 0] = (pgdval_t)p4d + pgtable_flags; - pgd[i + 1] = (pgdval_t)p4d + pgtable_flags; - - i = physaddr >> P4D_SHIFT; - p4d[(i + 0) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags; - p4d[(i + 1) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags; - } else { - i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD; - pgd[i + 0] = (pgdval_t)pud + pgtable_flags; - pgd[i + 1] = (pgdval_t)pud + pgtable_flags; - } - - i = physaddr >> PUD_SHIFT; - pud[(i + 0) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags; - pud[(i + 1) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags; - - pmd_entry = __PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL; - /* Filter out unsupported __PAGE_KERNEL_* bits: */ - pmd_entry &= RIP_REL_REF(__supported_pte_mask); - pmd_entry += sme_get_me_mask(); - pmd_entry += physaddr; - - for (i = 0; i < DIV_ROUND_UP(va_end - va_text, PMD_SIZE); i++) { - int idx = i + (physaddr >> PMD_SHIFT); - - pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE; - } - - /* - * Fixup the kernel text+data virtual addresses. Note that - * we might write invalid pmds, when the kernel is relocated - * cleanup_highmap() fixes this up along with the mappings - * beyond _end. - * - * Only the region occupied by the kernel image has so far - * been checked against the table of usable memory regions - * provided by the firmware, so invalidate pages outside that - * region. A page table entry that maps to a reserved area of - * memory would allow processor speculation into that area, - * and on some hardware (particularly the UV platform) even - * speculative access to some reserved areas is caught as an - * error, causing the BIOS to halt the system. 
- */ - - pmd = &RIP_REL_REF(level2_kernel_pgt)->pmd; - - /* invalidate pages before the kernel image */ - for (i = 0; i < pmd_index(va_text); i++) - pmd[i] &= ~_PAGE_PRESENT; - - /* fixup pages that are part of the kernel image */ - for (; i <= pmd_index(va_end); i++) - if (pmd[i] & _PAGE_PRESENT) - pmd[i] += load_delta; - - /* invalidate pages after the kernel image */ - for (; i < PTRS_PER_PMD; i++) - pmd[i] &= ~_PAGE_PRESENT; - - return sme_postprocess_startup(bp, pmd, p2v_offset); -} - /* Wipe all early page tables except for the kernel symbol map */ static void __init reset_early_page_tables(void) { From patchwork Tue Apr 1 13:34:23 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 877517 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0A38B204C29 for ; Tue, 1 Apr 2025 13:34:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743514488; cv=none; b=FhZtjetAUtz6b+wEytJSpsNvAxk/mEWQfmi0YDKlrDJqB0cbLBWDe/cl6FZQFHacUgPzpyLGND5TQSBtHmfWJa5R2B0Zc4fWyf9Rvq2/neq/HaUaXhB2yKyeoqYA8h8D4w3exM2AO58PkcD3/Yt2YrseB6YyAH/LMuSl/O5iQOw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743514488; c=relaxed/simple; bh=gdyI7unVAqelFcsV1oQkRb/FpEZooiabpLxxOvJoVAE=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=W2CI5dbukMgRDen4touK8EtU6XZB0urcx30LtM6LaBJYKlUungfEWKXvnAZCzmraXqOfoicJoDCWZwKqOCYlCg4wdGq4ol7cBYjnF82UmitiAqm39mV5GJ4pgniYHph5gKpqPXanF3jphv0FQQxQCgPVlHcUrdUP7aKSSWbCFVM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--ardb.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=4ybXhcBI; arc=none smtp.client-ip=209.85.128.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--ardb.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="4ybXhcBI" Received: by mail-wm1-f73.google.com with SMTP id 5b1f17b1804b1-43d51bd9b41so49798345e9.3 for ; Tue, 01 Apr 2025 06:34:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1743514483; x=1744119283; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=Y3WkZ4oiyNOu5L0IFpzb/JlaH4U0a2GEGnk/9vyxvlk=; b=4ybXhcBI2lzRiLirF7OjgcBj9nTZcUvui/+PZuDa/JwnK+Kb5+UetuyN9rj92jYCrv Oka1+Ls5x5DRH+kjYhKH8oTGuqiTE1J+PdnS38mU6Hm9nSq/Gq4oDhV0U+a61iurFTaR 5yX7+1spqhXoqyo+ApJxBzRt3mioZ60Q3fnC3Nu4+As1PfJnx9Qr+dbh4QYX2rcFMRRp v8m8v9Qi1KcVbhfj6VQoPUJHj5mFStitTJBfMzemBmiZn7qpwBn69ln8jZfj189SZAk+ 1mp6t93hVubCSH9wlbhfHg7Bc9U/Oj976tiu8IYJuI1VxcJQj3WVVw54gmxocqT3k4BI cvHg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1743514483; x=1744119283; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; 
From patchwork Tue Apr 1 13:34:23 2025
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 877517
Date: Tue, 1 Apr 2025 15:34:23 +0200
In-Reply-To: <20250401133416.1436741-8-ardb+git@google.com>
References: <20250401133416.1436741-8-ardb+git@google.com>
Message-ID: <20250401133416.1436741-14-ardb+git@google.com>
Subject: [RFC PATCH 6/6] x86/boot: Move early SME init code into startup/
From: Ard Biesheuvel
To: linux-efi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , Tom Lendacky , Dionna Amalie Glaze , Kevin Loughlin

From: Ard Biesheuvel

Move the SME initialization code, which runs from the 1:1 mapping of memory
as it operates on the kernel virtual mapping, into the new sub-directory
arch/x86/boot/startup/ where all startup code will reside that needs to
tolerate executing from the 1:1 mapping.

This allows RIP_REL_REF() macro invocations and __head annotations to be
dropped.
Signed-off-by: Ard Biesheuvel --- arch/x86/boot/startup/Makefile | 1 + arch/x86/{mm/mem_encrypt_identity.c => boot/startup/sme.c} | 45 +++++++++----------- arch/x86/include/asm/mem_encrypt.h | 2 +- arch/x86/mm/Makefile | 6 --- 4 files changed, 23 insertions(+), 31 deletions(-) diff --git a/arch/x86/boot/startup/Makefile b/arch/x86/boot/startup/Makefile index 01423063fec2..480c2d2063a0 100644 --- a/arch/x86/boot/startup/Makefile +++ b/arch/x86/boot/startup/Makefile @@ -16,6 +16,7 @@ UBSAN_SANITIZE := n KCOV_INSTRUMENT := n obj-$(CONFIG_X86_64) += gdt_idt.o map_kernel.o +obj-$(CONFIG_AMD_MEM_ENCRYPT) += sme.o lib-$(CONFIG_X86_64) += la57toggle.o lib-$(CONFIG_EFI_MIXED) += efi-mixed.o diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/boot/startup/sme.c similarity index 92% rename from arch/x86/mm/mem_encrypt_identity.c rename to arch/x86/boot/startup/sme.c index 5eecdd92da10..85bd39652535 100644 --- a/arch/x86/mm/mem_encrypt_identity.c +++ b/arch/x86/boot/startup/sme.c @@ -45,8 +45,6 @@ #include #include -#include "mm_internal.h" - #define PGD_FLAGS _KERNPG_TABLE_NOENC #define P4D_FLAGS _KERNPG_TABLE_NOENC #define PUD_FLAGS _KERNPG_TABLE_NOENC @@ -93,7 +91,7 @@ struct sme_populate_pgd_data { */ static char sme_workarea[2 * PMD_SIZE] __section(".init.scratch"); -static void __head sme_clear_pgd(struct sme_populate_pgd_data *ppd) +static void __init sme_clear_pgd(struct sme_populate_pgd_data *ppd) { unsigned long pgd_start, pgd_end, pgd_size; pgd_t *pgd_p; @@ -108,7 +106,7 @@ static void __head sme_clear_pgd(struct sme_populate_pgd_data *ppd) memset(pgd_p, 0, pgd_size); } -static pud_t __head *sme_prepare_pgd(struct sme_populate_pgd_data *ppd) +static pud_t __init *sme_prepare_pgd(struct sme_populate_pgd_data *ppd) { pgd_t *pgd; p4d_t *p4d; @@ -145,7 +143,7 @@ static pud_t __head *sme_prepare_pgd(struct sme_populate_pgd_data *ppd) return pud; } -static void __head sme_populate_pgd_large(struct sme_populate_pgd_data *ppd) +static void __init sme_populate_pgd_large(struct sme_populate_pgd_data *ppd) { pud_t *pud; pmd_t *pmd; @@ -161,7 +159,7 @@ static void __head sme_populate_pgd_large(struct sme_populate_pgd_data *ppd) set_pmd(pmd, __pmd(ppd->paddr | ppd->pmd_flags)); } -static void __head sme_populate_pgd(struct sme_populate_pgd_data *ppd) +static void __init sme_populate_pgd(struct sme_populate_pgd_data *ppd) { pud_t *pud; pmd_t *pmd; @@ -187,7 +185,7 @@ static void __head sme_populate_pgd(struct sme_populate_pgd_data *ppd) set_pte(pte, __pte(ppd->paddr | ppd->pte_flags)); } -static void __head __sme_map_range_pmd(struct sme_populate_pgd_data *ppd) +static void __init __sme_map_range_pmd(struct sme_populate_pgd_data *ppd) { while (ppd->vaddr < ppd->vaddr_end) { sme_populate_pgd_large(ppd); @@ -197,7 +195,7 @@ static void __head __sme_map_range_pmd(struct sme_populate_pgd_data *ppd) } } -static void __head __sme_map_range_pte(struct sme_populate_pgd_data *ppd) +static void __init __sme_map_range_pte(struct sme_populate_pgd_data *ppd) { while (ppd->vaddr < ppd->vaddr_end) { sme_populate_pgd(ppd); @@ -207,7 +205,7 @@ static void __head __sme_map_range_pte(struct sme_populate_pgd_data *ppd) } } -static void __head __sme_map_range(struct sme_populate_pgd_data *ppd, +static void __init __sme_map_range(struct sme_populate_pgd_data *ppd, pmdval_t pmd_flags, pteval_t pte_flags) { unsigned long vaddr_end; @@ -231,22 +229,22 @@ static void __head __sme_map_range(struct sme_populate_pgd_data *ppd, __sme_map_range_pte(ppd); } -static void __head sme_map_range_encrypted(struct sme_populate_pgd_data 
*ppd) +static void __init sme_map_range_encrypted(struct sme_populate_pgd_data *ppd) { __sme_map_range(ppd, PMD_FLAGS_ENC, PTE_FLAGS_ENC); } -static void __head sme_map_range_decrypted(struct sme_populate_pgd_data *ppd) +static void __init sme_map_range_decrypted(struct sme_populate_pgd_data *ppd) { __sme_map_range(ppd, PMD_FLAGS_DEC, PTE_FLAGS_DEC); } -static void __head sme_map_range_decrypted_wp(struct sme_populate_pgd_data *ppd) +static void __init sme_map_range_decrypted_wp(struct sme_populate_pgd_data *ppd) { __sme_map_range(ppd, PMD_FLAGS_DEC_WP, PTE_FLAGS_DEC_WP); } -static unsigned long __head sme_pgtable_calc(unsigned long len) +static unsigned long __init sme_pgtable_calc(unsigned long len) { unsigned long entries = 0, tables = 0; @@ -283,7 +281,7 @@ static unsigned long __head sme_pgtable_calc(unsigned long len) return entries + tables; } -void __head sme_encrypt_kernel(struct boot_params *bp) +void __init sme_encrypt_kernel(struct boot_params *bp) { unsigned long workarea_start, workarea_end, workarea_len; unsigned long execute_start, execute_end, execute_len; @@ -299,8 +297,7 @@ void __head sme_encrypt_kernel(struct boot_params *bp) * instrumentation or checking boot_cpu_data in the cc_platform_has() * function. */ - if (!sme_get_me_mask() || - RIP_REL_REF(sev_status) & MSR_AMD64_SEV_ENABLED) + if (!sme_get_me_mask() || sev_status & MSR_AMD64_SEV_ENABLED) return; /* @@ -318,8 +315,8 @@ void __head sme_encrypt_kernel(struct boot_params *bp) * memory from being cached. */ - kernel_start = (unsigned long)RIP_REL_REF(_text); - kernel_end = ALIGN((unsigned long)RIP_REL_REF(_end), PMD_SIZE); + kernel_start = (unsigned long)_text; + kernel_end = ALIGN((unsigned long)_end, PMD_SIZE); kernel_len = kernel_end - kernel_start; initrd_start = 0; @@ -345,7 +342,7 @@ void __head sme_encrypt_kernel(struct boot_params *bp) * pagetable structures for the encryption of the kernel * pagetable structures for workarea (in case not currently mapped) */ - execute_start = workarea_start = (unsigned long)RIP_REL_REF(sme_workarea); + execute_start = workarea_start = (unsigned long)sme_workarea; execute_end = execute_start + (PAGE_SIZE * 2) + PMD_SIZE; execute_len = execute_end - execute_start; @@ -488,7 +485,7 @@ void __head sme_encrypt_kernel(struct boot_params *bp) native_write_cr3(__native_read_cr3()); } -void __head sme_enable(struct boot_params *bp) +void __init sme_enable(struct boot_params *bp) { unsigned int eax, ebx, ecx, edx; unsigned long feature_mask; @@ -526,7 +523,7 @@ void __head sme_enable(struct boot_params *bp) me_mask = 1UL << (ebx & 0x3f); /* Check the SEV MSR whether SEV or SME is enabled */ - RIP_REL_REF(sev_status) = msr = __rdmsr(MSR_AMD64_SEV); + sev_status = msr = __rdmsr(MSR_AMD64_SEV); feature_mask = (msr & MSR_AMD64_SEV_ENABLED) ? 
AMD_SEV_BIT : AMD_SME_BIT; /* @@ -562,8 +559,8 @@ void __head sme_enable(struct boot_params *bp) return; } - RIP_REL_REF(sme_me_mask) = me_mask; - RIP_REL_REF(physical_mask) &= ~me_mask; - RIP_REL_REF(cc_vendor) = CC_VENDOR_AMD; + sme_me_mask = me_mask; + physical_mask &= ~me_mask; + cc_vendor = CC_VENDOR_AMD; cc_set_mask(me_mask); } diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 1530ee301dfe..ea6494628cb0 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -61,7 +61,7 @@ void __init sev_es_init_vc_handling(void); static inline u64 sme_get_me_mask(void) { - return RIP_REL_REF(sme_me_mask); + return sme_me_mask; } #define __bss_decrypted __section(".bss..decrypted") diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index 690fbf48e853..9cbb18c99adb 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -3,12 +3,10 @@ KCOV_INSTRUMENT_tlb.o := n KCOV_INSTRUMENT_mem_encrypt.o := n KCOV_INSTRUMENT_mem_encrypt_amd.o := n -KCOV_INSTRUMENT_mem_encrypt_identity.o := n KCOV_INSTRUMENT_pgprot.o := n KASAN_SANITIZE_mem_encrypt.o := n KASAN_SANITIZE_mem_encrypt_amd.o := n -KASAN_SANITIZE_mem_encrypt_identity.o := n KASAN_SANITIZE_pgprot.o := n # Disable KCSAN entirely, because otherwise we get warnings that some functions @@ -16,12 +14,10 @@ KASAN_SANITIZE_pgprot.o := n KCSAN_SANITIZE := n # Avoid recursion by not calling KMSAN hooks for CEA code. KMSAN_SANITIZE_cpu_entry_area.o := n -KMSAN_SANITIZE_mem_encrypt_identity.o := n ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_mem_encrypt.o = -pg CFLAGS_REMOVE_mem_encrypt_amd.o = -pg -CFLAGS_REMOVE_mem_encrypt_identity.o = -pg CFLAGS_REMOVE_pgprot.o = -pg endif @@ -32,7 +28,6 @@ obj-y += pat/ # Make sure __phys_addr has no stackprotector CFLAGS_physaddr.o := -fno-stack-protector -CFLAGS_mem_encrypt_identity.o := -fno-stack-protector CFLAGS_fault.o := -I $(src)/../include/asm/trace @@ -65,5 +60,4 @@ obj-$(CONFIG_MITIGATION_PAGE_TABLE_ISOLATION) += pti.o obj-$(CONFIG_X86_MEM_ENCRYPT) += mem_encrypt.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_amd.o -obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_identity.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_boot.o
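
The RIP_REL_REF() removals in the hunks above follow from the relocation
described in the commit message: once the code lives under
arch/x86/boot/startup/, it is expected to be built so that every symbol
reference is already safe to execute from the 1:1 mapping, which is why the
explicit annotations can go away. As a rough illustration of what the macro
buys code that still runs from the 1:1 mapping: a global reached through an
absolute virtual address is not mapped yet at that point, while a
RIP-relative reference resolves relative to wherever the code actually
executes. The sketch below is a simplified user-space model, not the
kernel's definition; the macro name is reused only for readability, the
global is a stand-in, and it assumes a non-PIE build
(gcc -O2 -fno-pie -no-pie, x86-64):

#include <stdio.h>

static unsigned long sme_me_mask_demo;  /* stand-in for a startup-time global */

/*
 * Force the reference to 'var' through a RIP-relative LEA, so the address
 * is computed relative to the executing code rather than taken from an
 * absolute relocation.  Simplified model for illustration only.
 */
#define RIP_REL_REF(var) (*({                                              \
        typeof(&(var)) __ptr;                                              \
        asm ("leaq %c1(%%rip), %0" : "=r" (__ptr) : "i" (&(var)));         \
        __ptr;                                                             \
}))

int main(void)
{
        RIP_REL_REF(sme_me_mask_demo) = 1UL << 47;      /* write via RIP-relative address */
        printf("sme_me_mask_demo = %#lx\n", sme_me_mask_demo);
        return 0;
}

Building the same file as position-independent code already makes the
compiler emit RIP-relative (or GOT-based) accesses on its own, which is the
effect the startup/ build presumably guarantees for these objects.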