From patchwork Mon May 12 19:08:41 2025
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 889506
Date: Mon, 12 May 2025 21:08:41 +0200
In-Reply-To: <20250512190834.332684-23-ardb+git@google.com>
X-Mailing-List: linux-efi@vger.kernel.org
References: <20250512190834.332684-23-ardb+git@google.com>
Message-ID: <20250512190834.332684-29-ardb+git@google.com>
Subject: [RFT PATCH v3 06/21] x86/sev: Avoid global variable to store virtual address of SVSM area
From: Ard Biesheuvel
To: linux-kernel@vger.kernel.org
Cc: linux-efi@vger.kernel.org, x86@kernel.org, Ard Biesheuvel, Borislav Petkov, Ingo Molnar, Dionna Amalie Glaze, Kevin Loughlin, Tom Lendacky

The boottime SVSM calling area is used both by the startup code running
from a 1:1 mapping, and potentially later on running from the ordinary
kernel mapping.

This SVSM calling area is statically allocated, and so its physical
address doesn't change. However, its virtual address depends on the
calling context (1:1 mapping or kernel virtual mapping), and even though
the variable that holds the virtual address of this calling area gets
updated from the 1:1 address to the kernel address during boot, it is
hard to reason about why this is guaranteed to be safe.

So instead, take the RIP-relative address of the boottime SVSM calling
area whenever its virtual address is required, and only use a global
variable for the physical address.
Signed-off-by: Ard Biesheuvel
---
 arch/x86/boot/compressed/sev.c      | 5 ++---
 arch/x86/boot/startup/sev-shared.c  | 6 ------
 arch/x86/boot/startup/sev-startup.c | 4 ++--
 arch/x86/coco/sev/core.c            | 9 ---------
 arch/x86/include/asm/sev-internal.h | 3 +--
 arch/x86/include/asm/sev.h          | 2 --
 arch/x86/mm/mem_encrypt_amd.c       | 6 ------
 7 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 2141936daba7..70c3f4fc4349 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -37,12 +37,12 @@ struct ghcb *boot_ghcb;
 
 #define __BOOT_COMPRESSED
 
-extern struct svsm_ca *boot_svsm_caa;
 extern u64 boot_svsm_caa_pa;
 
 struct svsm_ca *svsm_get_caa(void)
 {
-	return boot_svsm_caa;
+	/* The decompressor is mapped 1:1 so VA == PA */
+	return (struct svsm_ca *)boot_svsm_caa_pa;
 }
 
 u64 svsm_get_caa_pa(void)
@@ -530,7 +530,6 @@ bool early_is_sevsnp_guest(void)
 
 		/* Obtain the address of the calling area to use */
 		boot_rdmsr(MSR_SVSM_CAA, &m);
-		boot_svsm_caa = (void *)m.q;
 		boot_svsm_caa_pa = m.q;
 
 		/*
diff --git a/arch/x86/boot/startup/sev-shared.c b/arch/x86/boot/startup/sev-shared.c
index 7884884c0898..9e0573aa29c1 100644
--- a/arch/x86/boot/startup/sev-shared.c
+++ b/arch/x86/boot/startup/sev-shared.c
@@ -26,7 +26,6 @@
  * early boot, both with identity mapped virtual addresses and proper kernel
  * virtual addresses.
  */
-struct svsm_ca *boot_svsm_caa __ro_after_init;
 u64 boot_svsm_caa_pa __ro_after_init;
 
 /*
@@ -648,11 +647,6 @@ static bool __head svsm_setup_ca(const struct cc_blob_sev_info *cc_info,
 	if (caa & (PAGE_SIZE - 1))
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SVSM_CAA);
 
-	/*
-	 * The CA is identity mapped when this routine is called, both by the
-	 * decompressor code and the early kernel code.
-	 */
-	boot_svsm_caa = (struct svsm_ca *)caa;
 	boot_svsm_caa_pa = caa;
 
 	/* Advertise the SVSM presence via CPUID. */
diff --git a/arch/x86/boot/startup/sev-startup.c b/arch/x86/boot/startup/sev-startup.c
index 0000885dc24c..24e7082e1a50 100644
--- a/arch/x86/boot/startup/sev-startup.c
+++ b/arch/x86/boot/startup/sev-startup.c
@@ -252,6 +252,7 @@ static __head struct cc_blob_sev_info *find_cc_blob(struct boot_params *bp)
 
 static __head void svsm_setup(struct cc_blob_sev_info *cc_info)
 {
+	struct snp_secrets_page *secrets = (void *)cc_info->secrets_phys;
 	struct svsm_call call = {};
 	int ret;
 	u64 pa;
@@ -280,7 +281,7 @@ static __head void svsm_setup(struct cc_blob_sev_info *cc_info)
 	 * RAX = 0 (Protocol=0, CallID=0)
 	 * RCX = New CA GPA
 	 */
-	call.caa = svsm_get_caa();
+	call.caa = (struct svsm_ca *)secrets->svsm_caa;
 	call.rax = SVSM_CORE_CALL(SVSM_CORE_REMAP_CA);
 	call.rcx = pa;
 	do {
@@ -289,7 +290,6 @@ static __head void svsm_setup(struct cc_blob_sev_info *cc_info)
 	if (ret)
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SVSM_CA_REMAP_FAIL);
 
-	boot_svsm_caa = (struct svsm_ca *)pa;
 	boot_svsm_caa_pa = pa;
 }
 
diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index 310d867be4dc..0e0ddf4c92aa 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -1496,15 +1496,6 @@ void sev_show_status(void)
 	pr_cont("\n");
 }
 
-void __init snp_update_svsm_ca(void)
-{
-	if (!snp_vmpl)
-		return;
-
-	/* Update the CAA to a proper kernel address */
-	boot_svsm_caa = &boot_svsm_ca_page;
-}
-
 #ifdef CONFIG_SYSFS
 static ssize_t vmpl_show(struct kobject *kobj,
 			 struct kobj_attribute *attr, char *buf)
diff --git a/arch/x86/include/asm/sev-internal.h b/arch/x86/include/asm/sev-internal.h
index 4269d9dbefdf..e3b203c280aa 100644
--- a/arch/x86/include/asm/sev-internal.h
+++ b/arch/x86/include/asm/sev-internal.h
@@ -60,7 +60,6 @@ void early_set_pages_state(unsigned long vaddr, unsigned long paddr,
 DECLARE_PER_CPU(struct svsm_ca *, svsm_caa);
 DECLARE_PER_CPU(u64, svsm_caa_pa);
 
-extern struct svsm_ca *boot_svsm_caa;
 extern u64 boot_svsm_caa_pa;
 
 static __always_inline struct svsm_ca *svsm_get_caa(void)
@@ -68,7 +67,7 @@ static __always_inline struct svsm_ca *svsm_get_caa(void)
 	if (sev_cfg.use_cas)
 		return this_cpu_read(svsm_caa);
 	else
-		return boot_svsm_caa;
+		return rip_rel_ptr(&boot_svsm_ca_page);
 }
 
 static __always_inline u64 svsm_get_caa_pa(void)
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 07081bb85331..ae2502253bd3 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -501,7 +501,6 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end);
 u64 snp_get_unsupported_features(u64 status);
 u64 sev_get_status(void);
 void sev_show_status(void);
-void snp_update_svsm_ca(void);
 int prepare_pte_enc(struct pte_enc_desc *d);
 void set_pte_enc_mask(pte_t *kpte, unsigned long pfn, pgprot_t new_prot);
 void snp_kexec_finish(void);
@@ -629,7 +628,6 @@ static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
 static inline u64 snp_get_unsupported_features(u64 status) { return 0; }
 static inline u64 sev_get_status(void) { return 0; }
 static inline void sev_show_status(void) { }
-static inline void snp_update_svsm_ca(void) { }
 static inline int prepare_pte_enc(struct pte_enc_desc *d) { return 0; }
 static inline void set_pte_enc_mask(pte_t *kpte, unsigned long pfn, pgprot_t new_prot) { }
 static inline void snp_kexec_finish(void) { }
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index faf3a13fb6ba..2f8c32173972 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -536,12 +536,6 @@ void __init sme_early_init(void)
 		x86_init.resources.dmi_setup = snp_dmi_setup;
 	}
 
-	/*
-	 * Switch the SVSM CA mapping (if active) from identity mapped to
-	 * kernel mapped.
-	 */
-	snp_update_svsm_ca();
-
 	if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
 		setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
 }