[Part1,v5,32/38] x86/sev: enable SEV-SNP-validated CPUID in #VC handlers

Message ID 20210820151933.22401-33-brijesh.singh@amd.com
State New
Series [Part1,v5,01/38] x86/mm: Add sev_feature_enabled() helper

Commit Message

Brijesh Singh Aug. 20, 2021, 3:19 p.m. UTC
From: Michael Roth <michael.roth@amd.com>

This adds support for utilizing the SEV-SNP-validated CPUID table in
the various #VC handler routines used throughout boot/run-time. Mostly
this is handled by re-using the CPUID lookup code introduced earlier
for the boot/compressed kernel, but at various stages of boot some work
needs to be done to ensure the CPUID table is set up and remains
accessible throughout. The following init routines are introduced to
handle this:

sev_snp_cpuid_init():

  This sets up access to the CPUID memory range for the #VC handler
  that gets set up just after entry to startup_64(). Since the code is
  still using an identity mapping, the existing sev_snp_cpuid_init()
  used by boot/compressed is used here as well, but annotated as __init
  so it can be cleaned up later (boot/compressed/sev.c already defines
  away __init when it pulls in shared SEV code). The boot/compressed
  kernel handles any necessary lookup of the ConfidentialComputing blob
  from EFI and puts it into boot_params if present, so only boot_params
  needs to be checked.

sev_snp_cpuid_init_virtual():

  This is called when the previous identity mapping is gone and the
  memory used for the CPUID memory range needs to be mapped into the
  new page table with the encryption bit set and accessed via __va().

  Since this path is also entered later by APs to set up their initial
  VC handlers, a function pointer is used to switch them to a handler
  that doesn't attempt to re-initialize the SNP CPUID feature, as at
  that point it will have already been set up.

sev_snp_cpuid_init_remap_early():

  This is called when the previous mapping of the CPUID memory range is
  no longer present. early_memremap() is now available, so use that to
  create a new one that can be used until memremap() is available.

sev_snp_cpuid_init_remap():

  This switches from early_memremap() to ioremap_encrypted() for mapping
  the CPUID memory range; otherwise the leak detector will complain. This
  mapping is what gets used for the remaining life of the guest.
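
For reference, the resulting call flow (as wired up in the diff below) is
roughly:

  startup_64_setup_env()                  /* BSP, identity-mapped */
      startup_64_load_idt()
          sev_snp_cpuid_init(bp)
  early_setup_idt()                       /* BSP, after switch to virtual addresses */
      sev_snp_cpuid_init_virtual()
      early_setup_idt_common()
  setup_arch()
      init_mem_mapping()
      sev_snp_cpuid_init_remap_early()    /* early_memremap() now usable */
  sev_snp_cpuid_init_remap()              /* arch_initcall(), ioremap_encrypted() */

  APs: initial_idt_setup -> early_setup_idt_common()  /* no CPUID re-init */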

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/realmode.h |  1 +
 arch/x86/include/asm/setup.h    |  5 +-
 arch/x86/include/asm/sev.h      |  6 +++
 arch/x86/kernel/head64.c        | 21 ++++++--
 arch/x86/kernel/head_64.S       |  6 ++-
 arch/x86/kernel/setup.c         |  3 ++
 arch/x86/kernel/sev-shared.c    | 95 ++++++++++++++++++++++++++++++++-
 arch/x86/kernel/smpboot.c       |  2 +
 8 files changed, 129 insertions(+), 10 deletions(-)

Comments

Borislav Petkov Aug. 27, 2021, 3:18 p.m. UTC | #1
On Fri, Aug 20, 2021 at 10:19:27AM -0500, Brijesh Singh wrote:
> From: Michael Roth <michael.roth@amd.com>
> 
> This adds support for utilizing the SEV-SNP-validated CPUID table in

s/This adds support for utilizing/Utilize/

Yap, it can really be that simple. :)

> the various #VC handler routines used throughout boot/run-time. Mostly
> this is handled by re-using the CPUID lookup code introduced earlier
> for the boot/compressed kernel, but at various stages of boot some work
> needs to be done to ensure the CPUID table is set up and remains
> accessible throughout. The following init routines are introduced to
> handle this:

Do not talk about what your patch does - that should hopefully be
visible in the diff itself. Rather, talk about *why* you're doing what
you're doing.

> sev_snp_cpuid_init():

This one is not really introduced - it is already there.

<snip all the complex rest>

So this patch is making my head spin. It seems we're dancing a lot of
dance just to have our CPUID page present at all times. Which begs the
question: do we need it during the whole lifetime of the guest?

Regardless, I think this can be simplified by orders of
magnitude if we allocated statically 4K for that CPUID page in
arch/x86/boot/compressed/mem_encrypt.S, copied the supplied CPUID page
from the firmware to it and from now on, work with our own copy.

You probably would need to still remap it for kernel proper but it would
get rid of all that crazy in this patch here.

Hmmm?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Brijesh Singh Aug. 27, 2021, 3:47 p.m. UTC | #2
On 8/27/21 10:18 AM, Borislav Petkov wrote:
> On Fri, Aug 20, 2021 at 10:19:27AM -0500, Brijesh Singh wrote:
>> From: Michael Roth <michael.roth@amd.com>
>>
>> This adds support for utilizing the SEV-SNP-validated CPUID table in
> s/This adds support for utilizing/Utilize/
>
> Yap, it can really be that simple. :)
>
>> the various #VC handler routines used throughout boot/run-time. Mostly
>> this is handled by re-using the CPUID lookup code introduced earlier
>> for the boot/compressed kernel, but at various stages of boot some work
>> needs to be done to ensure the CPUID table is set up and remains
>> accessible throughout. The following init routines are introduced to
>> handle this:
> Do not talk about what your patch does - that should hopefully be
> visible in the diff itself. Rather, talk about *why* you're doing what
> you're doing.
>
>> sev_snp_cpuid_init():
> This one is not really introduced - it is already there.
>
> <snip all the complex rest>
>
> So this patch is making my head spin. It seems we're dancing a lot of
> dance just to have our CPUID page present at all times. Which begs the
> question: do we need it during the whole lifetime of the guest?

Mike can correct me, but we need it for the entire lifetime of the guest.
Whenever the guest needs a CPUID value, the #VC handler will refer to this
page.
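
Roughly, that lookup boils down to scanning the validated entries, e.g.
(purely illustrative -- the helper and field names below may not match
the actual ones from the earlier patches in the series):

  static bool snp_cpuid_lookup(u32 func, u32 subfunc, u32 *eax, u32 *ebx,
                               u32 *ecx, u32 *edx)
  {
          u32 i;

          for (i = 0; i < cpuid_info->count; i++) {
                  const struct sev_snp_cpuid_fn *fn = &cpuid_info->fn[i];

                  if (fn->eax_in != func || fn->ecx_in != subfunc)
                          continue;

                  *eax = fn->eax;
                  *ebx = fn->ebx;
                  *ecx = fn->ecx;
                  *edx = fn->edx;
                  return true;
          }

          return false;   /* not in the table */
  }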


> Regardless, I think this can be simplified by orders of
> magnitude if we allocated statically 4K for that CPUID page in
> arch/x86/boot/compressed/mem_encrypt.S, copied the supplied CPUID page
> from the firmware to it and from now on, work with our own copy.

Actually a VMM could populate more than one page for the CPUID. One
page can include 64 entries, and I believe Mike is already running into
limits (with QEMU) and exploring ideas to extend it to more than a page.
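
For scale, a rough sizing sketch (the entry layout here is assumed from
the GHCB spec, not taken from this patch):

  page header: count (4 bytes) + reserved (12 bytes)       =   16 bytes
  per entry:   eax_in/ecx_in (8) + xcr0_in/xss_in (16)
               + eax/ebx/ecx/edx (16) + reserved (8)       =   48 bytes
  64 entries:  16 + 64 * 48                                = 3088 bytes, i.e. one 4K page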


> You probably would need to still remap it for kernel proper but it would
> get rid of all that crazy in this patch here.
>
> Hmmm?
>
Borislav Petkov Aug. 27, 2021, 4:56 p.m. UTC | #3
On Fri, Aug 27, 2021 at 10:47:42AM -0500, Brijesh Singh wrote:
> Actually a VMM could populate more than one page for the CPUID. One
> page can include 64 entries, and I believe Mike is already running into
> limits (with QEMU) and exploring ideas to extend it to more than a page.

You mean, like, 2 pages?

:-)

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Michael Roth Aug. 27, 2021, 6:32 p.m. UTC | #4
On Fri, Aug 27, 2021 at 05:18:49PM +0200, Borislav Petkov wrote:
> On Fri, Aug 20, 2021 at 10:19:27AM -0500, Brijesh Singh wrote:
> > From: Michael Roth <michael.roth@amd.com>
> > 
> > This adds support for utilizing the SEV-SNP-validated CPUID table in
> 
> s/This adds support for utilizing/Utilize/
> 
> Yap, it can really be that simple. :)
> 
> > the various #VC handler routines used throughout boot/run-time. Mostly
> > this is handled by re-using the CPUID lookup code introduced earlier
> > for the boot/compressed kernel, but at various stages of boot some work
> > needs to be done to ensure the CPUID table is set up and remains
> > accessible throughout. The following init routines are introduced to
> > handle this:
> 
> Do not talk about what your patch does - that should hopefully be
> visible in the diff itself. Rather, talk about *why* you're doing what
> you're doing.

I'll get this cleaned up.

> 
> > sev_snp_cpuid_init():
> 
> This one is not really introduced - it is already there.
> 
> <snip all the complex rest>
> 
> So this patch is making my head spin. It seems we're dancing a lot of
> dance just to have our CPUID page present at all times. Which begs the
> question: do we need it during the whole lifetime of the guest?
> 
> Regardless, I think this can be simplified by orders of
> magnitude if we allocated statically 4K for that CPUID page in
> arch/x86/boot/compressed/mem_encrypt.S, copied the supplied CPUID page
> from the firmware to it and from now on, work with our own copy.

That makes sense. I was thinking it was safer to work with the FW page
since it would be less susceptible to something like a buffer overflow
modifying the CPUID table, but __ro_after_init seems like it would
provide similar protections. And yes, would definitely be great to avoid
the need for so many [re-]init routines.

> 
> You probably would need to still remap it for kernel proper but it would
> get rid of all that crazy in this patch here.
> 
> Hmmm?

If the memory is allocated in boot/compressed/mem_encrypt.S, wouldn't
kernel proper still need to create a static buffer for its copy? And if
not, wouldn't boot compressed still need a way to pass the PA of this
buffer? That seems like it would need to be done via boot_params. It
seems like it would also need to be marked as reserved as well since
kernel proper could no longer rely on the EFI map to handle it.

I've been testing a similar approach based on your suggestion that seems
to work out pretty well, but there's still some ugliness due to the
fixup_pointer() stuff that's needed early during snp_cpuid_init() in
kernel proper, which results in the need for 2 init routines there. Not
sure if there's a better way to handle it, but it's a lot better than 4
init routines at least, and with this there is no longer any need to
store the address/size of the FW page:

in arch/x86/kernel/sev-shared.c:

/* Firmware-enforced limit on CPUID table entries */
#define SNP_CPUID_COUNT_MAX 64

struct sev_snp_cpuid_info {
        u32 count;
        u32 __reserved1;
        u64 __reserved2;
        struct sev_snp_cpuid_fn fn[SNP_CPUID_COUNT_MAX];
} __packed;

static struct snp_cpuid_info cpuid_info_copy __ro_after_init;
static const struct snp_cpuid_info *cpuid_info __ro_after_init;
static int sev_snp_cpuid_enabled __ro_after_init;

/*
 * Initial set up of CPUID table when running identity-mapped.
 */
#ifdef __BOOT_COMPRESSED
void sev_snp_cpuid_init(struct boot_params *bp)
#else
void __init sev_snp_cpuid_init(struct boot_params *bp, unsigned long physaddr)
#endif
{
        const struct sev_snp_cpuid_info *cpuid_info_fw;

        cpuid_info_fw = snp_probe_cpuid_info(bp);
        if (!cpuid_info_fw)
                return;

#ifdef __BOOT_COMPRESSED
        cpuid_info2 = &cpuid_info_copy;
#else
        /* Kernel proper calls this while pointer fixups are still needed. */
        cpuid_info2 = (const struct sev_snp_cpuid_info *)
                       ((void *)&cpuid_info_copy - (void *)_text + physaddr);
#endif
        memcpy((struct sev_snp_cpuid_info *)cpuid_info2, cpuid_info_fw,
               sizeof(*cpuid_info2));

        sev_snp_cpuid_enabled = 1;
}

#ifndef __BOOT_COMPRESSED
/*
 * This is called after the switch to virtual kernel addresses. At this
 * point pointer fixups are no longer needed, and the virtual address of
 * the CPUID info buffer has changed, so re-initialize the pointer.
 */
void __init sev_snp_cpuid_init_virtual(void)
{
        /*
         * sev_snp_cpuid_init() already did the initial parsing of bootparams
         * and initial setup. If that didn't enable the feature then don't try
         * to enable it here.
         */
        if (!sev_snp_cpuid_active())
                return;

        /*
         * Either boot_params/EFI advertised the feature even though SNP isn't
         * enabled, or something else went wrong. Bail out.
         */
        if (!sev_feature_enabled(SEV_SNP))
                sev_es_terminate(1, GHCB_TERM_CPUID);

        cpuid_info = &cpuid_info_copy;
}
#endif

Then the rest of the code just accesses cpuid_info directly as it does now.

Would that be a reasonable approach for v6?

> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette
Michael Roth Aug. 27, 2021, 6:39 p.m. UTC | #5
On Fri, Aug 27, 2021 at 10:47:42AM -0500, Brijesh Singh wrote:
> 
> On 8/27/21 10:18 AM, Borislav Petkov wrote:
> > On Fri, Aug 20, 2021 at 10:19:27AM -0500, Brijesh Singh wrote:
> >> From: Michael Roth <michael.roth@amd.com>
> >>
> >> This adds support for utilizing the SEV-SNP-validated CPUID table in
> > s/This adds support for utilizing/Utilize/
> >
> > Yap, it can really be that simple. :)
> >
> >> the various #VC handler routines used throughout boot/run-time. Mostly
> >> this is handled by re-using the CPUID lookup code introduced earlier
> >> for the boot/compressed kernel, but at various stages of boot some work
> >> needs to be done to ensure the CPUID table is set up and remains
> >> accessible throughout. The following init routines are introduced to
> >> handle this:
> > Do not talk about what your patch does - that should hopefully be
> > visible in the diff itself. Rather, talk about *why* you're doing what
> > you're doing.
> >
> >> sev_snp_cpuid_init():
> > This one is not really introduced - it is already there.
> >
> > <snip all the complex rest>
> >
> > So this patch is making my head spin. It seems we're dancing a lot of
> > dance just to have our CPUID page present at all times. Which begs the
> > question: do we need it during the whole lifetime of the guest?
> 
> Mike can correct me, but we need it for the entire lifetime of the guest.
> Whenever the guest needs a CPUID value, the #VC handler will refer to this
> page.

That's right, and cpuid instructions can get introduced at pretty much
every stage of the boot process.

> 
> 
> > Regardless, I think this can be simplified by orders of
> > magnitude if we allocated statically 4K for that CPUID page in
> > arch/x86/boot/compressed/mem_encrypt.S, copied the supplied CPUID page
> > from the firmware to it and from now on, work with our own copy.
> 
> Actually a VMM could populate more than one page for the CPUID. One
> page can include 64 entries, and I believe Mike is already running into
> limits (with QEMU) and exploring ideas to extend it to more than a page.

I added the range checks in this version so that a hypervisor can still
leave out all-zero entries, so I think it can be avoided near-term at
least. But yes, it's still a possibility we might need an extra page in
the future; I'm not sure how scarce storage is for stuff like
__ro_after_init, so it's worth considering.
Michael Roth Aug. 30, 2021, 4:03 p.m. UTC | #6
On Fri, Aug 27, 2021 at 01:32:40PM -0500, Michael Roth wrote:
> On Fri, Aug 27, 2021 at 05:18:49PM +0200, Borislav Petkov wrote:
> > On Fri, Aug 20, 2021 at 10:19:27AM -0500, Brijesh Singh wrote:
> > > From: Michael Roth <michael.roth@amd.com>
> > > 
> > > This adds support for utilizing the SEV-SNP-validated CPUID table in
> > 
> > s/This adds support for utilizing/Utilize/
> > 
> > Yap, it can really be that simple. :)
> > 
> > > the various #VC handler routines used throughout boot/run-time. Mostly
> > > this is handled by re-using the CPUID lookup code introduced earlier
> > > for the boot/compressed kernel, but at various stages of boot some work
> > > needs to be done to ensure the CPUID table is set up and remains
> > > accessible throughout. The following init routines are introduced to
> > > handle this:
> > 
> > Do not talk about what your patch does - that should hopefully be
> > visible in the diff itself. Rather, talk about *why* you're doing what
> > you're doing.
> 
> I'll get this cleaned up.
> 
> > 
> > > sev_snp_cpuid_init():
> > 
> > This one is not really introduced - it is already there.
> > 
> > <snip all the complex rest>
> > 
> > So this patch is making my head spin. It seems we're dancing a lot of
> > dance just to have our CPUID page present at all times. Which begs the
> > question: do we need it during the whole lifetime of the guest?
> > 
> > Regardless, I think this can be simplified by orders of
> > magnitude if we allocated statically 4K for that CPUID page in
> > arch/x86/boot/compressed/mem_encrypt.S, copied the supplied CPUID page
> > from the firmware to it and from now on, work with our own copy.
> 
> That makes sense. I was thinking it was safer to work with the FW page
> since it would be less susceptible to something like a buffer overflow
> modifying the CPUID table, but __ro_after_init seems like it would
> provide similar protections. And yes, would definitely be great to avoid
> the need for so many [re-]init routines.
> 
> > 
> > You probably would need to still remap it for kernel proper but it would
> > get rid of all that crazy in this patch here.
> > 
> > Hmmm?
> 
> If the memory is allocated in boot/compressed/mem_encrypt.S, wouldn't
> kernel proper still need to create a static buffer for its copy? And if
> not, wouldn't boot compressed still need a way to pass the PA of this
> buffer? That seems like it would need to be done via boot_params. It
> seems like it would also need to be marked as reserved as well since
> kernel proper could no longer rely on the EFI map to handle it.
> 
> I've been testing a similar approach based on your suggestion that seems
> to work out pretty well, but there's still some ugliness due to the
> fixup_pointer() stuff that's needed early during snp_cpuid_init() in
> kernel proper, which results in the need for 2 init routines there. Not
> sure if there's a better way to handle it, but it's a lot better than 4
> init routines at least, and with this there is no longer any need to
> store the address/size of the FW page:
> 
> in arch/x86/kernel/sev-shared.c:
> 
> /* Firmware-enforced limit on CPUID table entries */
> #define SNP_CPUID_COUNT_MAX 64
> 
> struct sev_snp_cpuid_info {
>         u32 count;
>         u32 __reserved1;
>         u64 __reserved2;
>         struct sev_snp_cpuid_fn fn[SNP_CPUID_COUNT_MAX];
> } __packed;
> 
> static struct snp_cpuid_info cpuid_info_copy __ro_after_init;
> static const struct snp_cpuid_info *cpuid_info __ro_after_init;
> static int sev_snp_cpuid_enabled __ro_after_init;
> 
> /*
>  * Initial set up of CPUID table when running identity-mapped.
>  */
> #ifdef __BOOT_COMPRESSED
> void sev_snp_cpuid_init(struct boot_params *bp)
> #else
> void __init sev_snp_cpuid_init(struct boot_params *bp, unsigned long physaddr)
> #endif
> {
>         const struct sev_snp_cpuid_info *cpuid_info_fw;
> 
>         cpuid_info_fw = snp_probe_cpuid_info(bp);
>         if (!cpuid_info_fw)
>                 return;
> 
> #ifdef __BOOT_COMPRESSED
>         cpuid_info2 = &cpuid_info_copy;
> #else
>         /* Kernel proper calls this while pointer fixups are still needed. */
>         cpuid_info2 = (const struct sev_snp_cpuid_info *)
>                        ((void *)&cpuid_info_copy - (void *)_text + physaddr);
> #endif
>         memcpy((struct sev_snp_cpuid_info *)cpuid_info2, cpuid_info_fw,
>                sizeof(*cpuid_info2));

These should be cpuid_info, not cpuid_info2.
Borislav Petkov Aug. 31, 2021, 4:22 p.m. UTC | #7
On Fri, Aug 27, 2021 at 01:32:40PM -0500, Michael Roth wrote:
> If the memory is allocated in boot/compressed/mem_encrypt.S, wouldn't
> kernel proper still need to create a static buffer for its copy?

Just like the other variables like sme_me_mask etc that file allocates
at the bottom. Or do you have a better idea?

> Would that be a reasonable approach for v6?

I don't like the ifdeffery one bit, TBH. I guess you should split it
and have a boot/compressed page and a kernel proper one and keep 'em
separate. That should make everything nice and clean at the cost of 2*4K
which is nothing nowadays.
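
I.e., something like this (sketch only, reusing the struct from the
snippet upthread):

  /* arch/x86/boot/compressed/sev.c */
  static struct sev_snp_cpuid_info cpuid_info_copy;

  /* arch/x86/kernel/sev.c */
  static struct sev_snp_cpuid_info cpuid_info_copy __ro_after_init;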

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Michael Roth Sept. 1, 2021, 1:16 a.m. UTC | #8
On Tue, Aug 31, 2021 at 06:22:44PM +0200, Borislav Petkov wrote:
> On Fri, Aug 27, 2021 at 01:32:40PM -0500, Michael Roth wrote:
> > If the memory is allocated in boot/compressed/mem_encrypt.S, wouldn't
> > kernel proper still need to create a static buffer for its copy?
> 
> Just like the other variables like sme_me_mask etc that file allocates
> at the bottom. Or do you have a better idea?

What did you think of the suggestion of defining it in sev-shared.c
as a static buffer/struct marked __ro_after_init? It would be nice to
declare/reserve the memory in one place. Another benefit is it doesn't
need to be exported, and could just be local with all the other
snp_cpuid* helpers that access it in sev-shared.c.

> 
> > Would that be a reasonable approach for v6?
> 
> I don't like the ifdeffery one bit, TBH. I guess you should split it
> and have a boot/compressed page and a kernel proper one and keep 'em
> separate. That should make everything nice and clean at the cost of 2*4K
> which is nothing nowadays.

I think I can address the ifdeffery by splitting the boot/proper routines
into separate self-contained routines (and maybe move them out into
boot/compressed/sev.c and kernel/sev.c, respectively), then having them
just initialize the table pointer and create the copy using a common setter
function, e.g.

  snp_cpuid_table_create(cc_blob, fixup_offset)

and for boot/compressed, fixup_offset would just be passed in as 0.
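
Rough sketch of what I have in mind (untested; the cc_blob field name is a
placeholder, and the statics are the ones from my earlier snippet):

  static void snp_cpuid_table_create(const struct cc_blob_sev_info *cc_info,
                                     unsigned long fixup_offset)
  {
          const struct sev_snp_cpuid_info *fw;
          struct sev_snp_cpuid_info *copy;

          /* placeholder for however the blob exposes the table's location */
          fw = (const struct sev_snp_cpuid_info *)(unsigned long)cc_info->cpuid_phys;

          /* boot/compressed passes 0, kernel proper its phys-virt delta */
          copy = (struct sev_snp_cpuid_info *)((void *)&cpuid_info_copy + fixup_offset);

          memcpy(copy, fw, sizeof(*copy));

          cpuid_info = copy;
          sev_snp_cpuid_enabled = 1;
  }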

> 
> Thx.
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette
Borislav Petkov Sept. 2, 2021, 11:05 a.m. UTC | #9
On Tue, Aug 31, 2021 at 08:16:58PM -0500, Michael Roth wrote:
> What did you think of the suggestion of defining it in sev-shared.c
> as a static buffer/struct marked __ro_after_init? It would be nice to
> declare/reserve the memory in one place. Another benefit is it doesn't
> need to be exported, and could just be local with all the other
> snp_cpuid* helpers that access it in sev-shared.c.

Yap.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Patch

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 5db5d083c873..ff0eecee4235 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -63,6 +63,7 @@  extern unsigned long initial_stack;
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 extern unsigned long initial_vc_handler;
 #endif
+extern unsigned long initial_idt_setup;
 
 extern unsigned char real_mode_blob[];
 extern unsigned char real_mode_relocs[];
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index a12458a7a8d4..12fc52894ad8 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -50,8 +50,9 @@  extern void reserve_standard_io_resources(void);
 extern void i386_reserve_resources(void);
 extern unsigned long __startup_64(unsigned long physaddr, struct boot_params *bp);
 extern unsigned long __startup_secondary_64(void);
-extern void startup_64_setup_env(unsigned long physbase);
-extern void early_setup_idt(void);
+extern void startup_64_setup_env(unsigned long physbase, struct boot_params *bp);
+extern void early_setup_idt_common(void *rmode);
+extern void __init early_setup_idt(void *rmode);
 extern void __init do_early_exception(struct pt_regs *regs, int trapnr);
 
 #ifdef CONFIG_X86_INTEL_MID
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 345740aa5559..a5f0a1c3ccbe 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -129,6 +129,9 @@  void snp_set_memory_private(unsigned long vaddr, unsigned int npages);
 void snp_set_wakeup_secondary_cpu(void);
 #ifdef __BOOT_COMPRESSED
 bool sev_snp_enabled(void);
+#else
+void sev_snp_cpuid_init_virtual(void);
+void sev_snp_cpuid_init_remap_early(void);
 #endif /* __BOOT_COMPRESSED */
 void sev_snp_cpuid_init(struct boot_params *bp);
 #else
@@ -149,6 +152,9 @@  static inline void snp_set_wakeup_secondary_cpu(void) { }
 static inline void sev_snp_cpuid_init(struct boot_params *bp) { }
 #ifdef __BOOT_COMPRESSED
 static inline bool sev_snp_enabled { return false; }
+#else
+static inline void sev_snp_cpuid_init_virtual(void) { }
+static inline void sev_snp_cpuid_init_remap_early(void) { }
 #endif /*__BOOT_COMPRESSED */
 #endif
 
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index f1b76a54c84e..4700926deb52 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -576,7 +576,7 @@  static void set_bringup_idt_handler(gate_desc *idt, int n, void *handler)
 }
 
 /* This runs while still in the direct mapping */
-static void startup_64_load_idt(unsigned long physbase)
+static void startup_64_load_idt(unsigned long physbase, struct boot_params *bp)
 {
 	struct desc_ptr *desc = fixup_pointer(&bringup_idt_descr, physbase);
 	gate_desc *idt = fixup_pointer(bringup_idt_table, physbase);
@@ -586,6 +586,7 @@  static void startup_64_load_idt(unsigned long physbase)
 		void *handler;
 
 		/* VMM Communication Exception */
+		sev_snp_cpuid_init(bp); /* used by #VC handler */
 		handler = fixup_pointer(vc_no_ghcb, physbase);
 		set_bringup_idt_handler(idt, X86_TRAP_VC, handler);
 	}
@@ -594,8 +595,8 @@  static void startup_64_load_idt(unsigned long physbase)
 	native_load_idt(desc);
 }
 
-/* This is used when running on kernel addresses */
-void early_setup_idt(void)
+/* Used for all CPUs */
+void early_setup_idt_common(void *rmode)
 {
 	/* VMM Communication Exception */
 	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT))
@@ -605,10 +606,20 @@  void early_setup_idt(void)
 	native_load_idt(&bringup_idt_descr);
 }
 
+/* This is used by boot processor when running on kernel addresses */
+void __init early_setup_idt(void *rmode)
+{
+	/* SEV-SNP CPUID setup for use by #VC handler */
+	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT))
+		sev_snp_cpuid_init_virtual();
+
+	early_setup_idt_common(rmode);
+}
+
 /*
  * Setup boot CPU state needed before kernel switches to virtual addresses.
  */
-void __head startup_64_setup_env(unsigned long physbase)
+void __head startup_64_setup_env(unsigned long physbase, struct boot_params *bp)
 {
 	u64 gs_area = (u64)fixup_pointer(startup_gs_area, physbase);
 
@@ -634,5 +645,5 @@  void __head startup_64_setup_env(unsigned long physbase)
 	native_wrmsr(MSR_GS_BASE, gs_area, gs_area >> 32);
 #endif
 
-	startup_64_load_idt(physbase);
+	startup_64_load_idt(physbase, bp);
 }
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index d8b3ebd2bb85..78f35e446498 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -218,7 +218,10 @@  SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 
 	/* Setup and Load IDT */
 	pushq	%rsi
-	call	early_setup_idt
+	movq	%rsi, %rdi
+	movq	initial_idt_setup(%rip), %rax
+	ANNOTATE_RETPOLINE_SAFE
+	call	*%rax
 	popq	%rsi
 
 	/* Check if nx is implemented */
@@ -341,6 +344,7 @@  SYM_DATA(initial_gs,	.quad INIT_PER_CPU_VAR(fixed_percpu_data))
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 SYM_DATA(initial_vc_handler,	.quad handle_vc_boot_ghcb)
 #endif
+SYM_DATA(initial_idt_setup,	.quad early_setup_idt)
 
 /*
  * The FRAME_SIZE gap is a convention which helps the in-kernel unwinder
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index bff3a784aec5..e81fc19657b7 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -48,6 +48,7 @@ 
 #include <asm/thermal.h>
 #include <asm/unwind.h>
 #include <asm/vsyscall.h>
+#include <asm/sev.h>
 #include <linux/vmalloc.h>
 
 /*
@@ -1075,6 +1076,8 @@  void __init setup_arch(char **cmdline_p)
 
 	init_mem_mapping();
 
+	sev_snp_cpuid_init_remap_early();
+
 	idt_setup_early_pf();
 
 	/*
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 6f70ba293c5e..e257df79830c 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -264,7 +264,7 @@  static int sev_cpuid_hv(u32 func, u32 subfunc, u32 *eax, u32 *ebx,
 	return 0;
 }
 
-static bool sev_snp_cpuid_active(void)
+static inline bool sev_snp_cpuid_active(void)
 {
 	return sev_snp_cpuid_enabled;
 }
@@ -960,7 +960,7 @@  static struct cc_blob_sev_info *sev_snp_probe_cc_blob(struct boot_params *bp)
  * indication that SEV-ES is enabled. Subsequent init levels will check for
  * SEV_SNP feature once available to also take SEV MSR value into account.
  */
-void sev_snp_cpuid_init(struct boot_params *bp)
+void __init sev_snp_cpuid_init(struct boot_params *bp)
 {
 	struct cc_blob_sev_info *cc_info;
 
@@ -995,3 +995,94 @@  void sev_snp_cpuid_init(struct boot_params *bp)
 
 	sev_snp_cpuid_enabled = 1;
 }
+
+#ifndef __BOOT_COMPRESSED
+
+static bool __init early_make_pgtable_enc(unsigned long physaddr)
+{
+	pmdval_t pmd;
+
+	/* early_pmd_flags hasn't been updated with SME bit yet; add it */
+	pmd = (physaddr & PMD_MASK) + early_pmd_flags + sme_get_me_mask();
+
+	return __early_make_pgtable((unsigned long)__va(physaddr), pmd);
+}
+
+/*
+ * This is called when we switch to virtual kernel addresses, before #PF
+ * handler is set up. boot_params have already been parsed at this point,
+ * but CPUID page is no longer identity-mapped so we need to create a
+ * virtual mapping.
+ */
+void __init sev_snp_cpuid_init_virtual(void)
+{
+	/*
+	 * We rely on sev_snp_cpuid_init() to do initial parsing of bootparams
+	 * and initial setup. If that didn't enable the feature then don't try
+	 * to enable it here.
+	 */
+	if (!sev_snp_cpuid_active())
+		return;
+
+	/*
+	 * Either boot_params/EFI advertised the feature even though SNP isn't
+	 * enabled, or something else went wrong. Bail out.
+	 */
+	if (!sev_feature_enabled(SEV_SNP))
+		sev_es_terminate(1, GHCB_TERM_CPUID);
+
+	/* If feature is enabled, but we can't map CPUID info, we're hosed */
+	if (!early_make_pgtable_enc(sev_snp_cpuid_pa))
+		sev_es_terminate(1, GHCB_TERM_CPUID);
+
+	cpuid_info = (const struct sev_snp_cpuid_info *)__va(sev_snp_cpuid_pa);
+}
+
+/* Called after early_ioremap_init() */
+void __init sev_snp_cpuid_init_remap_early(void)
+{
+	if (!sev_snp_cpuid_active())
+		return;
+
+	/*
+	 * This really shouldn't be possible at this point.
+	 */
+	if (!sev_feature_enabled(SEV_SNP))
+		sev_es_terminate(1, GHCB_TERM_CPUID);
+
+	cpuid_info = early_memremap(sev_snp_cpuid_pa, sev_snp_cpuid_sz);
+}
+
+/* Final switch to run-time mapping */
+static int __init sev_snp_cpuid_init_remap(void)
+{
+	if (!sev_snp_cpuid_active())
+		return 0;
+
+	pr_info("Using SNP CPUID page, %d entries present.\n", cpuid_info->count);
+
+	/*
+	 * This really shouldn't be possible at this point either.
+	 */
+	if (!sev_feature_enabled(SEV_SNP))
+		sev_es_terminate(1, GHCB_TERM_CPUID);
+
+	/* Clean up earlier mapping. */
+	if (cpuid_info)
+		early_memunmap((void *)cpuid_info, sev_snp_cpuid_sz);
+
+	/*
+	 * We need ioremap_encrypted() to get an encrypted mapping, but this
+	 * is normal RAM so can be accessed directly.
+	 */
+	cpuid_info = (__force void *)ioremap_encrypted(sev_snp_cpuid_pa,
+						       sev_snp_cpuid_sz);
+	if (!cpuid_info)
+		return -EIO;
+
+	return 0;
+}
+
+arch_initcall(sev_snp_cpuid_init_remap);
+
+#endif /* __BOOT_COMPRESSED */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index ca78711620e0..02c172ab97de 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1044,6 +1044,8 @@  static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
 	early_gdt_descr.address = (unsigned long)get_cpu_gdt_rw(cpu);
 	initial_code = (unsigned long)start_secondary;
 	initial_stack  = idle->thread.sp;
+	/* don't repeat IDT setup work specific to the BSP */
+	initial_idt_setup = (unsigned long)early_setup_idt_common;
 
 	/* Enable the espfix hack for this CPU */
 	init_espfix_ap(cpu);