
[v9,17/43] x86/kernel: Make the .bss..decrypted section shared in RMP table

Message ID 20220128171804.569796-18-brijesh.singh@amd.com
State New
Series Add AMD Secure Nested Paging (SEV-SNP) Guest Support

Commit Message

Brijesh Singh Jan. 28, 2022, 5:17 p.m. UTC
The encryption attribute for the .bss..decrypted section is cleared in the
initial page table build because the section contains data that needs to be
shared between the guest and the hypervisor.

When SEV-SNP is active, just clearing the encryption attribute in the page
table is not enough; the page state also needs to be updated in the RMP
table.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kernel/head64.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
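
Condensed, the change pairs the existing page-table adjustment with an RMP
page-state change for each 2MB chunk of the section. A rough sketch of the
resulting loop (a restatement of the hunk below, not a drop-in replacement
for it):

	/*
	 * For every PMD-sized chunk of .bss..decrypted: update the RMP
	 * entry first, then strip the encryption mask from the identity
	 * mapping so both views agree that the memory is shared.
	 */
	for (vaddr = (unsigned long)__start_bss_decrypted;
	     vaddr < (unsigned long)__end_bss_decrypted; vaddr += PMD_SIZE) {
		/* RMP table: mark the pages in this 2MB range as shared */
		early_snp_set_memory_shared(__pa(vaddr), __pa(vaddr), PTRS_PER_PMD);

		/* Page table: clear the encryption (C) bit in the PMD */
		pmd[pmd_index(vaddr)] -= sme_get_me_mask();
	}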

Comments

Borislav Petkov Feb. 2, 2022, 11:06 a.m. UTC | #1
On Fri, Jan 28, 2022 at 11:17:38AM -0600, Brijesh Singh wrote:
> The encryption attribute for the .bss..decrypted section is cleared in the
> initial page table build because the section contains data that needs to be
> shared between the guest and the hypervisor.
> 
> When SEV-SNP is active, just clearing the encryption attribute in the page
> table is not enough; the page state also needs to be updated in the RMP
> table.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/kernel/head64.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
> index 8075e91cff2b..1239bc104cda 100644
> --- a/arch/x86/kernel/head64.c
> +++ b/arch/x86/kernel/head64.c
> @@ -143,7 +143,21 @@ static unsigned long sme_postprocess_startup(struct boot_params *bp, pmdval_t *p
>  	if (sme_get_me_mask()) {
>  		vaddr = (unsigned long)__start_bss_decrypted;
>  		vaddr_end = (unsigned long)__end_bss_decrypted;
> +
>  		for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
> +			/*
> +			 * When SEV-SNP is active then transition the page to
> +			 * shared in the RMP table so that it is consistent with
> +			 * the page table attribute change.
> +			 *
> +			 * At this point, kernel is running in identity mapped mode.
> +			 * The __start_bss_decrypted is a regular kernel address. The
> +			 * early_snp_set_memory_shared() requires a valid virtual
> +			 * address, so use __pa() against __start_bss_decrypted to
> +			 * get valid virtual address.
> +			 */

How's that?

                        /*
                         * On SNP, transition the page to shared in the RMP table so that
                         * it is consistent with the page table attribute change.
                         *
                         * __start_bss_decrypted has a virtual address in the high range
                         * mapping (kernel .text). PVALIDATE, by way of
                         * early_snp_set_memory_shared(), requires a valid virtual
                         * address but the kernel is currently running off of the identity
                         * mapping so use __pa() to get a *currently* valid virtual address.
                         */
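
In other words (restating the call from the hunk, not adding new behaviour):
while the kernel is still running on the identity mapping, the physical
address returned by __pa() is itself a mapped virtual address, which is why
the same value is passed for both the vaddr and paddr arguments:

	unsigned long paddr = __pa(vaddr);	/* physical address of this 2MB chunk */

	/*
	 * On the identity map, paddr is also a live virtual address, so it
	 * satisfies the valid-VA requirement of the PVALIDATE path.
	 */
	early_snp_set_memory_shared(paddr, paddr, PTRS_PER_PMD);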

Patch

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 8075e91cff2b..1239bc104cda 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -143,7 +143,21 @@  static unsigned long sme_postprocess_startup(struct boot_params *bp, pmdval_t *p
 	if (sme_get_me_mask()) {
 		vaddr = (unsigned long)__start_bss_decrypted;
 		vaddr_end = (unsigned long)__end_bss_decrypted;
+
 		for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
+			/*
+			 * When SEV-SNP is active then transition the page to
+			 * shared in the RMP table so that it is consistent with
+			 * the page table attribute change.
+			 *
+			 * At this point, kernel is running in identity mapped mode.
+			 * The __start_bss_decrypted is a regular kernel address. The
+			 * early_snp_set_memory_shared() requires a valid virtual
+			 * address, so use __pa() against __start_bss_decrypted to
+			 * get valid virtual address.
+			 */
+			early_snp_set_memory_shared(__pa(vaddr), __pa(vaddr), PTRS_PER_PMD);
+
 			i = pmd_index(vaddr);
 			pmd[i] -= sme_get_me_mask();
 		}