
[Part1,RFC,v4,09/36] x86/compressed: Add helper for validating pages in the decompression stage

Message ID 20210707181506.30489-10-brijesh.singh@amd.com
State New
Series Add AMD Secure Nested Paging (SEV-SNP) Guest Support

Commit Message

Brijesh Singh July 7, 2021, 6:14 p.m. UTC
Many of the integrity guarantees of SEV-SNP are enforced through the
Reverse Map Table (RMP). Each RMP entry contains the GPA at which a
particular page of DRAM should be mapped. A VM can request the
hypervisor to add pages to the RMP table via the Page State Change
VMGEXIT defined in the GHCB specification. Inside each RMP entry is a
Validated flag; this flag is automatically cleared to 0 by the CPU
hardware when a new RMP entry is created for a guest. Each VM page can
be either validated or invalidated, as indicated by the Validated flag
in the RMP entry. A memory access to a private page that is not
validated generates a #VC exception. A VM must use the PVALIDATE
instruction to validate a private page before using it.
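
For reference, the pvalidate() helper used later in this patch is
defined elsewhere in this series (asm/sev.h); a minimal sketch of it,
assuming the CC_SET()/CC_OUT() flag-output helpers from asm/asm.h and
the PVALIDATE_FAIL_NOUPDATE return code from the same series, looks
roughly like this:

static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
{
	bool no_rmpupdate;
	int rc;

	/* Raw opcode bytes for PVALIDATE; the mnemonic needs binutils 2.36+ */
	asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFF\n\t"
		     CC_SET(c)
		     : CC_OUT(c) (no_rmpupdate), "=a"(rc)
		     : "a"(vaddr), "c"(rmp_psize), "d"(validate)
		     : "memory", "cc");

	/* CF set means the Validated bit was already in the requested state */
	if (no_rmpupdate)
		return PVALIDATE_FAIL_NOUPDATE;

	return rc;
}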

To maintain the security guarantees of SEV-SNP guests, when transitioning
pages from private to shared, the guest must invalidate the pages before
asking the hypervisor to change the page state to shared in the RMP table.

Conversely, after pages are mapped private in the page table, the guest
must issue a Page State Change VMGEXIT to make the pages private in the
RMP table and then validate them.
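
The ordering matters in both directions. Condensed, the body of the
__page_state_change() helper added below does the following for a single
4K page (any failure terminates the guest):

	u64 val;

	/* private -> shared: invalidate the page *before* the RMP change */
	if (op == SNP_PAGE_STATE_SHARED && pvalidate(paddr, RMP_PG_SIZE_4K, 0))
		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);

	/* Request the RMP state change via the GHCB MSR protocol */
	sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
	VMGEXIT();

	/* A non-PSC response code or a non-zero error field is fatal */
	val = sev_es_rd_ghcb_msr();
	if (GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP || GHCB_MSR_PSC_RESP_VAL(val))
		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);

	/* shared -> private: validate the page only *after* the RMP change */
	if (op == SNP_PAGE_STATE_PRIVATE && pvalidate(paddr, RMP_PG_SIZE_4K, 1))
		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);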

At boot, the BIOS should have validated the entire system memory. During
the kernel decompression stage, the #VC handler uses set_page_decrypted()
to make the GHCB page shared (i.e. clear the encryption attribute), and
when exiting the decompression stage it calls set_page_encrypted() to
make the page private again.

Add the snp_set_page_{private,shared}() helpers, which are used by
set_page_{encrypted,decrypted}() to change the page state in the RMP
table.
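
With the GHCB_MSR_PSC_* definitions added to sev-common.h by this patch,
a request packs the 0x014 request code into bits 11:0, the GFN into bits
51:12 and the operation into bits 55:52; the 0x015 response carries an
error code in bits 63:32. A worked example (illustrative only, the helper
name below is hypothetical):

/* Encode a PSC request to make the 4K page at paddr shared */
static u64 example_psc_shared_request(unsigned long paddr)
{
	u64 gfn = paddr >> PAGE_SHIFT;	/* e.g. paddr 0x100000 -> GFN 0x100 */

	/* For GFN 0x100 and SNP_PAGE_STATE_SHARED (2): 0x0020000000100014 */
	return GHCB_MSR_PSC_REQ_GFN(gfn, SNP_PAGE_STATE_SHARED);
}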

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/boot/compressed/ident_map_64.c | 17 +++++++++-
 arch/x86/boot/compressed/misc.h         |  6 ++++
 arch/x86/boot/compressed/sev.c          | 41 +++++++++++++++++++++++++
 arch/x86/include/asm/sev-common.h       | 17 ++++++++++
 4 files changed, 80 insertions(+), 1 deletion(-)

Comments

Borislav Petkov Aug. 13, 2021, 10:22 a.m. UTC | #1
On Wed, Jul 07, 2021 at 01:14:39PM -0500, Brijesh Singh wrote:
> @@ -274,16 +274,31 @@ static int set_clr_page_flags(struct x86_mapping_info *info,
>  	/*
>  	 * Changing encryption attributes of a page requires to flush it from
>  	 * the caches.
> +	 *
> +	 * If the encryption attribute is being cleared, then change the page
> +	 * state to shared in the RMP table.

That comment...

>  	 */
> -	if ((set | clr) & _PAGE_ENC)
> +	if ((set | clr) & _PAGE_ENC) {
>  		clflush_page(address);
>  

... goes here:

<---

> +		if (clr)
> +			snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);
> +	}
> +

...

> diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
> index 2f3081e9c78c..f386d45a57b6 100644
> --- a/arch/x86/boot/compressed/sev.c
> +++ b/arch/x86/boot/compressed/sev.c
> @@ -164,6 +164,47 @@ static bool is_vmpl0(void)
>  	return true;
>  }
>  
> +static void __page_state_change(unsigned long paddr, int op)

That op should be:

enum psc_op {
	SNP_PAGE_STATE_SHARED,
	SNP_PAGE_STATE_PRIVATE,
};

and have

static void __page_state_change(unsigned long paddr, enum psc_op op)

so that the compiler can check you're at least passing from the correct
set of defines.

> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index ea508835ab33..aee07d1bb138 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -45,6 +45,23 @@
>  		(((unsigned long)reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS) | \
>  		(((unsigned long)fn) << GHCB_MSR_CPUID_FUNC_POS))
>  
> +/* SNP Page State Change */
> +#define GHCB_MSR_PSC_REQ		0x014
> +#define SNP_PAGE_STATE_PRIVATE		1
> +#define SNP_PAGE_STATE_SHARED		2
> +#define GHCB_MSR_PSC_GFN_POS		12
> +#define GHCB_MSR_PSC_GFN_MASK		GENMASK_ULL(39, 0)
> +#define GHCB_MSR_PSC_OP_POS		52
> +#define GHCB_MSR_PSC_OP_MASK		0xf
> +#define GHCB_MSR_PSC_REQ_GFN(gfn, op)	\
> +	(((unsigned long)((op) & GHCB_MSR_PSC_OP_MASK) << GHCB_MSR_PSC_OP_POS) | \
> +	((unsigned long)((gfn) & GHCB_MSR_PSC_GFN_MASK) << GHCB_MSR_PSC_GFN_POS) | \
> +	GHCB_MSR_PSC_REQ)
> +
> +#define GHCB_MSR_PSC_RESP		0x015
> +#define GHCB_MSR_PSC_ERROR_POS		32
> +#define GHCB_MSR_PSC_RESP_VAL(val)	((val) >> GHCB_MSR_PSC_ERROR_POS)
> +

Also get rid of excessive defines...
Brijesh Singh Aug. 13, 2021, 2:21 p.m. UTC | #2
On 8/13/21 5:22 AM, Borislav Petkov wrote:
>> +static void __page_state_change(unsigned long paddr, int op)
> 
> That op should be:
> 
> enum psc_op {
> 	SNP_PAGE_STATE_SHARED,
> 	SNP_PAGE_STATE_PRIVATE,
> };
> 

Noted.

> and have
> 
> static void __page_state_change(unsigned long paddr, enum psc_op op)
> 
> so that the compiler can check you're at least passing from the correct
> set of defines.
> 
>> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
>> index ea508835ab33..aee07d1bb138 100644
>> --- a/arch/x86/include/asm/sev-common.h
>> +++ b/arch/x86/include/asm/sev-common.h
>> @@ -45,6 +45,23 @@
>>   		(((unsigned long)reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS) | \
>>   		(((unsigned long)fn) << GHCB_MSR_CPUID_FUNC_POS))
>>   
>> +/* SNP Page State Change */
>> +#define GHCB_MSR_PSC_REQ		0x014
>> +#define SNP_PAGE_STATE_PRIVATE		1
>> +#define SNP_PAGE_STATE_SHARED		2
>> +#define GHCB_MSR_PSC_GFN_POS		12
>> +#define GHCB_MSR_PSC_GFN_MASK		GENMASK_ULL(39, 0)
>> +#define GHCB_MSR_PSC_OP_POS		52
>> +#define GHCB_MSR_PSC_OP_MASK		0xf
>> +#define GHCB_MSR_PSC_REQ_GFN(gfn, op)	\
>> +	(((unsigned long)((op) & GHCB_MSR_PSC_OP_MASK) << GHCB_MSR_PSC_OP_POS) | \
>> +	((unsigned long)((gfn) & GHCB_MSR_PSC_GFN_MASK) << GHCB_MSR_PSC_GFN_POS) | \
>> +	GHCB_MSR_PSC_REQ)
>> +
>> +#define GHCB_MSR_PSC_RESP		0x015
>> +#define GHCB_MSR_PSC_ERROR_POS		32
>> +#define GHCB_MSR_PSC_RESP_VAL(val)	((val) >> GHCB_MSR_PSC_ERROR_POS)
>> +
> 
> Also get rid of excessive defines...

I am getting conflicting review comments on function naming, comment
style, macros, etc. While addressing the feedback I try to incorporate
all of those comments; let's see how I do in the next rev.

thanks
Borislav Petkov Aug. 13, 2021, 3:19 p.m. UTC | #3
On Fri, Aug 13, 2021 at 09:21:14AM -0500, Brijesh Singh wrote:
> > Also get rid of excessive defines...
>
> I am getting conflicting review comments on function naming, comment
> style, macros, etc. While addressing the feedback I try to incorporate
> all of those comments; let's see how I do in the next rev.

What do you mean? Example?

With "Also get rid of eccessive defines..." I mean this here from
earlier this week:

https://lkml.kernel.org/r/YRJOkQe0W9/HyjjQ@zn.tnic

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Patch

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index f7213d0943b8..59befc610993 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -274,16 +274,31 @@  static int set_clr_page_flags(struct x86_mapping_info *info,
 	/*
 	 * Changing encryption attributes of a page requires to flush it from
 	 * the caches.
+	 *
+	 * If the encryption attribute is being cleared, then change the page
+	 * state to shared in the RMP table.
 	 */
-	if ((set | clr) & _PAGE_ENC)
+	if ((set | clr) & _PAGE_ENC) {
 		clflush_page(address);
 
+		if (clr)
+			snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);
+	}
+
 	/* Update PTE */
 	pte = *ptep;
 	pte = pte_set_flags(pte, set);
 	pte = pte_clear_flags(pte, clr);
 	set_pte(ptep, pte);
 
+	/*
+	 * If the encryption attribute is being set, then change the page state to
+	 * private in the RMP entry. The page state must be done after the PTE
+	 * is updated.
+	 */
+	if (set & _PAGE_ENC)
+		snp_set_page_private(pte_pfn(*ptep) << PAGE_SHIFT);
+
 	/* Flush TLB after changing encryption attribute */
 	write_cr3(top_level_pgt);
 
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 31139256859f..822e0c254b9a 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -121,12 +121,18 @@  void set_sev_encryption_mask(void);
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 void sev_es_shutdown_ghcb(void);
 extern bool sev_es_check_ghcb_fault(unsigned long address);
+void snp_set_page_private(unsigned long paddr);
+void snp_set_page_shared(unsigned long paddr);
+
 #else
 static inline void sev_es_shutdown_ghcb(void) { }
 static inline bool sev_es_check_ghcb_fault(unsigned long address)
 {
 	return false;
 }
+static inline void snp_set_page_private(unsigned long paddr) { }
+static inline void snp_set_page_shared(unsigned long paddr) { }
+
 #endif
 
 /* acpi.c */
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 2f3081e9c78c..f386d45a57b6 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -164,6 +164,47 @@  static bool is_vmpl0(void)
 	return true;
 }
 
+static void __page_state_change(unsigned long paddr, int op)
+{
+	u64 val;
+
+	if (!sev_snp_enabled())
+		return;
+
+	/*
+	 * If private -> shared then invalidate the page before requesting the
+	 * state change in the RMP table.
+	 */
+	if ((op == SNP_PAGE_STATE_SHARED) && pvalidate(paddr, RMP_PG_SIZE_4K, 0))
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
+
+	/* Issue VMGEXIT to change the page state in RMP table. */
+	sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
+	VMGEXIT();
+
+	/* Read the response of the VMGEXIT. */
+	val = sev_es_rd_ghcb_msr();
+	if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+
+	/*
+	 * Now that page is added in the RMP table, validate it so that it is
+	 * consistent with the RMP entry.
+	 */
+	if ((op == SNP_PAGE_STATE_PRIVATE) && pvalidate(paddr, RMP_PG_SIZE_4K, 1))
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
+}
+
+void snp_set_page_private(unsigned long paddr)
+{
+	__page_state_change(paddr, SNP_PAGE_STATE_PRIVATE);
+}
+
+void snp_set_page_shared(unsigned long paddr)
+{
+	__page_state_change(paddr, SNP_PAGE_STATE_SHARED);
+}
+
 static bool do_early_sev_setup(void)
 {
 	if (!sev_es_negotiate_protocol())
diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index ea508835ab33..aee07d1bb138 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -45,6 +45,23 @@ 
 		(((unsigned long)reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS) | \
 		(((unsigned long)fn) << GHCB_MSR_CPUID_FUNC_POS))
 
+/* SNP Page State Change */
+#define GHCB_MSR_PSC_REQ		0x014
+#define SNP_PAGE_STATE_PRIVATE		1
+#define SNP_PAGE_STATE_SHARED		2
+#define GHCB_MSR_PSC_GFN_POS		12
+#define GHCB_MSR_PSC_GFN_MASK		GENMASK_ULL(39, 0)
+#define GHCB_MSR_PSC_OP_POS		52
+#define GHCB_MSR_PSC_OP_MASK		0xf
+#define GHCB_MSR_PSC_REQ_GFN(gfn, op)	\
+	(((unsigned long)((op) & GHCB_MSR_PSC_OP_MASK) << GHCB_MSR_PSC_OP_POS) | \
+	((unsigned long)((gfn) & GHCB_MSR_PSC_GFN_MASK) << GHCB_MSR_PSC_GFN_POS) | \
+	GHCB_MSR_PSC_REQ)
+
+#define GHCB_MSR_PSC_RESP		0x015
+#define GHCB_MSR_PSC_ERROR_POS		32
+#define GHCB_MSR_PSC_RESP_VAL(val)	((val) >> GHCB_MSR_PSC_ERROR_POS)
+
 /* GHCB Hypervisor Feature Request */
 #define GHCB_MSR_HV_FT_REQ	0x080
 #define GHCB_MSR_HV_FT_RESP	0x081