From patchwork Tue Apr 5 23:43:36 2022
X-Patchwork-Id: 558989
From: "Kirill A. Shutemov"
To: Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton,
    Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes, Vlastimil Babka,
    Tom Lendacky, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini, Ingo Molnar,
    Varad Gautam, Dario Faggioli, Dave Hansen, Brijesh Singh, Mike Rapoport,
    David Hildenbrand, x86@kernel.org, linux-mm@kvack.org,
    linux-coco@lists.linux.dev, linux-efi@vger.kernel.org,
    linux-kernel@vger.kernel.org, "Kirill A. Shutemov", Mike Rapoport
Subject: [PATCHv4 1/8] mm: Add support for unaccepted memory
Date: Wed, 6 Apr 2022 02:43:36 +0300
Message-Id: <20220405234343.74045-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220405234343.74045-1-kirill.shutemov@linux.intel.com>

UEFI Specification version 2.9 introduces the concept of memory
acceptance. Some Virtual Machine platforms, such as Intel TDX or AMD
SEV-SNP, require memory to be accepted before it can be used by the
guest. Accepting happens via a protocol specific to the Virtual Machine
platform.
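For orientation, here is a minimal sketch (illustrative only, not the actual implementation) of the contract this series sets up between core-mm and the architecture. The helper names accept_memory(), memory_is_unaccepted(), accept_page() and PageUnaccepted() come from the patches below; example_alloc_hook() is a hypothetical caller added purely to show where acceptance happens.

```c
#include <linux/types.h>
#include <linux/mm_types.h>
#include <linux/page-flags.h>

/*
 * Provided by the architecture (later patches in the series): accept all
 * memory in [start, end) and report whether any of it still needs
 * acceptance.
 */
void accept_memory(phys_addr_t start, phys_addr_t end);
bool memory_is_unaccepted(phys_addr_t start, phys_addr_t end);

/* Added by this patch in mm/page_alloc.c: accept one buddy page. */
void accept_page(struct page *page, unsigned int order);

/*
 * Hypothetical caller-side view of the page allocator change: a page
 * marked PageUnaccepted() is accepted on its first allocation and the
 * marker is cleared, so later allocations of the same page pay nothing.
 */
static void example_alloc_hook(struct page *page, unsigned int order)
{
	if (PageUnaccepted(page))
		accept_page(page, order);	/* accepts, then clears the flag */
}
```

The point of the design is that the per-page flag check is the only cost on the hot path once boot-time acceptance has finished.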
Accepting memory is costly and it makes the VMM allocate memory for the
accepted guest physical address range. It's better to postpone memory
acceptance until the memory is needed. It lowers boot time and reduces
memory overhead.

Support of such memory requires a few changes in core-mm code:

 - memblock has to accept memory on allocation;

 - page allocator has to accept memory on the first allocation of the
   page;

The memblock change is trivial.

The page allocator is modified to accept pages on the first allocation.
PageUnaccepted() is used to indicate that the page requires acceptance.

The kernel only needs to accept memory once after boot, so during boot
and the warm-up phase there will be a lot of memory acceptance. After
things have settled down, the only price of the feature is a couple of
checks for PageUnaccepted() in the allocate and free paths. The check
refers to a hot variable (that also encodes PageBuddy()), so it is cheap
and not visible on profiles.

The architecture has to provide two helpers if it wants to support
unaccepted memory:

 - accept_memory() makes a range of physical addresses accepted.

 - memory_is_unaccepted() checks whether anything within the range of
   physical addresses requires acceptance.

Signed-off-by: Kirill A. Shutemov
Acked-by: Mike Rapoport	# memblock
---
 include/linux/page-flags.h | 24 ++++++++++++++++
 mm/internal.h              | 11 ++++++++
 mm/memblock.c              |  9 ++++++
 mm/page_alloc.c            | 57 ++++++++++++++++++++++++++++++++++++--
 4 files changed, 99 insertions(+), 2 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 9d8eeaa67d05..aaaedc111092 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -928,6 +928,7 @@ static inline bool is_page_hwpoison(struct page *page)
 #define PG_offline	0x00000100
 #define PG_table	0x00000200
 #define PG_guard	0x00000400
+#define PG_unaccepted	0x00000800

 #define PageType(page, flag)						\
 	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
@@ -953,6 +954,18 @@ static __always_inline void __ClearPage##uname(struct page *page)	\
 	page->page_type |= PG_##lname;					\
 }

+#define PAGE_TYPE_OPS_FALSE(uname)					\
+static __always_inline int Page##uname(struct page *page)		\
+{									\
+	return false;							\
+}									\
+static __always_inline void __SetPage##uname(struct page *page)	\
+{									\
+}									\
+static __always_inline void __ClearPage##uname(struct page *page)	\
+{									\
+}
+
 /*
  * PageBuddy() indicates that the page is free and in the buddy system
  * (see mm/page_alloc.c).
@@ -983,6 +996,17 @@ PAGE_TYPE_OPS(Buddy, buddy)
  */
 PAGE_TYPE_OPS(Offline, offline)

+/*
+ * PageUnaccepted() indicates that the page has to be "accepted" before it can
+ * be used. Page allocator has to call accept_page() before returning the page
+ * to the caller.
+ */ +#ifdef CONFIG_UNACCEPTED_MEMORY +PAGE_TYPE_OPS(Unaccepted, unaccepted) +#else +PAGE_TYPE_OPS_FALSE(Unaccepted) +#endif + extern void page_offline_freeze(void); extern void page_offline_thaw(void); extern void page_offline_begin(void); diff --git a/mm/internal.h b/mm/internal.h index cf16280ce132..10302fe857c4 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -758,4 +758,15 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags); DECLARE_PER_CPU(struct per_cpu_nodestat, boot_nodestats); +#ifndef CONFIG_UNACCEPTED_MEMORY +static inline bool memory_is_unaccepted(phys_addr_t start, phys_addr_t end) +{ + return false; +} + +static inline void accept_memory(phys_addr_t start, phys_addr_t end) +{ +} +#endif + #endif /* __MM_INTERNAL_H */ diff --git a/mm/memblock.c b/mm/memblock.c index e4f03a6e8e56..a1f7f8b304d5 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -1405,6 +1405,15 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size, */ kmemleak_alloc_phys(found, size, 0, 0); + /* + * Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP, + * require memory to be accepted before it can be used by the + * guest. + * + * Accept the memory of the allocated buffer. + */ + accept_memory(found, found + size); + return found; } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 2db95780e003..53f4aa1c92a7 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -121,6 +121,12 @@ typedef int __bitwise fpi_t; */ #define FPI_SKIP_KASAN_POISON ((__force fpi_t)BIT(2)) +/* + * Check if the page needs to be marked as PageUnaccepted(). + * Used for the new pages added to the buddy allocator for the first time. + */ +#define FPI_UNACCEPTED ((__force fpi_t)BIT(3)) + /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */ static DEFINE_MUTEX(pcp_batch_high_lock); #define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8) @@ -1023,6 +1029,26 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn, return page_is_buddy(higher_page, higher_buddy, order + 1); } +static void accept_page(struct page *page, unsigned int order) +{ + phys_addr_t start = page_to_phys(page); + int i; + + accept_memory(start, start + (PAGE_SIZE << order)); + + for (i = 0; i < (1 << order); i++) { + if (PageUnaccepted(page + i)) + __ClearPageUnaccepted(page + i); + } +} + +static bool page_is_unaccepted(struct page *page, unsigned int order) +{ + phys_addr_t start = page_to_phys(page); + + return memory_is_unaccepted(start, start + (PAGE_SIZE << order)); +} + /* * Freeing function for a buddy system allocator. * @@ -1058,6 +1084,7 @@ static inline void __free_one_page(struct page *page, unsigned long combined_pfn; struct page *buddy; bool to_tail; + bool unaccepted = PageUnaccepted(page); VM_BUG_ON(!zone_is_initialized(zone)); VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page); @@ -1089,6 +1116,11 @@ static inline void __free_one_page(struct page *page, clear_page_guard(zone, buddy, order, migratetype); else del_page_from_free_list(buddy, zone, order); + + /* Mark page unaccepted if any of merged pages were unaccepted */ + if (PageUnaccepted(buddy)) + unaccepted = true; + combined_pfn = buddy_pfn & pfn; page = page + (combined_pfn - pfn); pfn = combined_pfn; @@ -1124,6 +1156,17 @@ static inline void __free_one_page(struct page *page, done_merging: set_buddy_order(page, order); + /* + * Check if the page needs to be marked as PageUnaccepted(). + * Used for the new pages added to the buddy allocator for the first + * time. 
+ */ + if (!unaccepted && (fpi_flags & FPI_UNACCEPTED)) + unaccepted = page_is_unaccepted(page, order); + + if (unaccepted) + __SetPageUnaccepted(page); + if (fpi_flags & FPI_TO_TAIL) to_tail = true; else if (is_shuffle_order(order)) @@ -1149,7 +1192,8 @@ static inline void __free_one_page(struct page *page, static inline bool page_expected_state(struct page *page, unsigned long check_flags) { - if (unlikely(atomic_read(&page->_mapcount) != -1)) + if (unlikely(atomic_read(&page->_mapcount) != -1) && + !PageUnaccepted(page)) return false; if (unlikely((unsigned long)page->mapping | @@ -1654,7 +1698,8 @@ void __free_pages_core(struct page *page, unsigned int order) * Bypass PCP and place fresh pages right to the tail, primarily * relevant for memory onlining. */ - __free_pages_ok(page, order, FPI_TO_TAIL | FPI_SKIP_KASAN_POISON); + __free_pages_ok(page, order, + FPI_TO_TAIL | FPI_SKIP_KASAN_POISON | FPI_UNACCEPTED); } #ifdef CONFIG_NUMA @@ -1807,6 +1852,7 @@ static void __init deferred_free_range(unsigned long pfn, return; } + accept_memory(pfn << PAGE_SHIFT, (pfn + nr_pages) << PAGE_SHIFT); for (i = 0; i < nr_pages; i++, page++, pfn++) { if ((pfn & (pageblock_nr_pages - 1)) == 0) set_pageblock_migratetype(page, MIGRATE_MOVABLE); @@ -2266,6 +2312,10 @@ static inline void expand(struct zone *zone, struct page *page, if (set_page_guard(zone, &page[size], high, migratetype)) continue; + /* Transfer PageUnaccepted() to the newly split pages */ + if (PageUnaccepted(page)) + __SetPageUnaccepted(&page[size]); + add_to_free_list(&page[size], zone, high, migratetype); set_buddy_order(&page[size], high); } @@ -2396,6 +2446,9 @@ inline void post_alloc_hook(struct page *page, unsigned int order, */ kernel_unpoison_pages(page, 1 << order); + if (PageUnaccepted(page)) + accept_page(page, order); + /* * As memory initialization might be integrated into KASAN, * KASAN unpoisoning and memory initializion code must be From patchwork Tue Apr 5 23:43:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov"
X-Patchwork-Id: 558988
From: "Kirill A. Shutemov"
Subject: [PATCHv4 2/8] efi/x86: Get full memory map in allocate_e820()
Date: Wed, 6 Apr 2022 02:43:37 +0300
Message-Id: <20220405234343.74045-3-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220405234343.74045-1-kirill.shutemov@linux.intel.com>

Currently allocate_e820() is only interested in the size of the memory
map and the size of each memory descriptor, which is enough to determine
how many e820 entries the kernel needs.

UEFI Specification version 2.9 introduces a new memory type -- unaccepted
memory. To track unaccepted memory, the kernel needs to allocate a
bitmap. The size of the bitmap depends on the maximum physical address
present in the system. A full memory map is required to find the maximum
address.

Modify allocate_e820() to get a full memory map.
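To make the dependency concrete, here is an illustrative sketch of the sizing computation that needs the full map. It mirrors the allocate_e820() changes in patch 3 (one bitmap bit per 2MiB, i.e. per PMD_SIZE on x86-64); the function name is made up for the example and the code is a sketch, not the patch itself.

```c
#include <linux/efi.h>
#include <linux/kernel.h>

/*
 * Sketch: the unaccepted-memory bitmap must cover every physical address
 * in the system, so its size follows from the highest address in the EFI
 * memory map -- which is only known once the full map has been fetched.
 */
static u64 unaccepted_bitmap_bytes(void *map, unsigned long map_size,
				   unsigned long desc_size)
{
	u64 max_addr = 0;
	unsigned long i;

	for (i = 0; i < map_size / desc_size; i++) {
		efi_memory_desc_t *d = map + i * desc_size;
		u64 end = d->phys_addr + d->num_pages * PAGE_SIZE;

		if (end > max_addr)
			max_addr = end;
	}

	/* One bit per PMD_SIZE (2MiB) chunk, rounded up to whole bytes. */
	return DIV_ROUND_UP(max_addr, PMD_SIZE * BITS_PER_BYTE);
}
```

The old code asked firmware only for the map and descriptor sizes, which is enough for counting e820 slots but not for walking descriptors like this.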
This is preparation for the next patch that implements handling of unaccepted memory in EFI stub. Signed-off-by: Kirill A. Shutemov --- drivers/firmware/efi/libstub/x86-stub.c | 28 ++++++++++++------------- 1 file changed, 13 insertions(+), 15 deletions(-) diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c index 01ddd4502e28..d18cac8ab436 100644 --- a/drivers/firmware/efi/libstub/x86-stub.c +++ b/drivers/firmware/efi/libstub/x86-stub.c @@ -569,30 +569,28 @@ static efi_status_t alloc_e820ext(u32 nr_desc, struct setup_data **e820ext, } static efi_status_t allocate_e820(struct boot_params *params, + struct efi_boot_memmap *map, struct setup_data **e820ext, u32 *e820ext_size) { - unsigned long map_size, desc_size, map_key; efi_status_t status; - __u32 nr_desc, desc_version; + __u32 nr_desc; - /* Only need the size of the mem map and size of each mem descriptor */ - map_size = 0; - status = efi_bs_call(get_memory_map, &map_size, NULL, &map_key, - &desc_size, &desc_version); - if (status != EFI_BUFFER_TOO_SMALL) - return (status != EFI_SUCCESS) ? status : EFI_UNSUPPORTED; - - nr_desc = map_size / desc_size + EFI_MMAP_NR_SLACK_SLOTS; + status = efi_get_memory_map(map); + if (status != EFI_SUCCESS) + return status; - if (nr_desc > ARRAY_SIZE(params->e820_table)) { - u32 nr_e820ext = nr_desc - ARRAY_SIZE(params->e820_table); + nr_desc = *map->map_size / *map->desc_size; + if (nr_desc > ARRAY_SIZE(params->e820_table) - EFI_MMAP_NR_SLACK_SLOTS) { + u32 nr_e820ext = nr_desc - ARRAY_SIZE(params->e820_table) + + EFI_MMAP_NR_SLACK_SLOTS; status = alloc_e820ext(nr_e820ext, e820ext, e820ext_size); if (status != EFI_SUCCESS) - return status; + goto out; } - +out: + efi_bs_call(free_pool, *map->map); return EFI_SUCCESS; } @@ -642,7 +640,7 @@ static efi_status_t exit_boot(struct boot_params *boot_params, void *handle) priv.boot_params = boot_params; priv.efi = &boot_params->efi_info; - status = allocate_e820(boot_params, &e820ext, &e820ext_size); + status = allocate_e820(boot_params, &map, &e820ext, &e820ext_size); if (status != EFI_SUCCESS) return status; From patchwork Tue Apr 5 23:43:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov"
X-Patchwork-Id: 558378
From: "Kirill A. Shutemov"
Subject: [PATCHv4 3/8] efi/x86: Implement support for unaccepted memory
Date: Wed, 6 Apr 2022 02:43:38 +0300
Message-Id: <20220405234343.74045-4-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220405234343.74045-1-kirill.shutemov@linux.intel.com>

UEFI Specification version 2.9 introduces the concept of memory
acceptance: some Virtual Machine platforms, such as Intel TDX or AMD
SEV-SNP, require memory to be accepted before it can be used by the
guest. Accepting happens via a protocol specific to the Virtual Machine
platform.

Accepting memory is costly and it makes the VMM allocate memory for the
accepted guest physical address range. It's better to postpone memory
acceptance until the memory is needed. It lowers boot time and reduces
memory overhead.
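The following paragraphs describe the 2MiB-granularity bitmap this patch introduces. As a rough, illustrative sketch of how such a bitmap is consumed later in the series (it mirrors the accept_memory() helpers added in patches 4 and 6; platform_accept_memory() is a stand-in for the platform-specific call, and locking is omitted):

```c
#include <linux/bitmap.h>
#include <linux/find.h>
#include <linux/pgtable.h>

/* Stand-in for the platform-specific acceptance protocol (e.g. TDX). */
void platform_accept_memory(phys_addr_t start, phys_addr_t end);

/*
 * Accept every still-unaccepted 2MiB chunk overlapping [start, end),
 * then clear its bit so the chunk is never accepted twice.
 */
static void accept_range(unsigned long *bitmap, phys_addr_t start,
			 phys_addr_t end)
{
	unsigned int rs = start / PMD_SIZE;
	unsigned int re;

	for_each_set_bitrange_from(rs, re, bitmap, DIV_ROUND_UP(end, PMD_SIZE)) {
		platform_accept_memory((phys_addr_t)rs * PMD_SIZE,
				       (phys_addr_t)re * PMD_SIZE);
		bitmap_clear(bitmap, rs, re - rs);
	}
}
```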
The kernel needs to know what memory has been accepted. Firmware
communicates this information via the memory map: a new memory type --
EFI_UNACCEPTED_MEMORY -- indicates such memory.

Range-based tracking works fine for firmware, but it gets bulky for the
kernel: e820 has to be modified on every page acceptance. It leads to
table fragmentation, and there's a limited number of entries in the e820
table.

Another option is to mark such memory as usable in e820 and track whether
the range has been accepted in a bitmap. One bit in the bitmap represents
2MiB in the address space: one 4k page is enough to track 64GiB of
physical address space. In the worst-case scenario -- a huge hole in the
middle of the address space -- it needs 256MiB to handle 4PiB of the
address space.

Any unaccepted memory that is not aligned to 2M gets accepted upfront.

The bitmap is allocated and constructed in the EFI stub and passed down
to the kernel via boot_params. allocate_e820() allocates the bitmap if
unaccepted memory is present, according to the maximum address in the
memory map.

The same boot_params.unaccepted_memory can be used to pass the bitmap
between two kernels on kexec, but the use-case is not yet implemented.
Make KEXEC and UNACCEPTED_MEMORY mutually exclusive for now.

Signed-off-by: Kirill A. Shutemov
---
 Documentation/x86/zero-page.rst              |  1 +
 arch/x86/boot/compressed/Makefile            |  1 +
 arch/x86/boot/compressed/bitmap.c            | 24 ++++++++
 arch/x86/boot/compressed/unaccepted_memory.c | 53 +++++++++++++++++
 arch/x86/include/asm/unaccepted_memory.h     | 12 ++++
 arch/x86/include/uapi/asm/bootparam.h        |  3 +-
 drivers/firmware/efi/Kconfig                 | 15 +++++
 drivers/firmware/efi/efi.c                   |  1 +
 drivers/firmware/efi/libstub/x86-stub.c      | 62 +++++++++++++++++++-
 include/linux/efi.h                          |  3 +-
 10 files changed, 172 insertions(+), 3 deletions(-)
 create mode 100644 arch/x86/boot/compressed/bitmap.c
 create mode 100644 arch/x86/boot/compressed/unaccepted_memory.c
 create mode 100644 arch/x86/include/asm/unaccepted_memory.h

diff --git a/Documentation/x86/zero-page.rst b/Documentation/x86/zero-page.rst
index f088f5881666..8e3447a4b373 100644
--- a/Documentation/x86/zero-page.rst
+++ b/Documentation/x86/zero-page.rst
@@ -42,4 +42,5 @@ Offset/Size	Proto	Name		Meaning
 2D0/A00	ALL	e820_table	E820 memory map table (array of struct e820_entry)
 D00/1EC	ALL	eddbuf		EDD data (array of struct edd_info)
+ECC/008	ALL	unaccepted_memory	Bitmap of unaccepted memory (1bit == 2M)
 ===========	=====	=======================	=================================================

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 8fd0e6ae2e1f..09993797efa2 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -102,6 +102,7 @@ endif
 vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 vmlinux-objs-$(CONFIG_INTEL_TDX_GUEST) += $(obj)/tdx.o $(obj)/tdcall.o
+vmlinux-objs-$(CONFIG_UNACCEPTED_MEMORY) += $(obj)/bitmap.o $(obj)/unaccepted_memory.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o

 efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a

diff --git a/arch/x86/boot/compressed/bitmap.c b/arch/x86/boot/compressed/bitmap.c
new file mode 100644
index 000000000000..bf58b259380a
--- /dev/null
+++ b/arch/x86/boot/compressed/bitmap.c
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Taken from lib/string.c */
+
+#include
+
+void __bitmap_set(unsigned long *map, unsigned int start, int len)
+{
+	unsigned long *p = map + BIT_WORD(start);
+	const unsigned int size = start + len;
+	int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
+	unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
+
+	while (len - bits_to_set >= 0) {
+		*p |= mask_to_set;
+		len -= bits_to_set;
+		bits_to_set = BITS_PER_LONG;
+		mask_to_set = ~0UL;
+		p++;
+	}
+	if (len) {
+		mask_to_set &= BITMAP_LAST_WORD_MASK(size);
+		*p |= mask_to_set;
+	}
+}

diff --git a/arch/x86/boot/compressed/unaccepted_memory.c b/arch/x86/boot/compressed/unaccepted_memory.c
new file mode 100644
index 000000000000..d363acf59c08
--- /dev/null
+++ b/arch/x86/boot/compressed/unaccepted_memory.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include "error.h"
+#include "misc.h"
+
+static inline void __accept_memory(phys_addr_t start, phys_addr_t end)
+{
+	/* Platform-specific memory-acceptance call goes here */
+	error("Cannot accept memory");
+}
+
+void mark_unaccepted(struct boot_params *params, u64 start, u64 end)
+{
+	/*
+	 * The accepted memory bitmap only works at PMD_SIZE granularity.
+	 * If a request comes in to mark memory as unaccepted which is not
+	 * PMD_SIZE-aligned, simply accept the memory now since it can not be
+	 * *marked* as unaccepted.
+	 */

+	/*
+	 * Accept small regions that might not be able to be represented
+	 * in the bitmap:
+	 */
+	if (end - start < 2 * PMD_SIZE) {
+		__accept_memory(start, end);
+		return;
+	}

+	/*
+	 * No matter how the start and end are aligned, at least one unaccepted
+	 * PMD_SIZE area will remain.
+	 */

+	/* Immediately accept a <PMD_SIZE piece at the start: */
+	if (start & ~PMD_MASK) {
+		__accept_memory(start, round_up(start, PMD_SIZE));
+		start = round_up(start, PMD_SIZE);
+	}

+	/* Immediately accept a <PMD_SIZE piece at the end: */
+	if (end & ~PMD_MASK) {
+		__accept_memory(round_down(end, PMD_SIZE), end);
+		end = round_down(end, PMD_SIZE);
+	}

+	/*
+	 * 'start' and 'end' are now both PMD-aligned.
+	 * Record the range as being unaccepted:
+	 */
+	bitmap_set((unsigned long *)params->unaccepted_memory,
+		   start / PMD_SIZE, (end - start) / PMD_SIZE);
+}

diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h
new file mode 100644
index 000000000000..cbc24040b853
--- /dev/null
+++ b/arch/x86/include/asm/unaccepted_memory.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2020 Intel Corporation */
+#ifndef _ASM_X86_UNACCEPTED_MEMORY_H
+#define _ASM_X86_UNACCEPTED_MEMORY_H
+
+#include
+
+struct boot_params;
+
+void mark_unaccepted(struct boot_params *params, u64 start, u64 num);
+
+#endif

diff --git a/arch/x86/include/uapi/asm/bootparam.h b/arch/x86/include/uapi/asm/bootparam.h
index b25d3f82c2f3..16bc686a198d 100644
--- a/arch/x86/include/uapi/asm/bootparam.h
+++ b/arch/x86/include/uapi/asm/bootparam.h
@@ -217,7 +217,8 @@ struct boot_params {
 	struct boot_e820_entry e820_table[E820_MAX_ENTRIES_ZEROPAGE]; /* 0x2d0 */
 	__u8  _pad8[48];				/* 0xcd0 */
 	struct edd_info eddbuf[EDDMAXNR];		/* 0xd00 */
-	__u8  _pad9[276];				/* 0xeec */
+	__u64 unaccepted_memory;			/* 0xeec */
+	__u8  _pad9[268];				/* 0xef4 */
 } __attribute__((packed));

 /**

diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
index 2c3dac5ecb36..b17ceec757d0 100644
--- a/drivers/firmware/efi/Kconfig
+++ b/drivers/firmware/efi/Kconfig
@@ -243,6 +243,21 @@ config EFI_DISABLE_PCI_DMA
 	  options "efi=disable_early_pci_dma" or "efi=no_disable_early_pci_dma"
 	  may be used to override this option.

+config UNACCEPTED_MEMORY
+	bool
+	depends on EFI_STUB
+	depends on !KEXEC_CORE
+	help
+	   Some Virtual Machine platforms, such as Intel TDX, require
+	   some memory to be "accepted" by the guest before it can be used.
+	   This mechanism helps prevent malicious hosts from making changes
+	   to guest memory.
+
+	   UEFI specification v2.9 introduced the EFI_UNACCEPTED_MEMORY memory type.
+
+	   This option adds support for unaccepted memory and makes such memory
+	   usable by the kernel.
+ endmenu config EFI_EMBEDDED_FIRMWARE diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c index 5502e176d51b..2c055afb1b11 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -747,6 +747,7 @@ static __initdata char memory_type_name[][13] = { "MMIO Port", "PAL Code", "Persistent", + "Unaccepted", }; char * __init efi_md_typeattr_format(char *buf, size_t size, diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c index d18cac8ab436..e7601fd612aa 100644 --- a/drivers/firmware/efi/libstub/x86-stub.c +++ b/drivers/firmware/efi/libstub/x86-stub.c @@ -9,12 +9,14 @@ #include #include #include +#include #include #include #include #include #include +#include #include "efistub.h" @@ -504,6 +506,13 @@ setup_e820(struct boot_params *params, struct setup_data *e820ext, u32 e820ext_s e820_type = E820_TYPE_PMEM; break; + case EFI_UNACCEPTED_MEMORY: + if (!IS_ENABLED(CONFIG_UNACCEPTED_MEMORY)) + continue; + e820_type = E820_TYPE_RAM; + mark_unaccepted(params, d->phys_addr, + d->phys_addr + PAGE_SIZE * d->num_pages); + break; default: continue; } @@ -575,6 +584,9 @@ static efi_status_t allocate_e820(struct boot_params *params, { efi_status_t status; __u32 nr_desc; + bool unaccepted_memory_present = false; + u64 max_addr = 0; + int i; status = efi_get_memory_map(map); if (status != EFI_SUCCESS) @@ -589,9 +601,57 @@ static efi_status_t allocate_e820(struct boot_params *params, if (status != EFI_SUCCESS) goto out; } + + if (!IS_ENABLED(CONFIG_UNACCEPTED_MEMORY)) + goto out; + + /* Check if there's any unaccepted memory and find the max address */ + for (i = 0; i < nr_desc; i++) { + efi_memory_desc_t *d; + + d = efi_early_memdesc_ptr(*map->map, *map->desc_size, i); + if (d->type == EFI_UNACCEPTED_MEMORY) + unaccepted_memory_present = true; + if (d->phys_addr + d->num_pages * PAGE_SIZE > max_addr) + max_addr = d->phys_addr + d->num_pages * PAGE_SIZE; + } + + /* + * If unaccepted memory is present allocate a bitmap to track what + * memory has to be accepted before access. + * + * One bit in the bitmap represents 2MiB in the address space: + * A 4k bitmap can track 64GiB of physical address space. + * + * In the worst case scenario -- a huge hole in the middle of the + * address space -- It needs 256MiB to handle 4PiB of the address + * space. + * + * TODO: handle situation if params->unaccepted_memory has already set. + * It's required to deal with kexec. + * + * The bitmap will be populated in setup_e820() according to the memory + * map after efi_exit_boot_services(). 
+ */ + if (unaccepted_memory_present) { + unsigned long *unaccepted_memory = NULL; + u64 size = DIV_ROUND_UP(max_addr, PMD_SIZE * BITS_PER_BYTE); + + status = efi_allocate_pages(size, + (unsigned long *)&unaccepted_memory, + ULONG_MAX); + if (status != EFI_SUCCESS) + goto out; + memset(unaccepted_memory, 0, size); + params->unaccepted_memory = (unsigned long)unaccepted_memory; + } else { + params->unaccepted_memory = 0; + } + out: efi_bs_call(free_pool, *map->map); - return EFI_SUCCESS; + return status; + } struct exit_boot_struct { diff --git a/include/linux/efi.h b/include/linux/efi.h index ccd4d3f91c98..b0240fdcaf5b 100644 --- a/include/linux/efi.h +++ b/include/linux/efi.h @@ -108,7 +108,8 @@ typedef struct { #define EFI_MEMORY_MAPPED_IO_PORT_SPACE 12 #define EFI_PAL_CODE 13 #define EFI_PERSISTENT_MEMORY 14 -#define EFI_MAX_MEMORY_TYPE 15 +#define EFI_UNACCEPTED_MEMORY 15 +#define EFI_MAX_MEMORY_TYPE 16 /* Attribute values: */ #define EFI_MEMORY_UC ((u64)0x0000000000000001ULL) /* uncached */ From patchwork Tue Apr 5 23:43:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. Shutemov" X-Patchwork-Id: 558383 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0C07DC433F5 for ; Wed, 6 Apr 2022 04:02:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347990AbiDFEEC (ORCPT ); Wed, 6 Apr 2022 00:04:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47724 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1850049AbiDFCrJ (ORCPT ); Tue, 5 Apr 2022 22:47:09 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1978B1E1112; Tue, 5 Apr 2022 16:48:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649202523; x=1680738523; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=vDtT5eCuwypFPjQBE9Qdn6flkyTeEBWCMmVpOI5Jnq4=; b=m1ioYfqC/0NhJbHqlCEEZ2b2jptaATEYJu/asjb/Ypjo+zBeye/LUtsE reIWK13X+PJp9/AaMvtO88/2b2pEIXgCSe6HjbukoLIAUTmzqfFdyp5Yq G8rmi9SU8z42f0j1e7jUJHyFpzgzzlj1JbPwHinyIBcm+jZumnMVOIvTQ /oTrzrDXYoBX3AvABZLc1Zhyioaa/vP15ONHkUH+PHNXcHqq6jaZAA0oU k3kQDBVHpm3TINCba/CqWpMdkm5rLwcHhlXCcEeSJayM1E1ldxFLfe8Ys k63oaM7D3hJeX8v3VKHFSnMNKOJFb/pLqbQ9PQIghaRl18XTos7LvG39V Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10308"; a="243035271" X-IronPort-AV: E=Sophos;i="5.90,238,1643702400"; d="scan'208";a="243035271" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Apr 2022 16:48:41 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,238,1643702400"; d="scan'208";a="608658173" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga008.fm.intel.com with ESMTP; 05 Apr 2022 16:48:35 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id B42034B3; Wed, 6 Apr 2022 02:43:47 +0300 (EEST) From: "Kirill A. 
Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv4 4/8] x86/boot/compressed: Handle unaccepted memory Date: Wed, 6 Apr 2022 02:43:39 +0300 Message-Id: <20220405234343.74045-5-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220405234343.74045-1-kirill.shutemov@linux.intel.com> References: <20220405234343.74045-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-efi@vger.kernel.org Firmware is responsible for accepting memory where compressed kernel image and initrd land. But kernel has to accept memory for decompression buffer: accept memory just before decompression starts. KASLR is allowed to use unaccepted memory for the output buffer. Signed-off-by: Kirill A. Shutemov --- arch/x86/boot/compressed/bitmap.c | 62 ++++++++++++++++++++ arch/x86/boot/compressed/kaslr.c | 14 ++++- arch/x86/boot/compressed/misc.c | 11 ++++ arch/x86/boot/compressed/unaccepted_memory.c | 14 +++++ arch/x86/include/asm/unaccepted_memory.h | 2 + 5 files changed, 101 insertions(+), 2 deletions(-) diff --git a/arch/x86/boot/compressed/bitmap.c b/arch/x86/boot/compressed/bitmap.c index bf58b259380a..ba2de61c0823 100644 --- a/arch/x86/boot/compressed/bitmap.c +++ b/arch/x86/boot/compressed/bitmap.c @@ -2,6 +2,48 @@ /* Taken from lib/string.c */ #include +#include +#include + +unsigned long _find_next_bit(const unsigned long *addr1, + const unsigned long *addr2, unsigned long nbits, + unsigned long start, unsigned long invert, unsigned long le) +{ + unsigned long tmp, mask; + + if (unlikely(start >= nbits)) + return nbits; + + tmp = addr1[start / BITS_PER_LONG]; + if (addr2) + tmp &= addr2[start / BITS_PER_LONG]; + tmp ^= invert; + + /* Handle 1st word. 
*/ + mask = BITMAP_FIRST_WORD_MASK(start); + if (le) + mask = swab(mask); + + tmp &= mask; + + start = round_down(start, BITS_PER_LONG); + + while (!tmp) { + start += BITS_PER_LONG; + if (start >= nbits) + return nbits; + + tmp = addr1[start / BITS_PER_LONG]; + if (addr2) + tmp &= addr2[start / BITS_PER_LONG]; + tmp ^= invert; + } + + if (le) + tmp = swab(tmp); + + return min(start + __ffs(tmp), nbits); +} void __bitmap_set(unsigned long *map, unsigned int start, int len) { @@ -22,3 +64,23 @@ void __bitmap_set(unsigned long *map, unsigned int start, int len) *p |= mask_to_set; } } + +void __bitmap_clear(unsigned long *map, unsigned int start, int len) +{ + unsigned long *p = map + BIT_WORD(start); + const unsigned int size = start + len; + int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG); + unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start); + + while (len - bits_to_clear >= 0) { + *p &= ~mask_to_clear; + len -= bits_to_clear; + bits_to_clear = BITS_PER_LONG; + mask_to_clear = ~0UL; + p++; + } + if (len) { + mask_to_clear &= BITMAP_LAST_WORD_MASK(size); + *p &= ~mask_to_clear; + } +} diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c index 411b268bc0a2..59db90626042 100644 --- a/arch/x86/boot/compressed/kaslr.c +++ b/arch/x86/boot/compressed/kaslr.c @@ -725,10 +725,20 @@ process_efi_entries(unsigned long minimum, unsigned long image_size) * but in practice there's firmware where using that memory leads * to crashes. * - * Only EFI_CONVENTIONAL_MEMORY is guaranteed to be free. + * Only EFI_CONVENTIONAL_MEMORY and EFI_UNACCEPTED_MEMORY (if + * supported) are guaranteed to be free. */ - if (md->type != EFI_CONVENTIONAL_MEMORY) + + switch (md->type) { + case EFI_CONVENTIONAL_MEMORY: + break; + case EFI_UNACCEPTED_MEMORY: + if (IS_ENABLED(CONFIG_UNACCEPTED_MEMORY)) + break; continue; + default: + continue; + } if (efi_soft_reserve_enabled() && (md->attribute & EFI_MEMORY_SP)) diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c index fa8969fad011..c1d9d71a6615 100644 --- a/arch/x86/boot/compressed/misc.c +++ b/arch/x86/boot/compressed/misc.c @@ -18,6 +18,7 @@ #include "../string.h" #include "../voffset.h" #include +#include /* * WARNING!! @@ -43,6 +44,9 @@ void *memmove(void *dest, const void *src, size_t n); #endif +#undef __pa +#define __pa(x) ((unsigned long)(x)) + /* * This is set up by the setup-routine at boot-time */ @@ -451,6 +455,13 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap, #endif debug_putstr("\nDecompressing Linux... "); + + if (IS_ENABLED(CONFIG_UNACCEPTED_MEMORY) && + boot_params->unaccepted_memory) { + debug_putstr("Accepting memory... 
"); + accept_memory(__pa(output), __pa(output) + needed_size); + } + __decompress(input_data, input_len, NULL, NULL, output, output_len, NULL, error); parse_elf(output); diff --git a/arch/x86/boot/compressed/unaccepted_memory.c b/arch/x86/boot/compressed/unaccepted_memory.c index d363acf59c08..3ebab63789bb 100644 --- a/arch/x86/boot/compressed/unaccepted_memory.c +++ b/arch/x86/boot/compressed/unaccepted_memory.c @@ -51,3 +51,17 @@ void mark_unaccepted(struct boot_params *params, u64 start, u64 end) bitmap_set((unsigned long *)params->unaccepted_memory, start / PMD_SIZE, (end - start) / PMD_SIZE); } + +void accept_memory(phys_addr_t start, phys_addr_t end) +{ + unsigned long *unaccepted_memory; + unsigned int rs, re; + + unaccepted_memory = (unsigned long *)boot_params->unaccepted_memory; + rs = start / PMD_SIZE; + for_each_set_bitrange_from(rs, re, unaccepted_memory, + DIV_ROUND_UP(end, PMD_SIZE)) { + __accept_memory(rs * PMD_SIZE, re * PMD_SIZE); + bitmap_clear(unaccepted_memory, rs, re - rs); + } +} diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h index cbc24040b853..f1f835d3cd78 100644 --- a/arch/x86/include/asm/unaccepted_memory.h +++ b/arch/x86/include/asm/unaccepted_memory.h @@ -9,4 +9,6 @@ struct boot_params; void mark_unaccepted(struct boot_params *params, u64 start, u64 num); +void accept_memory(phys_addr_t start, phys_addr_t end); + #endif From patchwork Tue Apr 5 23:43:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. Shutemov" X-Patchwork-Id: 558382 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D4592C433FE for ; Wed, 6 Apr 2022 04:02:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238750AbiDFEEa (ORCPT ); Wed, 6 Apr 2022 00:04:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45006 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1850053AbiDFCrJ (ORCPT ); Tue, 5 Apr 2022 22:47:09 -0400 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 881C5294A18; Tue, 5 Apr 2022 16:48:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649202530; x=1680738530; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/1nyzn7F+fmd6VP9tbO0wCXT9G8b1V0YmZnfceU20K8=; b=i7/cXMMo92lbsvtXoUsW6tBxxVWYyu8FXPuLa0T2pN8VBwLJcCCcrrJA iQZByeL80GAVM1/eFz5AyydWcv2Xu8BUtz7pezmvR0iTWpVlWixlROKD7 eT2f4iW7KrK8TW0bvD1F8QJJC1+XCMqdGweX7yyC9jz7oTjdpiX2ngrg+ mPp9Oajv19Rb4UCDDdTt0NF8EkinLdOtTntS9jkTVWScUhtCMPo4vcn8P IeYWwiQu4owsE37EaxDK/koFmYS+gHeB9f/1dY2RxyrqshPqPTp8zK77n Pnkt3791vr5744n5QIa63kM32TIbTaRcItMVSq7auICBZgzCkL6mbrVZw g==; X-IronPort-AV: E=McAfee;i="6200,9189,10308"; a="248417456" X-IronPort-AV: E=Sophos;i="5.90,238,1643702400"; d="scan'208";a="248417456" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Apr 2022 16:48:48 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,238,1643702400"; d="scan'208";a="588141278" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga001.jf.intel.com with ESMTP; 05 Apr 2022 16:48:41 -0700 Received: by 
black.fi.intel.com (Postfix, from userid 1000) id C1FE05AA;
	Wed, 6 Apr 2022 02:43:47 +0300 (EEST)
From: "Kirill A. Shutemov"
Subject: [PATCHv4 5/8] x86/mm: Reserve unaccepted memory bitmap
Date: Wed, 6 Apr 2022 02:43:40 +0300
Message-Id: <20220405234343.74045-6-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220405234343.74045-1-kirill.shutemov@linux.intel.com>

A given page of memory can only be accepted once. The kernel has a need
to accept memory both in the early decompression stage and during normal
runtime.

A bitmap is used to communicate the acceptance state of each page between
the decompression stage and normal runtime. This eliminates the
possibility of attempting to double-accept a page.

The bitmap is allocated in the EFI stub, the decompression stage updates
the state of pages used for the kernel and initrd, and then hands the
bitmap over to the main kernel image via boot_params.

In the runtime kernel, reserve the bitmap's memory to ensure nothing
overwrites it.

Signed-off-by: Kirill A. Shutemov
Acked-by: Mike Rapoport
---
 arch/x86/kernel/e820.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index f267205f2d5a..22d1fe48dcba 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -1316,6 +1316,16 @@ void __init e820__memblock_setup(void)
 	int i;
 	u64 end;

+	/* Mark unaccepted memory bitmap reserved */
+	if (boot_params.unaccepted_memory) {
+		unsigned long size;
+
+		/* One bit per 2MB */
+		size = DIV_ROUND_UP(e820__end_of_ram_pfn() * PAGE_SIZE,
+				    PMD_SIZE * BITS_PER_BYTE);
+		memblock_reserve(boot_params.unaccepted_memory, size);
+	}
+
 	/*
 	 * The bootstrap memblock region count maximum is 128 entries
 	 * (INIT_MEMBLOCK_REGIONS), but EFI might pass us more E820 entries

From patchwork Tue Apr 5 23:43:41 2022
X-Patchwork-Submitter: "Kirill A.
Shutemov"
X-Patchwork-Id: 558379
From: "Kirill A. Shutemov"
Subject: [PATCHv4 6/8] x86/mm: Provide helpers for unaccepted memory
Date: Wed, 6 Apr 2022 02:43:41 +0300
Message-Id: <20220405234343.74045-7-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220405234343.74045-1-kirill.shutemov@linux.intel.com>

Core-mm requires a few helpers to support unaccepted memory:

 - accept_memory() checks the range of addresses against the bitmap and
   accepts memory if needed.

 - memory_is_unaccepted() checks if anything within the range requires
   acceptance.

Signed-off-by: Kirill A.
Shutemov --- arch/x86/include/asm/page.h | 5 +++ arch/x86/include/asm/unaccepted_memory.h | 1 + arch/x86/mm/Makefile | 2 + arch/x86/mm/unaccepted_memory.c | 53 ++++++++++++++++++++++++ 4 files changed, 61 insertions(+) create mode 100644 arch/x86/mm/unaccepted_memory.c diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h index 9cc82f305f4b..9ae0064f97e5 100644 --- a/arch/x86/include/asm/page.h +++ b/arch/x86/include/asm/page.h @@ -19,6 +19,11 @@ struct page; #include + +#ifdef CONFIG_UNACCEPTED_MEMORY +#include +#endif + extern struct range pfn_mapped[]; extern int nr_pfn_mapped; diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h index f1f835d3cd78..a8d12ef1bda8 100644 --- a/arch/x86/include/asm/unaccepted_memory.h +++ b/arch/x86/include/asm/unaccepted_memory.h @@ -10,5 +10,6 @@ struct boot_params; void mark_unaccepted(struct boot_params *params, u64 start, u64 num); void accept_memory(phys_addr_t start, phys_addr_t end); +bool memory_is_unaccepted(phys_addr_t start, phys_addr_t end); #endif diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index fe3d3061fc11..e327f83e6bbf 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -60,3 +60,5 @@ obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_amd.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_identity.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_boot.o + +obj-$(CONFIG_UNACCEPTED_MEMORY) += unaccepted_memory.o diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c new file mode 100644 index 000000000000..3588a7cb954c --- /dev/null +++ b/arch/x86/mm/unaccepted_memory.c @@ -0,0 +1,53 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include +#include +#include +#include + +#include +#include +#include + +static DEFINE_SPINLOCK(unaccepted_memory_lock); + +void accept_memory(phys_addr_t start, phys_addr_t end) +{ + unsigned long *unaccepted_memory; + unsigned long flags; + unsigned int rs, re; + + if (!boot_params.unaccepted_memory) + return; + + unaccepted_memory = __va(boot_params.unaccepted_memory); + rs = start / PMD_SIZE; + + spin_lock_irqsave(&unaccepted_memory_lock, flags); + for_each_set_bitrange_from(rs, re, unaccepted_memory, + DIV_ROUND_UP(end, PMD_SIZE)) { + /* Platform-specific memory-acceptance call goes here */ + panic("Cannot accept memory"); + bitmap_clear(unaccepted_memory, rs, re - rs); + } + spin_unlock_irqrestore(&unaccepted_memory_lock, flags); +} + +bool memory_is_unaccepted(phys_addr_t start, phys_addr_t end) +{ + unsigned long *unaccepted_memory = __va(boot_params.unaccepted_memory); + unsigned long flags; + bool ret = false; + + spin_lock_irqsave(&unaccepted_memory_lock, flags); + while (start < end) { + if (test_bit(start / PMD_SIZE, unaccepted_memory)) { + ret = true; + break; + } + + start += PMD_SIZE; + } + spin_unlock_irqrestore(&unaccepted_memory_lock, flags); + + return ret; +} From patchwork Tue Apr 5 23:43:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov"
X-Patchwork-Id: 558990
From: "Kirill A. Shutemov"
Subject: [PATCHv4 7/8] x86/tdx: Unaccepted memory support
Date: Wed, 6 Apr 2022 02:43:42 +0300
Message-Id: <20220405234343.74045-8-kirill.shutemov@linux.intel.com>
In-Reply-To: <20220405234343.74045-1-kirill.shutemov@linux.intel.com>

All preparations are complete. Hook up TDX-specific code to accept
memory.

There are two tdx_accept_memory() implementations: one in the main kernel
and one in the decompressor.

The implementation in the core kernel uses tdx_enc_status_changed(). That
helper is not available in the decompressor, so a self-contained
implementation is added there instead.

Signed-off-by: Kirill A.
Shutemov --- arch/x86/Kconfig | 1 + arch/x86/boot/compressed/tdx.c | 41 ++++++++++++++++++++ arch/x86/boot/compressed/unaccepted_memory.c | 23 ++++++++++- arch/x86/coco/tdx/tdx.c | 29 +++++++++----- arch/x86/include/asm/shared/tdx.h | 20 ++++++++++ arch/x86/include/asm/tdx.h | 19 --------- arch/x86/mm/unaccepted_memory.c | 6 ++- 7 files changed, 109 insertions(+), 30 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 7021ec725dd3..e4c31dbea6d7 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -885,6 +885,7 @@ config INTEL_TDX_GUEST select ARCH_HAS_CC_PLATFORM select X86_MEM_ENCRYPT select X86_MCE + select UNACCEPTED_MEMORY help Support running as a guest under Intel TDX. Without this support, the guest kernel can not boot or run under TDX. diff --git a/arch/x86/boot/compressed/tdx.c b/arch/x86/boot/compressed/tdx.c index 918a7606f53c..a0bd1426d235 100644 --- a/arch/x86/boot/compressed/tdx.c +++ b/arch/x86/boot/compressed/tdx.c @@ -9,6 +9,7 @@ #include #include +#include /* Called from __tdx_hypercall() for unrecoverable failure */ void __tdx_hypercall_failed(void) @@ -75,3 +76,43 @@ void early_tdx_detect(void) pio_ops.f_outb = tdx_outb; pio_ops.f_outw = tdx_outw; } + +#define TDACCEPTPAGE 6 +#define TDVMCALL_MAP_GPA 0x10001 + +/* + * Wrapper for standard use of __tdx_hypercall with no output aside from + * return code. + */ +static inline u64 _tdx_hypercall(u64 fn, u64 r12, u64 r13, u64 r14, u64 r15) +{ + struct tdx_hypercall_args args = { + .r10 = TDX_HYPERCALL_STANDARD, + .r11 = fn, + .r12 = r12, + .r13 = r13, + .r14 = r14, + .r15 = r15, + }; + + return __tdx_hypercall(&args, 0); +} + +void tdx_accept_memory(phys_addr_t start, phys_addr_t end) +{ + int i; + + if (_tdx_hypercall(TDVMCALL_MAP_GPA, start, end - start, 0, 0)) + error("Cannot accept memory: MapGPA failed\n"); + + /* + * For shared->private conversion, accept the page using TDACCEPTPAGE + * TDX module call. + */ + for (i = 0; i < (end - start) / PAGE_SIZE; i++) { + if (__tdx_module_call(TDACCEPTPAGE, start + i * PAGE_SIZE, + 0, 0, 0, NULL)) { + error("Cannot accept memory: page accept failed\n"); + } + } +} diff --git a/arch/x86/boot/compressed/unaccepted_memory.c b/arch/x86/boot/compressed/unaccepted_memory.c index 3ebab63789bb..662ec32e3c42 100644 --- a/arch/x86/boot/compressed/unaccepted_memory.c +++ b/arch/x86/boot/compressed/unaccepted_memory.c @@ -1,12 +1,33 @@ // SPDX-License-Identifier: GPL-2.0-only +#include #include "error.h" #include "misc.h" +#include "tdx.h" + +static bool is_tdx_guest(void) +{ + static bool once; + static bool is_tdx; + + if (!once) { + u32 eax, sig[3]; + + cpuid_count(TDX_CPUID_LEAF_ID, 0, &eax, + &sig[0], &sig[2], &sig[1]); + is_tdx = !memcmp(TDX_IDENT, sig, sizeof(sig)); + } + + return is_tdx; +} static inline void __accept_memory(phys_addr_t start, phys_addr_t end) { /* Platform-specific memory-acceptance call goes here */ - error("Cannot accept memory"); + if (is_tdx_guest()) + tdx_accept_memory(start, end); + else + error("Cannot accept memory"); } void mark_unaccepted(struct boot_params *params, u64 start, u64 end) diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c index 03deb4d6920d..0fcb54e07797 100644 --- a/arch/x86/coco/tdx/tdx.c +++ b/arch/x86/coco/tdx/tdx.c @@ -606,16 +606,8 @@ static bool try_accept_one(phys_addr_t *start, unsigned long len, return true; } -/* - * Inform the VMM of the guest's intent for this physical page: shared with - * the VMM or private to the guest. The VMM is expected to change its mapping - * of the page in response. 
diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 03deb4d6920d..0fcb54e07797 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -606,16 +606,8 @@ static bool try_accept_one(phys_addr_t *start, unsigned long len,
 	return true;
 }

-/*
- * Inform the VMM of the guest's intent for this physical page: shared with
- * the VMM or private to the guest. The VMM is expected to change its mapping
- * of the page in response.
- */
-static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
+static bool __tdx_enc_status_changed(phys_addr_t start, phys_addr_t end, bool enc)
 {
-	phys_addr_t start = __pa(vaddr);
-	phys_addr_t end = __pa(vaddr + numpages * PAGE_SIZE);
-
 	if (!enc) {
 		/* Set the shared (decrypted) bits: */
 		start |= cc_mkdec(0);
@@ -660,6 +652,25 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
 	return true;
 }

+void tdx_accept_memory(phys_addr_t start, phys_addr_t end)
+{
+	if (!__tdx_enc_status_changed(start, end, true))
+		panic("Accepting memory failed\n");
+}
+
+/*
+ * Inform the VMM of the guest's intent for this physical page: shared with
+ * the VMM or private to the guest. The VMM is expected to change its mapping
+ * of the page in response.
+ */
+static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
+{
+	phys_addr_t start = __pa(vaddr);
+	phys_addr_t end = __pa(vaddr + numpages * PAGE_SIZE);
+
+	return __tdx_enc_status_changed(start, end, enc);
+}
+
 void __init tdx_early_init(void)
 {
 	u64 cc_mask;
diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
index e53f26228fbb..4eca6492b108 100644
--- a/arch/x86/include/asm/shared/tdx.h
+++ b/arch/x86/include/asm/shared/tdx.h
@@ -36,5 +36,25 @@ u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags);
 /* Called from __tdx_hypercall() for unrecoverable failure */
 void __tdx_hypercall_failed(void);

+/*
+ * Used in __tdx_module_call() to gather the output registers' values of the
+ * TDCALL instruction when requesting services from the TDX module. This is a
+ * software only structure and not part of the TDX module/VMM ABI
+ */
+struct tdx_module_output {
+	u64 rcx;
+	u64 rdx;
+	u64 r8;
+	u64 r9;
+	u64 r10;
+	u64 r11;
+};
+
+/* Used to communicate with the TDX module */
+u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
+		      struct tdx_module_output *out);
+
+void tdx_accept_memory(phys_addr_t start, phys_addr_t end);
+
 #endif /* !__ASSEMBLY__ */
 #endif /* _ASM_X86_SHARED_TDX_H */
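The <asm/shared/tdx.h> hunk above makes struct tdx_module_output and
__tdx_module_call() visible to both the kernel proper and the boot/compressed
code, which is what lets the decompressor issue the page-accept TDCALL
directly. A hedged sketch of a caller that could live in either environment
follows; the helper is hypothetical and not part of the patch, the leaf number
mirrors the TDACCEPTPAGE define added in arch/x86/boot/compressed/tdx.c, and
passing a NULL output struct follows the usage in that file.

#include <asm/shared/tdx.h>

/* Mirrors the define in the decompressor part of this patch. */
#define TDACCEPTPAGE	6

/* Hypothetical helper: accept a single 4K private page at 'gpa'. */
static inline bool accept_one_4k_page(u64 gpa)
{
        return __tdx_module_call(TDACCEPTPAGE, gpa, 0, 0, 0, NULL) == 0;
}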
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 020c81a7c729..d9106d3e89f8 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -20,21 +20,6 @@
 #ifndef __ASSEMBLY__

-/*
- * Used to gather the output registers values of the TDCALL and SEAMCALL
- * instructions when requesting services from the TDX module.
- *
- * This is a software only structure and not part of the TDX module/VMM ABI.
- */
-struct tdx_module_output {
-	u64 rcx;
-	u64 rdx;
-	u64 r8;
-	u64 r9;
-	u64 r10;
-	u64 r11;
-};
-
 /*
  * Used by the #VE exception handler to gather the #VE exception
  * info from the TDX module. This is a software only structure
@@ -55,10 +40,6 @@ struct ve_info {

 void __init tdx_early_init(void);

-/* Used to communicate with the TDX module */
-u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
-		      struct tdx_module_output *out);
-
 void tdx_get_ve_info(struct ve_info *ve);

 bool tdx_handle_virt_exception(struct pt_regs *regs, struct ve_info *ve);
diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c
index 3588a7cb954c..2f1c3c0375cd 100644
--- a/arch/x86/mm/unaccepted_memory.c
+++ b/arch/x86/mm/unaccepted_memory.c
@@ -6,6 +6,7 @@
 #include
 #include
+#include
 #include

 static DEFINE_SPINLOCK(unaccepted_memory_lock);
@@ -26,7 +27,10 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 	for_each_set_bitrange_from(rs, re, unaccepted_memory,
 				   DIV_ROUND_UP(end, PMD_SIZE)) {
 		/* Platform-specific memory-acceptance call goes here */
-		panic("Cannot accept memory");
+		if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
+			tdx_accept_memory(rs * PMD_SIZE, re * PMD_SIZE);
+		else
+			panic("Cannot accept memory");
 		bitmap_clear(unaccepted_memory, rs, re - rs);
 	}
 	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
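To make the unit conversion in accept_memory() above concrete: each bit of the
unaccepted_memory bitmap covers one PMD_SIZE chunk (2 MiB with 4 KiB pages), so
a set-bit range [rs, re) found by for_each_set_bitrange_from() maps to the
physical range [rs * PMD_SIZE, re * PMD_SIZE) handed to tdx_accept_memory().
A small stand-alone illustration with made-up values (not part of the patch):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PMD_SIZE	(512 * PAGE_SIZE)	/* 2 MiB with 4 KiB pages */

int main(void)
{
        /* e.g. bits 3 and 4 of the bitmap are set: rs = 3, re = 5 (exclusive) */
        unsigned long rs = 3, re = 5;

        /* Same conversion accept_memory() does before calling tdx_accept_memory() */
        printf("accept [%#lx, %#lx) = [%lu MiB, %lu MiB)\n",
               rs * PMD_SIZE, re * PMD_SIZE,
               rs * PMD_SIZE >> 20, re * PMD_SIZE >> 20);
        return 0;
}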
Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv4 8/8] mm/vmstat: Add counter for memory accepting Date: Wed, 6 Apr 2022 02:43:43 +0300 Message-Id: <20220405234343.74045-9-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220405234343.74045-1-kirill.shutemov@linux.intel.com> References: <20220405234343.74045-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-efi@vger.kernel.org The counter increased every time kernel accepts a memory region. The counter allows to see if memory acceptation is still ongoing and contributes to memory allocation latency. Signed-off-by: Kirill A. Shutemov --- arch/x86/mm/unaccepted_memory.c | 1 + include/linux/vm_event_item.h | 3 +++ mm/vmstat.c | 3 +++ 3 files changed, 7 insertions(+) diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c index 2f1c3c0375cd..7cfe0bd8d2be 100644 --- a/arch/x86/mm/unaccepted_memory.c +++ b/arch/x86/mm/unaccepted_memory.c @@ -32,6 +32,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end) else panic("Cannot accept memory"); bitmap_clear(unaccepted_memory, rs, re - rs); + count_vm_events(ACCEPT_MEMORY, PMD_SIZE / PAGE_SIZE); } spin_unlock_irqrestore(&unaccepted_memory_lock, flags); } diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h index 16a0a4fd000b..6a468164a2f9 100644 --- a/include/linux/vm_event_item.h +++ b/include/linux/vm_event_item.h @@ -136,6 +136,9 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT, #ifdef CONFIG_X86 DIRECT_MAP_LEVEL2_SPLIT, DIRECT_MAP_LEVEL3_SPLIT, +#endif +#ifdef CONFIG_UNACCEPTED_MEMORY + ACCEPT_MEMORY, #endif NR_VM_EVENT_ITEMS }; diff --git a/mm/vmstat.c b/mm/vmstat.c index b75b1a64b54c..4c9197f32406 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1397,6 +1397,9 @@ const char * const vmstat_text[] = { "direct_map_level2_splits", "direct_map_level3_splits", #endif +#ifdef CONFIG_UNACCEPTED_MEMORY + "accept_memory", +#endif #endif /* CONFIG_VM_EVENT_COUNTERS || CONFIG_MEMCG */ }; #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA || CONFIG_MEMCG */