From patchwork Tue Jun 14 09:21:52 2022
X-Patchwork-Id: 581773
From: Wupeng Ma
Subject: [PATCH v5 1/5] efi: arm64: Introduce ability to find mirrored memory ranges
Date: Tue, 14 Jun 2022 17:21:52 +0800
Message-ID: <20220614092156.1972846-2-mawupeng1@huawei.com>
In-Reply-To: <20220614092156.1972846-1-mawupeng1@huawei.com>
X-Mailing-List: linux-efi@vger.kernel.org

From: Ma Wupeng

Commit b05b9f5f9dcf ("x86, mirror: x86 enabling - find mirrored memory
ranges") introduced the efi_find_mirror() function on x86. In order to
reuse this API, make it public by moving it into the generic EFI code.

Arm64 can support mirrored memory too, so a call to efi_find_mirror()
is added to efi_init() to bring this support to arm64. Since efi_init()
is shared by ARM, arm64 and riscv, this patch brings mirrored memory
support to all of these architectures, although the support has only
been tested on arm64.
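For readers unfamiliar with the mechanism, the function being moved
(reproduced in full in the diff below) boils down to the loop sketched
here; the helper name find_mirror_sketch() and the omission of the size
accounting are purely illustrative:

/*
 * Condensed sketch of efi_find_mirror() -- see the diff below for the
 * exact code.  Firmware tags mirrored ranges in the EFI memory map
 * with the EFI_MEMORY_MORE_RELIABLE attribute; the kernel simply
 * translates that attribute into memblock's MEMBLOCK_MIRROR flag.
 */
static void __init find_mirror_sketch(void)
{
        efi_memory_desc_t *md;

        if (!efi_enabled(EFI_MEMMAP))   /* no EFI memory map, nothing to scan */
                return;

        for_each_efi_memory_desc(md) {
                u64 size = md->num_pages << EFI_PAGE_SHIFT;

                if (md->attribute & EFI_MEMORY_MORE_RELIABLE)
                        memblock_mark_mirror(md->phys_addr, size);
        }
}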
Signed-off-by: Ma Wupeng
---
 arch/x86/include/asm/efi.h      |  4 ----
 arch/x86/platform/efi/efi.c     | 23 -----------------------
 drivers/firmware/efi/efi-init.c |  1 +
 drivers/firmware/efi/efi.c      | 23 +++++++++++++++++++++++
 include/linux/efi.h             |  3 +++
 5 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
index 71943dce691e..eb90206eae80 100644
--- a/arch/x86/include/asm/efi.h
+++ b/arch/x86/include/asm/efi.h
@@ -383,7 +383,6 @@ static inline bool efi_is_64bit(void)
 extern bool efi_reboot_required(void);
 extern bool efi_is_table_address(unsigned long phys_addr);
 
-extern void efi_find_mirror(void);
 extern void efi_reserve_boot_services(void);
 #else
 static inline void parse_efi_setup(u64 phys_addr, u32 data_len) {}
@@ -395,9 +394,6 @@ static inline bool efi_is_table_address(unsigned long phys_addr)
 {
         return false;
 }
-static inline void efi_find_mirror(void)
-{
-}
 static inline void efi_reserve_boot_services(void)
 {
 }
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index 1591d67e0bcd..6e598bd78eef 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -108,29 +108,6 @@ static int __init setup_add_efi_memmap(char *arg)
 }
 early_param("add_efi_memmap", setup_add_efi_memmap);
 
-void __init efi_find_mirror(void)
-{
-        efi_memory_desc_t *md;
-        u64 mirror_size = 0, total_size = 0;
-
-        if (!efi_enabled(EFI_MEMMAP))
-                return;
-
-        for_each_efi_memory_desc(md) {
-                unsigned long long start = md->phys_addr;
-                unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
-
-                total_size += size;
-                if (md->attribute & EFI_MEMORY_MORE_RELIABLE) {
-                        memblock_mark_mirror(start, size);
-                        mirror_size += size;
-                }
-        }
-        if (mirror_size)
-                pr_info("Memory: %lldM/%lldM mirrored memory\n",
-                        mirror_size>>20, total_size>>20);
-}
-
 /*
  * Tell the kernel about the EFI memory map. This might include
  * more than the max 128 entries that can fit in the passed in e820
diff --git a/drivers/firmware/efi/efi-init.c b/drivers/firmware/efi/efi-init.c
index b2c829e95bd1..8e3a0d14dac5 100644
--- a/drivers/firmware/efi/efi-init.c
+++ b/drivers/firmware/efi/efi-init.c
@@ -241,6 +241,7 @@ void __init efi_init(void)
          */
         early_init_dt_check_for_usable_mem_range();
+        efi_find_mirror();
         efi_esrt_init();
         efi_mokvar_table_init();
 
         memblock_reserve(data.phys_map & PAGE_MASK,
diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index 860534bcfdac..79c232e07de7 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -446,6 +446,29 @@ static int __init efisubsys_init(void)
 
 subsys_initcall(efisubsys_init);
 
+void __init efi_find_mirror(void)
+{
+        efi_memory_desc_t *md;
+        u64 mirror_size = 0, total_size = 0;
+
+        if (!efi_enabled(EFI_MEMMAP))
+                return;
+
+        for_each_efi_memory_desc(md) {
+                unsigned long long start = md->phys_addr;
+                unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
+
+                total_size += size;
+                if (md->attribute & EFI_MEMORY_MORE_RELIABLE) {
+                        memblock_mark_mirror(start, size);
+                        mirror_size += size;
+                }
+        }
+        if (mirror_size)
+                pr_info("Memory: %lldM/%lldM mirrored memory\n",
+                        mirror_size>>20, total_size>>20);
+}
+
 /*
  * Find the efi memory descriptor for a given physical address.  Given a
  * physical address, determine if it exists within an EFI Memory Map entry,
diff --git a/include/linux/efi.h b/include/linux/efi.h
index 7d9b0bb47eb3..53f64c14a525 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -872,6 +872,7 @@ static inline bool efi_rt_services_supported(unsigned int mask)
 {
         return (efi.runtime_supported_mask & mask) == mask;
 }
+extern void efi_find_mirror(void);
 #else
 static inline bool efi_enabled(int feature)
 {
@@ -889,6 +890,8 @@ static inline bool efi_rt_services_supported(unsigned int mask)
 {
         return false;
 }
+
+static inline void efi_find_mirror(void) {}
 #endif
 
 extern int efi_status_to_err(efi_status_t status);

From patchwork Tue Jun 14 09:21:53 2022
X-Patchwork-Id: 581771
From: Wupeng Ma
Cc: Mike Rapoport
Subject: [PATCH v5 2/5] mm: Ratelimited mirrored memory related warning messages
Date: Tue, 14 Jun 2022 17:21:53 +0800
Message-ID: <20220614092156.1972846-3-mawupeng1@huawei.com>
In-Reply-To: <20220614092156.1972846-1-mawupeng1@huawei.com>
X-Mailing-List: linux-efi@vger.kernel.org

From: Ma Wupeng

If the system has mirrored memory, memblock will try to allocate
mirrored memory first and fall back to non-mirrored memory when that
fails. However, with only limited mirrored memory, or with some NUMA
nodes that have no mirrored memory at all, a large number of warning
messages about memblock allocation failures will be printed.

Ratelimit the warning messages to avoid an excessively long print
during bootup.
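To make the retry semantics concrete, the path touched by this patch
behaves roughly as sketched below; the helper name is illustrative and
the sketch collapses the two mm/memblock.c hunks below into a single
function:

/*
 * Sketch of the mirrored-allocation fallback (see the mm/memblock.c
 * hunks below).  When an allocation restricted to MEMBLOCK_MIRROR
 * fails, the flag is dropped and the allocation is retried from
 * ordinary memory; the warning about that fallback is now ratelimited
 * so repeated fallbacks cannot flood the boot log.
 */
static phys_addr_t __init mirror_fallback_sketch(phys_addr_t size,
                                                 phys_addr_t align)
{
        enum memblock_flags flags = choose_memblock_flags();
        phys_addr_t found;

again:
        found = memblock_find_in_range_node(size, align, 0,
                                            MEMBLOCK_ALLOC_ACCESSIBLE,
                                            NUMA_NO_NODE, flags);
        if (!found && (flags & MEMBLOCK_MIRROR)) {
                pr_warn_ratelimited("Could not allocate %pap bytes of mirrored memory\n",
                                    &size);
                flags &= ~MEMBLOCK_MIRROR;      /* retry without the mirror requirement */
                goto again;
        }
        return found;
}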
Signed-off-by: Ma Wupeng
Reviewed-by: David Hildenbrand
Acked-by: Mike Rapoport
Reviewed-by: Anshuman Khandual
Reviewed-by: Kefeng Wang
---
 mm/memblock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index e4f03a6e8e56..b1d2a0009733 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -327,7 +327,7 @@ static phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
                                             NUMA_NO_NODE, flags);
 
         if (!ret && (flags & MEMBLOCK_MIRROR)) {
-                pr_warn("Could not allocate %pap bytes of mirrored memory\n",
+                pr_warn_ratelimited("Could not allocate %pap bytes of mirrored memory\n",
                         &size);
                 flags &= ~MEMBLOCK_MIRROR;
                 goto again;
@@ -1384,7 +1384,7 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 
         if (flags & MEMBLOCK_MIRROR) {
                 flags &= ~MEMBLOCK_MIRROR;
-                pr_warn("Could not allocate %pap bytes of mirrored memory\n",
+                pr_warn_ratelimited("Could not allocate %pap bytes of mirrored memory\n",
                         &size);
                 goto again;
         }

From patchwork Tue Jun 14 09:21:54 2022
X-Patchwork-Id: 582361
From: Wupeng Ma
Subject: [PATCH v5 3/5] mm: Limit warning message in vmemmap_verify() to once
Date: Tue, 14 Jun 2022 17:21:54 +0800
Message-ID: <20220614092156.1972846-4-mawupeng1@huawei.com>
In-Reply-To: <20220614092156.1972846-1-mawupeng1@huawei.com>
X-Mailing-List: linux-efi@vger.kernel.org

From: Ma Wupeng

On a system with only limited mirrored memory, or with some NUMA nodes
that have no mirrored memory at all, the per-node vmemmap page structs
prefer to be allocated from a mirrored region, which causes
vmemmap_verify() in vmemmap_populate_basepages() to print lots of
warning messages.

Change the "potential offnode page_structs" warning message to be
printed only once, to avoid a very long print during bootup.

Signed-off-by: Ma Wupeng
Acked-by: David Hildenbrand
---
 mm/sparse-vmemmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index f4fa61dbbee3..f34c6889b0a6 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -528,7 +528,7 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
         int actual_node = early_pfn_to_nid(pfn);
 
         if (node_distance(actual_node, node) > LOCAL_DISTANCE)
-                pr_warn("[%lx-%lx] potential offnode page_structs\n",
+                pr_warn_once("[%lx-%lx] potential offnode page_structs\n",
                         start, end - 1);
 }

From patchwork Tue Jun 14 09:21:55 2022
X-Patchwork-Id: 581772
From: Wupeng Ma
Subject: [PATCH v5 4/5] arm64: mm: Only remove nomap flag for initrd
Date: Tue, 14 Jun 2022 17:21:55 +0800
Message-ID: <20220614092156.1972846-5-mawupeng1@huawei.com>
In-Reply-To: <20220614092156.1972846-1-mawupeng1@huawei.com>
X-Mailing-List: linux-efi@vger.kernel.org

From: Ma Wupeng

Commit 177e15f0c144 ("arm64: add the initrd region to the linear
mapping explicitly") removes all the flags of the memory used by the
initrd. This was fine, since MEMBLOCK_MIRROR was not used on arm64.
However, with the mirror feature introduced to arm64, this clears the
mirrored flag of the memory used by the initrd, which leads to an error
being logged by find_zone_movable_pfns_for_nodes() if the lower 4G
range contains some non-mirrored memory.

To solve this problem, only remove the MEMBLOCK_NOMAP flag, via
memblock_clear_nomap().

Signed-off-by: Ma Wupeng
Reviewed-by: Ard Biesheuvel
Acked-by: Catalin Marinas
---
 arch/arm64/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 339ee84e5a61..8456dbae9441 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -350,8 +350,8 @@ void __init arm64_memblock_init(void)
                          "initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
                         phys_initrd_size = 0;
                 } else {
-                        memblock_remove(base, size); /* clear MEMBLOCK_ flags */
                         memblock_add(base, size);
+                        memblock_clear_nomap(base, size);
                         memblock_reserve(base, size);
                 }
         }

From patchwork Tue Jun 14 09:21:56 2022
X-Patchwork-Id: 582359
From: Wupeng Ma
Subject: [PATCH v5 5/5] memblock: Disable mirror feature if kernelcore is not specified
Date: Tue, 14 Jun 2022 17:21:56 +0800
Message-ID: <20220614092156.1972846-6-mawupeng1@huawei.com>
In-Reply-To: <20220614092156.1972846-1-mawupeng1@huawei.com>
X-Mailing-List: linux-efi@vger.kernel.org

From: Ma Wupeng

If the system has some mirrored memory but the mirror feature is not
specified in the boot parameters, the basic mirror feature will still
be enabled, which leads to the following situations:
- memblock memory allocation prefers the mirrored region. This may have
  some unexpected influence on NUMA affinity.
- contiguous memory will be split into several parts if parts of it are
  marked as mirrored memory via memblock_mark_mirror().

To fix this, check the variable mirrored_kernelcore in
memblock_mark_mirror(): mark mirrored memory with the MEMBLOCK_MIRROR
flag only if kernelcore=mirror is added to the kernel parameters.

Signed-off-by: Ma Wupeng
Acked-by: Ard Biesheuvel
---
 mm/internal.h   | 2 ++
 mm/memblock.c   | 3 +++
 mm/page_alloc.c | 2 +-
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index c0f8fbe0445b..ddd2d6a46f1b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -861,4 +861,6 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags);
 
 DECLARE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
 
+extern bool mirrored_kernelcore;
+
 #endif /* __MM_INTERNAL_H */
diff --git a/mm/memblock.c b/mm/memblock.c
index b1d2a0009733..a9f18b988b7f 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -924,6 +924,9 @@ int __init_memblock memblock_clear_hotplug(phys_addr_t base, phys_addr_t size)
  */
 int __init_memblock memblock_mark_mirror(phys_addr_t base, phys_addr_t size)
 {
+        if (!mirrored_kernelcore)
+                return 0;
+
         system_has_some_mirror = true;
 
         return memblock_setclr_flag(base, size, 1, MEMBLOCK_MIRROR);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e008a3df0485..10dc35ec7479 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -356,7 +356,7 @@ static unsigned long required_kernelcore_percent __initdata;
 static unsigned long required_movablecore __initdata;
 static unsigned long required_movablecore_percent __initdata;
 static unsigned long zone_movable_pfn[MAX_NUMNODES] __initdata;
-static bool mirrored_kernelcore __meminitdata;
+bool mirrored_kernelcore __initdata_memblock;
 
 /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */
 int movable_zone;
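Taken together with patch 1 of this series, the end-to-end behaviour
can be summarised by the following sketch (illustrative only; the exact
code is in the hunks above): efi_find_mirror() reports every
EFI_MEMORY_MORE_RELIABLE range to memblock_mark_mirror(), and the check
added here turns that call into a no-op unless the administrator has
opted in on the kernel command line.

/*
 * Illustrative summary of the gating added above: without
 * "kernelcore=mirror" on the command line, mirrored_kernelcore stays
 * false and firmware-reported mirrored ranges are left unmarked, so
 * memblock behaves exactly as if the platform had no mirrored memory.
 */
int __init_memblock memblock_mark_mirror(phys_addr_t base, phys_addr_t size)
{
        if (!mirrored_kernelcore)       /* no kernelcore=mirror -> do nothing */
                return 0;

        system_has_some_mirror = true;

        return memblock_setclr_flag(base, size, 1, MEMBLOCK_MIRROR);
}

Booting with kernelcore=mirror therefore remains the single opt-in for
the whole mirrored-memory path, as stated in the commit message above.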