From patchwork Tue Dec 28 13:26:02 2021
X-Patchwork-Submitter: Zhen Lei
X-Patchwork-Id: 529376
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    Dave Young, Baoquan He, Vivek Goyal, Eric Biederman, Catalin Marinas,
    Will Deacon, Rob Herring, Frank Rowand, Jonathan Corbet
Cc: Zhen Lei, Randy Dunlap, Feng Zhou, Kefeng Wang, Chen Zhou, John Donnelly
Subject: [PATCH v19 03/13] kdump: make parse_crashkernel_{high|low}() static
Date: Tue, 28 Dec 2021 21:26:02 +0800
Message-ID: <20211228132612.1860-4-thunder.leizhen@huawei.com>
In-Reply-To: <20211228132612.1860-1-thunder.leizhen@huawei.com>
References: <20211228132612.1860-1-thunder.leizhen@huawei.com>
X-Mailing-List: devicetree@vger.kernel.org

Make parse_crashkernel_{high|low}() static: they are now only referenced
by parse_crashkernel_high_low() in the same file, and the latter is the
recommended entry point.
Signed-off-by: Zhen Lei
---
 include/linux/crash_core.h | 4 ----
 kernel/crash_core.c        | 4 ++--
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index 2d3a64761d18998..598fd55d83c169e 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -79,10 +79,6 @@ void final_note(Elf_Word *buf);
 int __init parse_crashkernel(char *cmdline, unsigned long long system_ram,
 		unsigned long long *crash_size, unsigned long long *crash_base);
-int parse_crashkernel_high(char *cmdline, unsigned long long system_ram,
-		unsigned long long *crash_size, unsigned long long *crash_base);
-int parse_crashkernel_low(char *cmdline, unsigned long long system_ram,
-		unsigned long long *crash_size, unsigned long long *crash_base);
 int __init parse_crashkernel_high_low(char *cmdline,
 				      unsigned long long *high_size,
 				      unsigned long long *low_size);

diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 8966beaf7c4fd52..3b9e01fc450b2a4 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -277,7 +277,7 @@ int __init parse_crashkernel(char *cmdline,
 					"crashkernel=", NULL);
 }
 
-int __init parse_crashkernel_high(char *cmdline,
+static int __init parse_crashkernel_high(char *cmdline,
 			     unsigned long long system_ram,
 			     unsigned long long *crash_size,
 			     unsigned long long *crash_base)
@@ -286,7 +286,7 @@ int __init parse_crashkernel_high(char *cmdline,
 				"crashkernel=", suffix_tbl[SUFFIX_HIGH]);
 }
 
-int __init parse_crashkernel_low(char *cmdline,
+static int __init parse_crashkernel_low(char *cmdline,
 			     unsigned long long system_ram,
 			     unsigned long long *crash_size,
 			     unsigned long long *crash_base)
Peter Anvin" , , Dave Young , Baoquan He , Vivek Goyal , Eric Biederman , , Catalin Marinas , "Will Deacon" , , Rob Herring , Frank Rowand , , Jonathan Corbet , CC: Zhen Lei , Randy Dunlap , Feng Zhou , Kefeng Wang , Chen Zhou , "John Donnelly" Subject: [PATCH v19 04/13] kdump: reduce unnecessary parameters of parse_crashkernel_{high|low}() Date: Tue, 28 Dec 2021 21:26:03 +0800 Message-ID: <20211228132612.1860-5-thunder.leizhen@huawei.com> X-Mailer: git-send-email 2.26.0.windows.1 In-Reply-To: <20211228132612.1860-1-thunder.leizhen@huawei.com> References: <20211228132612.1860-1-thunder.leizhen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.174.178.55] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To dggpemm500006.china.huawei.com (7.185.36.236) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Delete confusing parameters 'system_ram' and 'crash_base' of parse_crashkernel_{high|low}(), they are only needed by the case of "crashkernel=X@[offset]". Signed-off-by: Zhen Lei --- kernel/crash_core.c | 21 ++++++++++----------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/kernel/crash_core.c b/kernel/crash_core.c index 3b9e01fc450b2a4..b7d024eb464d0ae 100644 --- a/kernel/crash_core.c +++ b/kernel/crash_core.c @@ -278,20 +278,20 @@ int __init parse_crashkernel(char *cmdline, } static int __init parse_crashkernel_high(char *cmdline, - unsigned long long system_ram, - unsigned long long *crash_size, - unsigned long long *crash_base) + unsigned long long *crash_size) { - return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base, + unsigned long long base; + + return __parse_crashkernel(cmdline, 0, crash_size, &base, "crashkernel=", suffix_tbl[SUFFIX_HIGH]); } static int __init parse_crashkernel_low(char *cmdline, - unsigned long long system_ram, - unsigned long long *crash_size, - unsigned long long *crash_base) + unsigned long long *crash_size) { - return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base, + unsigned long long base; + + return __parse_crashkernel(cmdline, 0, crash_size, &base, "crashkernel=", suffix_tbl[SUFFIX_LOW]); } @@ -310,12 +310,11 @@ int __init parse_crashkernel_high_low(char *cmdline, unsigned long long *low_size) { int ret; - unsigned long long base; BUG_ON(!high_size || !low_size); /* crashkernel=X,high */ - ret = parse_crashkernel_high(cmdline, 0, high_size, &base); + ret = parse_crashkernel_high(cmdline, high_size); if (ret) return ret; @@ -323,7 +322,7 @@ int __init parse_crashkernel_high_low(char *cmdline, return -EINVAL; /* crashkernel=Y,low */ - ret = parse_crashkernel_low(cmdline, 0, low_size, &base); + ret = parse_crashkernel_low(cmdline, low_size); if (ret) *low_size = -1; From patchwork Tue Dec 28 13:26:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhen Lei X-Patchwork-Id: 529374 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 86EEAC43217 for ; Tue, 28 Dec 2021 13:29:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233405AbhL1N3J (ORCPT ); Tue, 28 Dec 2021 08:29:09 -0500 Received: from szxga02-in.huawei.com ([45.249.212.188]:29302 "EHLO szxga02-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232767AbhL1N3E (ORCPT ); Tue, 28 Dec 2021 
From patchwork Tue Dec 28 13:26:06 2021
X-Patchwork-Submitter: Zhen Lei
X-Patchwork-Id: 529374
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    Dave Young, Baoquan He, Vivek Goyal, Eric Biederman, Catalin Marinas,
    Will Deacon, Rob Herring, Frank Rowand, Jonathan Corbet
Cc: Zhen Lei, Randy Dunlap, Feng Zhou, Kefeng Wang, Chen Zhou, John Donnelly
Subject: [PATCH v19 07/13] kdump: Add helper reserve_crashkernel_mem[_low]()
Date: Tue, 28 Dec 2021 21:26:06 +0800
Message-ID: <20211228132612.1860-8-thunder.leizhen@huawei.com>
In-Reply-To: <20211228132612.1860-1-thunder.leizhen@huawei.com>
References: <20211228132612.1860-1-thunder.leizhen@huawei.com>
X-Mailing-List: devicetree@vger.kernel.org

Add the helpers reserve_crashkernel_mem[_low]() to reserve high and low
memory for the crash kernel. Their implementation is based on
reserve_crashkernel[_low]() in arch/x86/kernel/setup.c, with the
following adaptations:

1. To avoid compilation problems on other architectures, provide default
   values for the macros CRASH[_BASE]_ALIGN and CRASH_ADDR_{LOW|HIGH}_MAX,
   and add the new macro CRASH_LOW_SIZE_MIN.
2. Only the code that reserves crash memory is extracted from
   reserve_crashkernel(); calls such as parse_crashkernel() and
   insert_resource() are left out.
3. Change "return;" in reserve_crashkernel() to "return -ENOMEM;".
4. Call reserve_crashkernel_mem_low() instead of reserve_crashkernel_low().
5. Change CONFIG_X86_64 to CONFIG_64BIT.
Signed-off-by: Zhen Lei
---
 include/linux/crash_core.h |   6 ++
 kernel/crash_core.c        | 154 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 159 insertions(+), 1 deletion(-)

diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index f5437c9c9411fce..2e19632f8d45a60 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -86,5 +86,11 @@ int __init parse_crashkernel(char *cmdline, unsigned long long system_ram,
 int __init parse_crashkernel_high_low(char *cmdline,
 				      unsigned long long *high_size,
 				      unsigned long long *low_size);
+int __init reserve_crashkernel_mem_low(unsigned long long low_size);
+int __init reserve_crashkernel_mem(unsigned long long system_ram,
+				   unsigned long long crash_size,
+				   unsigned long long crash_base,
+				   unsigned long long low_size,
+				   bool high);
 
 #endif /* LINUX_CRASH_CORE_H */

diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 686d8a65e12a337..4bd30098534a184 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -5,7 +5,9 @@
  */
 
 #include
-#include
+#include
+#include
+#include
 #include
 #include
@@ -345,6 +347,156 @@ int __init parse_crashkernel_high_low(char *cmdline,
 	return 0;
 }
 
+/* alignment for crash kernel dynamic regions */
+#ifndef CRASH_ALIGN
+#define CRASH_ALIGN		SZ_2M
+#endif
+
+/* alignment for crash kernel fixed region */
+#ifndef CRASH_BASE_ALIGN
+#define CRASH_BASE_ALIGN	SZ_2M
+#endif
+
+/* upper bound for crash low memory */
+#ifndef CRASH_ADDR_LOW_MAX
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+#define CRASH_ADDR_LOW_MAX	SZ_4G
+#else
+#define CRASH_ADDR_LOW_MAX	MEMBLOCK_ALLOC_ACCESSIBLE
+#endif
+#endif
+
+/* upper bound for crash high memory */
+#ifndef CRASH_ADDR_HIGH_MAX
+#define CRASH_ADDR_HIGH_MAX	MEMBLOCK_ALLOC_ACCESSIBLE
+#endif
+
+#ifdef CONFIG_SWIOTLB
+/*
+ * two parts from kernel/dma/swiotlb.c:
+ * -swiotlb size: user-specified with swiotlb= or default.
+ *
+ * -swiotlb overflow buffer: now hardcoded to 32k. We round it
+ * to 8M for other buffers that may need to stay low too. Also
+ * make sure we allocate enough extra low memory so that we
+ * don't run out of DMA buffers for 32-bit devices.
+ */
+#define CRASH_LOW_SIZE_MIN	max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20)
+#else
+#define CRASH_LOW_SIZE_MIN	(256UL << 20)
+#endif
+
+/**
+ * reserve_crashkernel_mem_low - Reserve crash kernel low memory.
+ *
+ * @low_size:	The memory size specified by "crashkernel=Y,low" or "-1"
+ *		if it's not specified.
+ *
+ * Returns 0 on success, else a negative status code.
+ */
+int __init reserve_crashkernel_mem_low(unsigned long long low_size)
+{
+#ifdef CONFIG_64BIT
+	unsigned long long low_base = 0;
+	unsigned long low_mem_limit;
+
+	low_mem_limit = min(memblock_phys_mem_size(), CRASH_ADDR_LOW_MAX);
+
+	/* crashkernel=Y,low is not specified */
+	if ((long)low_size < 0) {
+		low_size = CRASH_LOW_SIZE_MIN;
+	} else {
+		/* passed with crashkernel=0,low ? */
+		if (!low_size)
+			return 0;
+	}
+
+	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, 0, CRASH_ADDR_LOW_MAX);
+	if (!low_base) {
+		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
+		       (unsigned long)(low_size >> 20));
+		return -ENOMEM;
+	}
+
+	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (low RAM limit: %ldMB)\n",
+		(unsigned long)(low_size >> 20),
+		(unsigned long)(low_base >> 20),
+		(unsigned long)(low_mem_limit >> 20));
+
+	crashk_low_res.start = low_base;
+	crashk_low_res.end = low_base + low_size - 1;
+#endif
+	return 0;
+}
+
+/**
+ * reserve_crashkernel_mem - Reserve crash kernel memory.
+ *
+ * @system_ram:	Total system memory size.
+ * @crash_size:	The memory size specified by "crashkernel=X[@offset]" or
+ *		"crashkernel=X,high".
+ * @crash_base:	The base address specified by "crashkernel=X@offset"
+ * @low_size:	The memory size specified by "crashkernel=Y,low" or "-1"
+ *		if it's not specified.
+ * @high:	Whether "crashkernel=X,high" is specified.
+ *
+ * Returns 0 on success, else a negative status code.
+ */
+int __init reserve_crashkernel_mem(unsigned long long system_ram,
+				   unsigned long long crash_size,
+				   unsigned long long crash_base,
+				   unsigned long long low_size,
+				   bool high)
+{
+	/* 0 means: find the address automatically */
+	if (!crash_base) {
+		/*
+		 * Set CRASH_ADDR_LOW_MAX upper bound for crash memory,
+		 * crashkernel=x,high reserves memory over 4G, also allocates
+		 * 256M extra low memory for DMA buffers and swiotlb.
+		 * But the extra memory is not required for all machines.
+		 * So try low memory first and fall back to high memory
+		 * unless "crashkernel=size[KMG],high" is specified.
+		 */
+		if (!high)
+			crash_base = memblock_phys_alloc_range(crash_size,
+						CRASH_ALIGN, CRASH_ALIGN,
+						CRASH_ADDR_LOW_MAX);
+		if (!crash_base)
+			crash_base = memblock_phys_alloc_range(crash_size,
+						CRASH_ALIGN, CRASH_ALIGN,
+						CRASH_ADDR_HIGH_MAX);
+		if (!crash_base) {
+			pr_info("crashkernel reservation failed - No suitable area found.\n");
+			return -ENOMEM;
+		}
+	} else {
+		unsigned long long start;
+
+		start = memblock_phys_alloc_range(crash_size, CRASH_BASE_ALIGN, crash_base,
+						  crash_base + crash_size);
+		if (start != crash_base) {
+			pr_info("crashkernel reservation failed - memory is in use.\n");
+			return -ENOMEM;
+		}
+	}
+
+	if (crash_base >= (1ULL << 32) && reserve_crashkernel_mem_low(low_size)) {
+		memblock_phys_free(crash_base, crash_size);
+		return -ENOMEM;
+	}
+
+	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
+		(unsigned long)(crash_size >> 20),
+		(unsigned long)(crash_base >> 20),
+		(unsigned long)(system_ram >> 20));
+
+	crashk_res.start = crash_base;
+	crashk_res.end = crash_base + crash_size - 1;
+
+	return 0;
+}
+
 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
 			  void *data, size_t data_len)
 {
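As a rough sketch of how the new helpers are meant to be combined (not from
this patch itself; it mirrors the x86 and arm64 conversions later in the
series, with error handling and resource registration simplified):

	static void __init reserve_crashkernel(void)
	{
		unsigned long long crash_size, crash_base, total_mem, low_size;
		bool high = false;
		int ret;

		total_mem = memblock_phys_mem_size();

		/* try crashkernel=X[@offset] first, then crashkernel=X,high / =Y,low */
		ret = parse_crashkernel(boot_command_line, total_mem, &crash_size, &crash_base);
		if (ret || !crash_size) {
			ret = parse_crashkernel_high_low(boot_command_line, &crash_size, &low_size);
			if (ret)
				return;
			high = true;
		}

		/* the generic helper fills crashk_res (and crashk_low_res if needed) */
		if (reserve_crashkernel_mem(total_mem, crash_size, crash_base, low_size, high))
			return;

		insert_resource(&iomem_resource, &crashk_res);
	}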
szxga02-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233070AbhL1N3G (ORCPT ); Tue, 28 Dec 2021 08:29:06 -0500 Received: from dggpemm500024.china.huawei.com (unknown [172.30.72.53]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4JNb2q6Xh9zbjhF; Tue, 28 Dec 2021 21:28:35 +0800 (CST) Received: from dggpemm500006.china.huawei.com (7.185.36.236) by dggpemm500024.china.huawei.com (7.185.36.203) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.20; Tue, 28 Dec 2021 21:29:04 +0800 Received: from thunder-town.china.huawei.com (10.174.178.55) by dggpemm500006.china.huawei.com (7.185.36.236) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.20; Tue, 28 Dec 2021 21:29:02 +0800 From: Zhen Lei To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , , "H . Peter Anvin" , , Dave Young , Baoquan He , Vivek Goyal , Eric Biederman , , Catalin Marinas , "Will Deacon" , , Rob Herring , Frank Rowand , , Jonathan Corbet , CC: Zhen Lei , Randy Dunlap , Feng Zhou , Kefeng Wang , Chen Zhou , "John Donnelly" Subject: [PATCH v19 09/13] x86/setup: Use generic reserve_crashkernel_mem[_low]() Date: Tue, 28 Dec 2021 21:26:08 +0800 Message-ID: <20211228132612.1860-10-thunder.leizhen@huawei.com> X-Mailer: git-send-email 2.26.0.windows.1 In-Reply-To: <20211228132612.1860-1-thunder.leizhen@huawei.com> References: <20211228132612.1860-1-thunder.leizhen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.174.178.55] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To dggpemm500006.china.huawei.com (7.185.36.236) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Use generic reserve_crashkernel_mem[_low]() to replace arch-specific reserve_crashkernel_low() and a partial implementation of reserve_crashkernel(). The only difference is that "insert_resource(&iomem_resource, &crashk_low_res);" is moved into reserve_crashkernel(), no functional change. Signed-off-by: Zhen Lei --- arch/x86/kernel/setup.c | 93 ++--------------------------------------- 1 file changed, 4 insertions(+), 89 deletions(-) diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index 22d63dbf5db0a58..ee2606b3b9da662 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -391,52 +391,6 @@ static void __init memblock_x86_reserve_range_setup_data(void) */ #ifdef CONFIG_KEXEC_CORE - -static int __init reserve_crashkernel_low(unsigned long long low_size) -{ -#ifdef CONFIG_X86_64 - unsigned long long low_base = 0; - unsigned long low_mem_limit; - - low_mem_limit = min(memblock_phys_mem_size(), CRASH_ADDR_LOW_MAX); - - /* crashkernel=Y,low is not specified */ - if ((long)low_size < 0) { - /* - * two parts from kernel/dma/swiotlb.c: - * -swiotlb size: user-specified with swiotlb= or default. - * - * -swiotlb overflow buffer: now hardcoded to 32k. We round it - * to 8M for other buffers that may need to stay low too. Also - * make sure we allocate enough extra low memory so that we - * don't run out of DMA buffers for 32-bit devices. - */ - low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20); - } else { - /* passed with crashkernel=0,low ? 
-		if (!low_size)
-			return 0;
-	}
-
-	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, 0, CRASH_ADDR_LOW_MAX);
-	if (!low_base) {
-		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
-		       (unsigned long)(low_size >> 20));
-		return -ENOMEM;
-	}
-
-	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (low RAM limit: %ldMB)\n",
-		(unsigned long)(low_size >> 20),
-		(unsigned long)(low_base >> 20),
-		(unsigned long)(low_mem_limit >> 20));
-
-	crashk_low_res.start = low_base;
-	crashk_low_res.end = low_base + low_size - 1;
-	insert_resource(&iomem_resource, &crashk_low_res);
-#endif
-	return 0;
-}
-
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_size, crash_base, total_mem, low_size;
@@ -460,51 +414,12 @@ static void __init reserve_crashkernel(void)
 		return;
 	}
 
-	/* 0 means: find the address automatically */
-	if (!crash_base) {
-		/*
-		 * Set CRASH_ADDR_LOW_MAX upper bound for crash memory,
-		 * crashkernel=x,high reserves memory over 4G, also allocates
-		 * 256M extra low memory for DMA buffers and swiotlb.
-		 * But the extra memory is not required for all machines.
-		 * So try low memory first and fall back to high memory
-		 * unless "crashkernel=size[KMG],high" is specified.
-		 */
-		if (!high)
-			crash_base = memblock_phys_alloc_range(crash_size,
-						CRASH_ALIGN, CRASH_ALIGN,
-						CRASH_ADDR_LOW_MAX);
-		if (!crash_base)
-			crash_base = memblock_phys_alloc_range(crash_size,
-						CRASH_ALIGN, CRASH_ALIGN,
-						CRASH_ADDR_HIGH_MAX);
-		if (!crash_base) {
-			pr_info("crashkernel reservation failed - No suitable area found.\n");
-			return;
-		}
-	} else {
-		unsigned long long start;
-
-		start = memblock_phys_alloc_range(crash_size, CRASH_BASE_ALIGN, crash_base,
-						  crash_base + crash_size);
-		if (start != crash_base) {
-			pr_info("crashkernel reservation failed - memory is in use.\n");
-			return;
-		}
-	}
-
-	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low(low_size)) {
-		memblock_phys_free(crash_base, crash_size);
+	ret = reserve_crashkernel_mem(total_mem, crash_size, crash_base, low_size, high);
+	if (ret)
 		return;
-	}
-
-	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
-		(unsigned long)(crash_size >> 20),
-		(unsigned long)(crash_base >> 20),
-		(unsigned long)(total_mem >> 20));
 
-	crashk_res.start = crash_base;
-	crashk_res.end = crash_base + crash_size - 1;
+	if (crashk_low_res.end > crashk_low_res.start)
+		insert_resource(&iomem_resource, &crashk_low_res);
 
 	insert_resource(&iomem_resource, &crashk_res);
 }
 #else
From patchwork Tue Dec 28 13:26:10 2021
X-Patchwork-Submitter: Zhen Lei
X-Patchwork-Id: 529372
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    Dave Young, Baoquan He, Vivek Goyal, Eric Biederman, Catalin Marinas,
    Will Deacon, Rob Herring, Frank Rowand, Jonathan Corbet
Cc: Zhen Lei, Randy Dunlap, Feng Zhou, Kefeng Wang, Chen Zhou, John Donnelly
Subject: [PATCH v19 11/13] arm64: kdump: reimplement crashkernel=X
Date: Tue, 28 Dec 2021 21:26:10 +0800
Message-ID: <20211228132612.1860-12-thunder.leizhen@huawei.com>
In-Reply-To: <20211228132612.1860-1-thunder.leizhen@huawei.com>
References: <20211228132612.1860-1-thunder.leizhen@huawei.com>
X-Mailing-List: devicetree@vger.kernel.org

From: Chen Zhou

There are the following issues in arm64 kdump:

1. crashkernel=X is used to reserve the crash kernel below 4G, which
   fails when there is not enough low memory.
2. If the crash kernel is instead reserved above 4G, the crash dump
   kernel fails to boot because it has no low memory available for
   allocation.

To solve these issues, change the behavior of crashkernel=X and
introduce crashkernel=X,[high,low]. crashkernel=X tries a low allocation
in the DMA zone and falls back to a high allocation if that fails.
"crashkernel=X,high" can be used to select a region above the DMA zone;
it also tries to allocate at least 256M in the DMA zone automatically.
"crashkernel=Y,low" can be used to allocate a low-memory region of the
specified size.

As a minor change, there may now be two regions reserved for the crash
dump kernel. To distinguish the low region from the high one without
affecting existing kexec-tools, it is renamed to "Crash kernel (low)".

Signed-off-by: Chen Zhou
Co-developed-by: Zhen Lei
Signed-off-by: Zhen Lei
---
 arch/arm64/kernel/machine_kexec.c      |  5 +++-
 arch/arm64/kernel/machine_kexec_file.c | 12 ++++++--
 arch/arm64/kernel/setup.c              | 13 +++++++-
 arch/arm64/mm/init.c                   | 41 ++++++++++----------------
 4 files changed, 42 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 6fb31c117ebe08c..6665bf31f6b6a19 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -327,7 +327,10 @@ bool crash_is_nosave(unsigned long pfn)
 
 	/* in reserved memory? */
 	addr = __pfn_to_phys(pfn);
-	if ((addr < crashk_res.start) || (crashk_res.end < addr))
+	if (((addr < crashk_res.start) || (crashk_res.end < addr)) && !crashk_low_res.end)
+		return false;
+
+	if ((addr < crashk_low_res.start) || (crashk_low_res.end < addr))
 		return false;
 
 	if (!kexec_crash_image)

diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
index 59c648d51848886..889951291cc0f9c 100644
--- a/arch/arm64/kernel/machine_kexec_file.c
+++ b/arch/arm64/kernel/machine_kexec_file.c
@@ -65,10 +65,18 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 
 	/* Exclude crashkernel region */
 	ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
+	if (ret)
+		goto out;
+
+	if (crashk_low_res.end) {
+		ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
+		if (ret)
+			goto out;
+	}
 
-	if (!ret)
-		ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
+	ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
 
+out:
 	kfree(cmem);
 	return ret;
 }

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index be5f85b0a24de69..4bb2e55366be64d 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -248,7 +248,18 @@ static void __init request_standard_resources(void)
 		    kernel_data.end <= res->end)
 			request_resource(res, &kernel_data);
 #ifdef CONFIG_KEXEC_CORE
-		/* Userspace will find "Crash kernel" region in /proc/iomem. */
+		/*
+		 * Userspace will find "Crash kernel" or "Crash kernel (low)"
+		 * region in /proc/iomem.
+		 * In order to distinct from the high region and make no effect
+		 * to the use of existing kexec-tools, rename the low region as
+		 * "Crash kernel (low)".
+		 */
+		if (crashk_low_res.end && crashk_low_res.start >= res->start &&
+		    crashk_low_res.end <= res->end) {
+			crashk_low_res.name = "Crash kernel (low)";
+			request_resource(res, &crashk_low_res);
+		}
 		if (crashk_res.end && crashk_res.start >= res->start &&
 		    crashk_res.end <= res->end)
 			request_resource(res, &crashk_res);

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index be4595dc7459115..91b8038a1529068 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -74,41 +74,32 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
  */
 static void __init reserve_crashkernel(void)
 {
-	unsigned long long crash_base, crash_size;
-	unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
+	unsigned long long crash_size, crash_base, total_mem, low_size;
+	bool high = false;
 	int ret;
 
-	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
-				&crash_size, &crash_base);
-	/* no crashkernel= or invalid value specified */
-	if (ret || !crash_size)
-		return;
-
-	crash_size = PAGE_ALIGN(crash_size);
-
-	/* User specifies base address explicitly. */
-	if (crash_base)
-		crash_max = crash_base + crash_size;
+	total_mem = memblock_phys_mem_size();
 
-	/* Current arm64 boot protocol requires 2MB alignment */
-	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
-					       crash_base, crash_max);
-	if (!crash_base) {
-		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
-			crash_size);
-		return;
+	ret = parse_crashkernel(boot_command_line, total_mem, &crash_size, &crash_base);
+	if (ret != 0 || crash_size <= 0) {
+		/* crashkernel=X,high and possible crashkernel=Y,low */
+		ret = parse_crashkernel_high_low(boot_command_line, &crash_size, &low_size);
+		if (ret)
+			return;
+		high = true;
 	}
 
-	pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
-		crash_base, crash_base + crash_size, crash_size >> 20);
+	ret = reserve_crashkernel_mem(total_mem, crash_size, crash_base, low_size, high);
+	if (ret)
+		return;
 
 	/*
 	 * The crashkernel memory will be removed from the kernel linear
 	 * map. Inform kmemleak so that it won't try to access it.
 	 */
-	kmemleak_ignore_phys(crash_base);
-	crashk_res.start = crash_base;
-	crashk_res.end = crash_base + crash_size - 1;
+	kmemleak_ignore_phys(crashk_res.start);
+	if (crashk_low_res.end)
+		kmemleak_ignore_phys(crashk_low_res.start);
 }
 #else
 static void __init reserve_crashkernel(void)
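For illustration only (the sizes below are arbitrary examples; the semantics
are the ones described in the commit message above), the resulting arm64
command-line forms are:

	crashkernel=512M
		try the DMA zone first, fall back to a high allocation if that fails
	crashkernel=2G,high
		reserve above the DMA zone; at least 256M is also reserved in the
		DMA zone automatically for DMA buffers and swiotlb
	crashkernel=2G,high crashkernel=256M,low
		as above, but with an explicitly sized low reservation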
From patchwork Tue Dec 28 13:26:11 2021
X-Patchwork-Submitter: Zhen Lei
X-Patchwork-Id: 529371
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    Dave Young, Baoquan He, Vivek Goyal, Eric Biederman, Catalin Marinas,
    Will Deacon, Rob Herring, Frank Rowand, Jonathan Corbet
Cc: Zhen Lei, Randy Dunlap, Feng Zhou, Kefeng Wang, Chen Zhou, John Donnelly
Subject: [PATCH v19 12/13] of: fdt: Add memory for devices by DT property "linux,usable-memory-range"
Date: Tue, 28 Dec 2021 21:26:11 +0800
Message-ID: <20211228132612.1860-13-thunder.leizhen@huawei.com>
In-Reply-To: <20211228132612.1860-1-thunder.leizhen@huawei.com>
References: <20211228132612.1860-1-thunder.leizhen@huawei.com>
X-Mailing-List: devicetree@vger.kernel.org
Peter Anvin" , , Dave Young , Baoquan He , Vivek Goyal , Eric Biederman , , Catalin Marinas , "Will Deacon" , , Rob Herring , Frank Rowand , , Jonathan Corbet , CC: Zhen Lei , Randy Dunlap , Feng Zhou , Kefeng Wang , Chen Zhou , "John Donnelly" Subject: [PATCH v19 12/13] of: fdt: Add memory for devices by DT property "linux,usable-memory-range" Date: Tue, 28 Dec 2021 21:26:11 +0800 Message-ID: <20211228132612.1860-13-thunder.leizhen@huawei.com> X-Mailer: git-send-email 2.26.0.windows.1 In-Reply-To: <20211228132612.1860-1-thunder.leizhen@huawei.com> References: <20211228132612.1860-1-thunder.leizhen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.174.178.55] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To dggpemm500006.china.huawei.com (7.185.36.236) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org From: Chen Zhou When reserving crashkernel in high memory, some low memory is reserved for crash dump kernel devices and never mapped by the first kernel. This memory range is advertised to crash dump kernel via DT property under /chosen, linux,usable-memory-range = We reused the DT property linux,usable-memory-range and made the low memory region as the second range "BASE2 SIZE2", which keeps compatibility with existing user-space and older kdump kernels. Crash dump kernel reads this property at boot time and call memblock_add() to add the low memory region after memblock_cap_memory_range() has been called. Signed-off-by: Chen Zhou Co-developed-by: Zhen Lei Signed-off-by: Zhen Lei Reviewed-by: Rob Herring Tested-by: Dave Kleikamp --- drivers/of/fdt.c | 33 +++++++++++++++++++++++---------- 1 file changed, 23 insertions(+), 10 deletions(-) diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c index 65af475dfa9508f..20e6281b2201ff5 100644 --- a/drivers/of/fdt.c +++ b/drivers/of/fdt.c @@ -967,16 +967,24 @@ static void __init early_init_dt_check_for_elfcorehdr(unsigned long node) static unsigned long chosen_node_offset = -FDT_ERR_NOTFOUND; +/* + * The main usage of linux,usable-memory-range is for crash dump kernel. + * Originally, the number of usable-memory regions is one. Now there may + * be two regions, low region and high region. + * To make compatibility with existing user-space and older kdump, the low + * region is always the last range of linux,usable-memory-range if exist. + */ +#define MAX_USABLE_RANGES 2 + /** * early_init_dt_check_for_usable_mem_range - Decode usable memory range * location from flat tree */ void __init early_init_dt_check_for_usable_mem_range(void) { - const __be32 *prop; - int len; - phys_addr_t cap_mem_addr; - phys_addr_t cap_mem_size; + struct memblock_region rgn[MAX_USABLE_RANGES] = {0}; + const __be32 *prop, *endp; + int len, i; unsigned long node = chosen_node_offset; if ((long)node < 0) @@ -985,16 +993,21 @@ void __init early_init_dt_check_for_usable_mem_range(void) pr_debug("Looking for usable-memory-range property... 
"); prop = of_get_flat_dt_prop(node, "linux,usable-memory-range", &len); - if (!prop || (len < (dt_root_addr_cells + dt_root_size_cells))) + if (!prop || (len % (dt_root_addr_cells + dt_root_size_cells))) return; - cap_mem_addr = dt_mem_next_cell(dt_root_addr_cells, &prop); - cap_mem_size = dt_mem_next_cell(dt_root_size_cells, &prop); + endp = prop + (len / sizeof(__be32)); + for (i = 0; i < MAX_USABLE_RANGES && prop < endp; i++) { + rgn[i].base = dt_mem_next_cell(dt_root_addr_cells, &prop); + rgn[i].size = dt_mem_next_cell(dt_root_size_cells, &prop); - pr_debug("cap_mem_start=%pa cap_mem_size=%pa\n", &cap_mem_addr, - &cap_mem_size); + pr_debug("cap_mem_regions[%d]: base=%pa, size=%pa\n", + i, &rgn[i].base, &rgn[i].size); + } - memblock_cap_memory_range(cap_mem_addr, cap_mem_size); + memblock_cap_memory_range(rgn[0].base, rgn[0].size); + for (i = 1; i < MAX_USABLE_RANGES && rgn[i].size; i++) + memblock_add(rgn[i].base, rgn[i].size); } #ifdef CONFIG_SERIAL_EARLYCON