From patchwork Thu Oct 29 16:18:59 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 311023
From: Mike Rapoport
To: Andrew Morton
Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov,
    Catalin Marinas, Christian Borntraeger, Christoph Lameter,
    "David S. Miller", Dave Hansen, David Hildenbrand, David Rientjes,
    "Edgecombe, Rick P", "H. Peter Anvin", Heiko Carstens, Ingo Molnar,
    Joonsoo Kim, "Kirill A. Shutemov", Len Brown, Michael Ellerman,
    Mike Rapoport, Mike Rapoport, Palmer Dabbelt, Paul Mackerras,
    Paul Walmsley, Pavel Machek, Pekka Enberg, Peter Zijlstra,
Wysocki" , Thomas Gleixner , Vasily Gorbik , Will Deacon , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v2 1/4] mm: introduce debug_pagealloc_map_pages() helper Date: Thu, 29 Oct 2020 18:18:59 +0200 Message-Id: <20201029161902.19272-2-rppt@kernel.org> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201029161902.19272-1-rppt@kernel.org> References: <20201029161902.19272-1-rppt@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org From: Mike Rapoport When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel direct mapping after free_pages(). The pages than need to be mapped back before they could be used. Theese mapping operations use __kernel_map_pages() guarded with with debug_pagealloc_enabled(). The only place that calls __kernel_map_pages() without checking whether DEBUG_PAGEALLOC is enabled is the hibernation code that presumes availability of this function when ARCH_HAS_SET_DIRECT_MAP is set. Still, on arm64, __kernel_map_pages() will bail out when DEBUG_PAGEALLOC is not enabled but set_direct_map_invalid_noflush() may render some pages not present in the direct map and hibernation code won't be able to save such pages. To make page allocation debugging and hibernation interaction more robust, the dependency on DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP has to be made more explicit. Start with combining the guard condition and the call to __kernel_map_pages() into a single debug_pagealloc_map_pages() function to emphasize that __kernel_map_pages() should not be called without DEBUG_PAGEALLOC and use this new function to map/unmap pages when page allocation debug is enabled. Signed-off-by: Mike Rapoport Reviewed-by: David Hildenbrand --- include/linux/mm.h | 10 ++++++++++ mm/memory_hotplug.c | 3 +-- mm/page_alloc.c | 6 ++---- mm/slab.c | 8 +++----- 4 files changed, 16 insertions(+), 11 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index ef360fe70aaf..1fc0609056dc 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2936,12 +2936,22 @@ kernel_map_pages(struct page *page, int numpages, int enable) { __kernel_map_pages(page, numpages, enable); } + +static inline void debug_pagealloc_map_pages(struct page *page, + int numpages, int enable) +{ + if (debug_pagealloc_enabled_static()) + __kernel_map_pages(page, numpages, enable); +} + #ifdef CONFIG_HIBERNATION extern bool kernel_page_present(struct page *page); #endif /* CONFIG_HIBERNATION */ #else /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */ static inline void kernel_map_pages(struct page *page, int numpages, int enable) {} +static inline void debug_pagealloc_map_pages(struct page *page, + int numpages, int enable) {} #ifdef CONFIG_HIBERNATION static inline bool kernel_page_present(struct page *page) { return true; } #endif /* CONFIG_HIBERNATION */ diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index b44d4c7ba73b..e2b6043a4428 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -614,8 +614,7 @@ void generic_online_page(struct page *page, unsigned int order) * so we should map it first. This is better than introducing a special * case in page freeing fast path. 
 include/linux/mm.h  | 10 ++++++++++
 mm/memory_hotplug.c |  3 +--
 mm/page_alloc.c     |  6 ++----
 mm/slab.c           |  8 +++-----
 4 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef360fe70aaf..1fc0609056dc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2936,12 +2936,22 @@ kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	__kernel_map_pages(page, numpages, enable);
 }
+
+static inline void debug_pagealloc_map_pages(struct page *page,
+					     int numpages, int enable)
+{
+	if (debug_pagealloc_enabled_static())
+		__kernel_map_pages(page, numpages, enable);
+}
+
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif /* CONFIG_HIBERNATION */
 #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 static inline void kernel_map_pages(struct page *page, int numpages,
 				    int enable) {}
+static inline void debug_pagealloc_map_pages(struct page *page,
+					     int numpages, int enable) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif /* CONFIG_HIBERNATION */
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index b44d4c7ba73b..e2b6043a4428 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -614,8 +614,7 @@ void generic_online_page(struct page *page, unsigned int order)
 	 * so we should map it first. This is better than introducing a special
 	 * case in page freeing fast path.
 	 */
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 1);
+	debug_pagealloc_map_pages(page, 1 << order, 1);
 	__free_pages_core(page, order);
 	totalram_pages_add(1UL << order);
 #ifdef CONFIG_HIGHMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23f5066bd4a5..9a66a1ff9193 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1272,8 +1272,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 */
 	arch_free_page(page, order);
 
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 0);
+	debug_pagealloc_map_pages(page, 1 << order, 0);
 
 	kasan_free_nondeferred_pages(page, order);
 
@@ -2270,8 +2269,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 1);
+	debug_pagealloc_map_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 	kernel_poison_pages(page, 1 << order, 1);
 	set_page_owner(page, order, gfp_flags);
diff --git a/mm/slab.c b/mm/slab.c
index b1113561b98b..340db0ce74c4 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1431,10 +1431,8 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
 {
-	if (!is_debug_pagealloc_cache(cachep))
-		return;
-
-	kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
+	debug_pagealloc_map_pages(virt_to_page(objp),
+				  cachep->size / PAGE_SIZE, map);
 }
 
 #else
@@ -2062,7 +2060,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 
 #if DEBUG
 	/*
-	 * If we're going to use the generic kernel_map_pages()
+	 * If we're going to use the generic debug_pagealloc_map_pages()
 	 * poisoning, then it's going to smash the contents of
 	 * the redzone and userword anyhow, so switch them off.
 	 */

From patchwork Thu Oct 29 16:19:01 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 311022
From: Mike Rapoport
To: Andrew Morton
Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov,
    Catalin Marinas, Christian Borntraeger, Christoph Lameter,
    "David S. Miller", Dave Hansen, David Hildenbrand, David Rientjes,
    "Edgecombe, Rick P", "H. Peter Anvin", Heiko Carstens, Ingo Molnar,
    Joonsoo Kim, "Kirill A. Shutemov", Len Brown, Michael Ellerman,
    Mike Rapoport, Mike Rapoport, Palmer Dabbelt, Paul Mackerras,
    Paul Walmsley, Pavel Machek, Pekka Enberg, Peter Zijlstra,
Wysocki" , Thomas Gleixner , Vasily Gorbik , Will Deacon , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v2 3/4] arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC Date: Thu, 29 Oct 2020 18:19:01 +0200 Message-Id: <20201029161902.19272-4-rppt@kernel.org> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201029161902.19272-1-rppt@kernel.org> References: <20201029161902.19272-1-rppt@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org From: Mike Rapoport The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption is wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef DEBUG_PAGEALLOC and some refuse to map/unmap pages when page allocation debugging is disabled at runtime. As all the users of __kernel_map_pages() were converted to use debug_pagealloc_map_pages() it is safe to make it available only when DEBUG_PAGEALLOC is set. Signed-off-by: Mike Rapoport --- arch/Kconfig | 3 +++ arch/arm64/Kconfig | 4 +--- arch/arm64/mm/pageattr.c | 6 ++++-- arch/powerpc/Kconfig | 5 +---- arch/riscv/Kconfig | 4 +--- arch/riscv/include/asm/pgtable.h | 2 -- arch/riscv/mm/pageattr.c | 2 ++ arch/s390/Kconfig | 4 +--- arch/sparc/Kconfig | 4 +--- arch/x86/Kconfig | 4 +--- arch/x86/mm/pat/set_memory.c | 2 ++ include/linux/mm.h | 10 +++++++--- 12 files changed, 24 insertions(+), 26 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index 56b6ccc0e32d..56d4752b6db6 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -1028,6 +1028,9 @@ config HAVE_STATIC_CALL_INLINE bool depends on HAVE_STATIC_CALL +config ARCH_SUPPORTS_DEBUG_PAGEALLOC + bool + source "kernel/gcov/Kconfig" source "scripts/gcc-plugins/Kconfig" diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index f858c352f72a..5a01dfb77b93 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -71,6 +71,7 @@ config ARM64 select ARCH_USE_QUEUED_RWLOCKS select ARCH_USE_QUEUED_SPINLOCKS select ARCH_USE_SYM_ANNOTATIONS + select ARCH_SUPPORTS_DEBUG_PAGEALLOC select ARCH_SUPPORTS_MEMORY_FAILURE select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK select ARCH_SUPPORTS_ATOMIC_RMW @@ -1005,9 +1006,6 @@ config HOLES_IN_ZONE source "kernel/Kconfig.hz" -config ARCH_SUPPORTS_DEBUG_PAGEALLOC - def_bool y - config ARCH_SPARSEMEM_ENABLE def_bool y select SPARSEMEM_VMEMMAP_ENABLE diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c index 1b94f5b82654..18613d8834db 100644 --- a/arch/arm64/mm/pageattr.c +++ b/arch/arm64/mm/pageattr.c @@ -178,13 +178,15 @@ int set_direct_map_default_noflush(struct page *page) PAGE_SIZE, change_page_range, &data); } +#ifdef CONFIG_DEBUG_PAGEALLOC void __kernel_map_pages(struct page *page, int numpages, int enable) { - if (!debug_pagealloc_enabled() && !rodata_full) + if (!rodata_full) return; set_memory_valid((unsigned long)page_address(page), numpages, enable); } +#endif /* CONFIG_DEBUG_PAGEALLOC */ /* * This function is used to determine if a linear map page has been marked as @@ -204,7 +206,7 @@ bool kernel_page_present(struct page *page) pte_t *ptep; unsigned long addr = (unsigned long)page_address(page); - if (!debug_pagealloc_enabled() && !rodata_full) + if (!rodata_full) return true; pgdp = 
 arch/Kconfig                     |  3 +++
 arch/arm64/Kconfig               |  4 +---
 arch/arm64/mm/pageattr.c         |  6 ++++--
 arch/powerpc/Kconfig             |  5 +----
 arch/riscv/Kconfig               |  4 +---
 arch/riscv/include/asm/pgtable.h |  2 --
 arch/riscv/mm/pageattr.c         |  2 ++
 arch/s390/Kconfig                |  4 +---
 arch/sparc/Kconfig               |  4 +---
 arch/x86/Kconfig                 |  4 +---
 arch/x86/mm/pat/set_memory.c     |  2 ++
 include/linux/mm.h               | 10 +++++++---
 12 files changed, 24 insertions(+), 26 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 56b6ccc0e32d..56d4752b6db6 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1028,6 +1028,9 @@ config HAVE_STATIC_CALL_INLINE
 	bool
 	depends on HAVE_STATIC_CALL
 
+config ARCH_SUPPORTS_DEBUG_PAGEALLOC
+	bool
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f858c352f72a..5a01dfb77b93 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -71,6 +71,7 @@ config ARM64
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_USE_SYM_ANNOTATIONS
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select ARCH_SUPPORTS_MEMORY_FAILURE
 	select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
 	select ARCH_SUPPORTS_ATOMIC_RMW
@@ -1005,9 +1006,6 @@ config HOLES_IN_ZONE
 
 source "kernel/Kconfig.hz"
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config ARCH_SPARSEMEM_ENABLE
 	def_bool y
 	select SPARSEMEM_VMEMMAP_ENABLE
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 1b94f5b82654..18613d8834db 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -178,13 +178,15 @@ int set_direct_map_default_noflush(struct page *page)
 				   PAGE_SIZE, change_page_range, &data);
 }
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!rodata_full)
 		return;
 
 	set_memory_valid((unsigned long)page_address(page), numpages, enable);
 }
+#endif /* CONFIG_DEBUG_PAGEALLOC */
 
 /*
  * This function is used to determine if a linear map page has been marked as
@@ -204,7 +206,7 @@ bool kernel_page_present(struct page *page)
 	pte_t *ptep;
 	unsigned long addr = (unsigned long)page_address(page);
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!rodata_full)
 		return true;
 
 	pgdp = pgd_offset_k(addr);
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e9f13fe08492..ad8a83f3ddca 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -146,6 +146,7 @@ config PPC
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select ARCH_OPTIONAL_KERNEL_RWX		if ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC	if PPC32 || PPC_BOOK3S_64
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF		if PPC64
 	select ARCH_USE_QUEUED_RWLOCKS		if PPC_QUEUED_SPINLOCKS
@@ -355,10 +356,6 @@ config PPC_OF_PLATFORM_PCI
 	depends on PCI
 	depends on PPC64 # not supported on 32 bits yet
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	depends on PPC32 || PPC_BOOK3S_64
-	def_bool y
-
 config ARCH_SUPPORTS_UPROBES
 	def_bool y
 
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 44377fd7860e..9283c6f9ae2a 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -14,6 +14,7 @@ config RISCV
 	def_bool y
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
 	select ARCH_HAS_BINFMT_FLAT
 	select ARCH_HAS_DEBUG_VM_PGTABLE
 	select ARCH_HAS_DEBUG_VIRTUAL if MMU
@@ -153,9 +154,6 @@ config ARCH_SELECT_MEMORY_MODEL
 config ARCH_WANT_GENERAL_HUGETLB
 	def_bool y
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config SYS_SUPPORTS_HUGETLBFS
 	depends on MMU
 	def_bool y
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 183f1f4b2ae6..41a72861987c 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -461,8 +461,6 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
 #define VMALLOC_START 0
 #define VMALLOC_END   TASK_SIZE
 
-static inline void __kernel_map_pages(struct page *page, int numpages, int enable) {}
-
 #endif /* !CONFIG_MMU */
 
 #define kern_addr_valid(addr)   (1) /* FIXME */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 19fecb362d81..321b09d2e2ea 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -184,6 +184,7 @@ int set_direct_map_default_noflush(struct page *page)
 	return ret;
 }
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (!debug_pagealloc_enabled())
@@ -196,3 +197,4 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 	__set_memory((unsigned long)page_address(page), numpages,
 		     __pgprot(0), __pgprot(_PAGE_PRESENT));
 }
+#endif
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 4a2a12be04c9..991a850a6c0b 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -35,9 +35,6 @@ config GENERIC_LOCKBREAK
 
 config PGSTE
 	def_bool y if KVM
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config AUDIT_ARCH
 	def_bool y
@@ -106,6 +103,7 @@ config S390
 	select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
 	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index a6ca135442f9..2c729b8d097a 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -88,6 +88,7 @@ config SPARC64
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_ARCH_AUDITSYSCALL
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select HAVE_NMI
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select ARCH_USE_QUEUED_RWLOCKS
@@ -148,9 +149,6 @@ config GENERIC_ISA_DMA
 	bool
 	default y if SPARC32
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y if SPARC64
-
 config PGTABLE_LEVELS
 	default 4 if 64BIT
 	default 3
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f6946b81f74a..0db3fb1da70c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config X86
 	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ACPI
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_QUEUED_RWLOCKS
@@ -329,9 +330,6 @@ config ZONE_DMA32
 config AUDIT_ARCH
 	def_bool y if X86_64
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 40baa90e74f4..7f248fc45317 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2194,6 +2194,7 @@ int set_direct_map_default_noflush(struct page *page)
 	return __set_pages_p(page, 1);
 }
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
@@ -2225,6 +2226,7 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 
 	arch_flush_lazy_mmu_mode();
 }
+#endif /* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef CONFIG_HIBERNATION
 bool kernel_page_present(struct page *page)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 14e397f3752c..ab0ef6bd351d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2924,7 +2924,11 @@ static inline bool debug_pagealloc_enabled_static(void)
 	return static_branch_unlikely(&_debug_pagealloc_enabled);
 }
 
-#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
+#ifdef CONFIG_DEBUG_PAGEALLOC
+/*
+ * To support DEBUG_PAGEALLOC architecture must ensure that
+ * __kernel_map_pages() never fails
+ */
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
 
 static inline void debug_pagealloc_map_pages(struct page *page,
@@ -2937,13 +2941,13 @@ static inline void debug_pagealloc_map_pages(struct page *page,
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif /* CONFIG_HIBERNATION */
-#else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+#else	/* CONFIG_DEBUG_PAGEALLOC */
 static inline void debug_pagealloc_map_pages(struct page *page,
 					     int numpages, int enable) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif /* CONFIG_HIBERNATION */
-#endif /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+#endif /* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
 extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);