
[v4,0/4] arch, mm: improve robustness of direct map manipulation

Message ID 20201103162057.22916-1-rppt@kernel.org

Message

Mike Rapoport Nov. 3, 2020, 4:20 p.m. UTC
From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

During a recent discussion about KVM protected memory, David raised a concern
about the usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].

Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible that __kernel_map_pages() will fail, but since this function
returns void, the failure will go unnoticed.

Moreover, there is a lack of consistency in __kernel_map_pages() semantics
across architectures: some guard this function with
#ifdef DEBUG_PAGEALLOC, some refuse to update the direct map if page
allocation debugging is disabled at run time, and some allow modifying the
direct map regardless of DEBUG_PAGEALLOC settings.
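
Schematically, the pre-series per-architecture variants differ along these
lines (an illustrative sketch of the three behaviors, not verbatim kernel
code; the function bodies are placeholders):

	/* Variant 1: the function only exists when the debug option is on */
	#ifdef CONFIG_DEBUG_PAGEALLOC
	void __kernel_map_pages(struct page *page, int numpages, int enable)
	{
		/* update the direct map */
	}
	#endif

	/*
	 * Variant 2: always built, but a run-time nop unless page allocation
	 * debugging was enabled on the command line
	 */
	void __kernel_map_pages(struct page *page, int numpages, int enable)
	{
		if (!debug_pagealloc_enabled())
			return;
		/* update the direct map */
	}

	/* Variant 3: updates the direct map regardless of DEBUG_PAGEALLOC */
	void __kernel_map_pages(struct page *page, int numpages, int enable)
	{
		/* update the direct map unconditionally */
	}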

This set straightens this out by restoring the dependency of
__kernel_map_pages() on DEBUG_PAGEALLOC and updating the call sites
accordingly.

Since currently the only user of __kernel_map_pages() outside
DEBUG_PAGEALLOC is hibernation, it is updated to make direct map accesses
there more explicit.
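
On the hibernation side this boils down to a helper along the following
lines (a sketch based on this thread; the hibernate_map_page() name and
exact body are illustrative, while set_direct_map_*_noflush(),
debug_pagealloc_map_pages() and the pr_warn_once() mentioned in the v4
changes below come from the series itself):

	static inline void hibernate_map_page(struct page *page, int enable)
	{
		if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
			unsigned long addr = (unsigned long)page_address(page);
			int ret;

			/*
			 * Remapping here only toggles protection bits in an
			 * existing PTE, so it is not expected to fail; warn
			 * once if it ever does.
			 */
			if (enable)
				ret = set_direct_map_default_noflush(page);
			else
				ret = set_direct_map_invalid_noflush(page);

			if (ret) {
				pr_warn_once("Failed to remap page\n");
				return;
			}

			flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
		} else {
			debug_pagealloc_map_pages(page, 1, enable);
		}
	}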

[1] https://lore.kernel.org/lkml/2759b4bf-e1e3-d006-7d86-78a40348269d@redhat.com

v4 changes:
* s/WARN_ON/pr_warn_once/ per David and Kirill
* rebase on v5.10-rc2
* add Acked/Reviewed tags

v3 changes:
* update arm64 changes to avoid regression, per Rick's comments
* fix bisectability
https://lore.kernel.org/lkml/20201101170815.9795-1-rppt@kernel.org

v2 changes:
* Rephrase patch 2 changelog to better describe the change intentions and
implications
* Move removal of kernel_map_pages() from patch 1 to patch 2, per David
https://lore.kernel.org/lkml/20201029161902.19272-1-rppt@kernel.org

v1:
https://lore.kernel.org/lkml/20201025101555.3057-1-rppt@kernel.org

Mike Rapoport (4):
  mm: introduce debug_pagealloc_map_pages() helper
  PM: hibernate: make direct map manipulations more explicit
  arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC
  arch, mm: make kernel_page_present() always available

 arch/Kconfig                        |  3 +++
 arch/arm64/Kconfig                  |  4 +---
 arch/arm64/include/asm/cacheflush.h |  1 +
 arch/arm64/mm/pageattr.c            |  6 +++--
 arch/powerpc/Kconfig                |  5 +----
 arch/riscv/Kconfig                  |  4 +---
 arch/riscv/include/asm/pgtable.h    |  2 --
 arch/riscv/include/asm/set_memory.h |  1 +
 arch/riscv/mm/pageattr.c            | 31 +++++++++++++++++++++++++
 arch/s390/Kconfig                   |  4 +---
 arch/sparc/Kconfig                  |  4 +---
 arch/x86/Kconfig                    |  4 +---
 arch/x86/include/asm/set_memory.h   |  1 +
 arch/x86/mm/pat/set_memory.c        |  4 ++--
 include/linux/mm.h                  | 35 +++++++++++++----------------
 include/linux/set_memory.h          |  5 +++++
 kernel/power/snapshot.c             | 30 +++++++++++++++++++++++--
 mm/memory_hotplug.c                 |  3 +--
 mm/page_alloc.c                     |  6 ++---
 mm/slab.c                           |  8 +++----
 20 files changed, 103 insertions(+), 58 deletions(-)

Comments

Vlastimil Babka Nov. 4, 2020, 5:35 p.m. UTC | #1
On 11/3/20 5:20 PM, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> When CONFIG_DEBUG_PAGEALLOC is enabled, the kernel unmaps pages from the
> direct mapping after free_pages(). The pages then need to be mapped back
> before they can be used. These mapping operations use
> __kernel_map_pages() guarded with debug_pagealloc_enabled().
> 
> The only place that calls __kernel_map_pages() without checking whether
> DEBUG_PAGEALLOC is enabled is the hibernation code that presumes
> availability of this function when ARCH_HAS_SET_DIRECT_MAP is set.
> Still, on arm64, __kernel_map_pages() will bail out when DEBUG_PAGEALLOC is
> not enabled but set_direct_map_invalid_noflush() may render some pages not
> present in the direct map and hibernation code won't be able to save such
> pages.
> 
> To make page allocation debugging and hibernation interaction more robust,
> the dependency on DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP has to be made
> more explicit.
> 
> Start with combining the guard condition and the call to
> __kernel_map_pages() into a single debug_pagealloc_map_pages() function to
> emphasize that __kernel_map_pages() should not be called without
> DEBUG_PAGEALLOC and use this new function to map/unmap pages when page
> allocation debug is enabled.
> 
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

But, the "enable" param is hideous. I would rather have map and unmap variants 
(and just did the same split for page poisoning) and this seems to be a good 
opportunity. If David didn't propose it already, I'm surprised ;)

> ---
>   include/linux/mm.h  | 10 ++++++++++
>   mm/memory_hotplug.c |  3 +--
>   mm/page_alloc.c     |  6 ++----
>   mm/slab.c           |  8 +++-----
>   4 files changed, 16 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ef360fe70aaf..1fc0609056dc 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2936,12 +2936,22 @@ kernel_map_pages(struct page *page, int numpages, int enable)
>   {
>   	__kernel_map_pages(page, numpages, enable);
>   }
> +
> +static inline void debug_pagealloc_map_pages(struct page *page,
> +					     int numpages, int enable)
> +{
> +	if (debug_pagealloc_enabled_static())
> +		__kernel_map_pages(page, numpages, enable);
> +}
> +
>   #ifdef CONFIG_HIBERNATION
>   extern bool kernel_page_present(struct page *page);
>   #endif	/* CONFIG_HIBERNATION */
>   #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
>   static inline void
>   kernel_map_pages(struct page *page, int numpages, int enable) {}
> +static inline void debug_pagealloc_map_pages(struct page *page,
> +					     int numpages, int enable) {}
>   #ifdef CONFIG_HIBERNATION
>   static inline bool kernel_page_present(struct page *page) { return true; }
>   #endif	/* CONFIG_HIBERNATION */
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index b44d4c7ba73b..e2b6043a4428 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -614,8 +614,7 @@ void generic_online_page(struct page *page, unsigned int order)
>   	 * so we should map it first. This is better than introducing a special
>   	 * case in page freeing fast path.
>   	 */
> -	if (debug_pagealloc_enabled_static())
> -		kernel_map_pages(page, 1 << order, 1);
> +	debug_pagealloc_map_pages(page, 1 << order, 1);
>   	__free_pages_core(page, order);
>   	totalram_pages_add(1UL << order);
>   #ifdef CONFIG_HIGHMEM
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 23f5066bd4a5..9a66a1ff9193 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1272,8 +1272,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
>   	 */
>   	arch_free_page(page, order);
>   
> -	if (debug_pagealloc_enabled_static())
> -		kernel_map_pages(page, 1 << order, 0);
> +	debug_pagealloc_map_pages(page, 1 << order, 0);
>   
>   	kasan_free_nondeferred_pages(page, order);
>   
> @@ -2270,8 +2269,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>   	set_page_refcounted(page);
>   
>   	arch_alloc_page(page, order);
> -	if (debug_pagealloc_enabled_static())
> -		kernel_map_pages(page, 1 << order, 1);
> +	debug_pagealloc_map_pages(page, 1 << order, 1);
>   	kasan_alloc_pages(page, order);
>   	kernel_poison_pages(page, 1 << order, 1);
>   	set_page_owner(page, order, gfp_flags);
> diff --git a/mm/slab.c b/mm/slab.c
> index b1113561b98b..340db0ce74c4 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1431,10 +1431,8 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
>   #ifdef CONFIG_DEBUG_PAGEALLOC
>   static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
>   {
> -	if (!is_debug_pagealloc_cache(cachep))
> -		return;
> -
> -	kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
> +	debug_pagealloc_map_pages(virt_to_page(objp),
> +				  cachep->size / PAGE_SIZE, map);
>   }
>   
>   #else
> @@ -2062,7 +2060,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
>   
>   #if DEBUG
>   	/*
> -	 * If we're going to use the generic kernel_map_pages()
> +	 * If we're going to use the generic debug_pagealloc_map_pages()
>   	 * poisoning, then it's going to smash the contents of
>   	 * the redzone and userword anyhow, so switch them off.
>   	 */
>
Vlastimil Babka Nov. 4, 2020, 6:02 p.m. UTC | #2
On 11/3/20 5:20 PM, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>

Subject should have "on DEBUG_PAGEALLOC" ?

> The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
> fail. With this assumption it wouldn't be safe to allow general usage of
> this function.
> 
> Moreover, some architectures that implement __kernel_map_pages() have this
> function guarded by #ifdef DEBUG_PAGEALLOC and some refuse to map/unmap
> pages when page allocation debugging is disabled at runtime.
> 
> As all the users of __kernel_map_pages() were converted to use
> debug_pagealloc_map_pages() it is safe to make it available only when
> DEBUG_PAGEALLOC is set.
> 
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>   arch/Kconfig                     |  3 +++
>   arch/arm64/Kconfig               |  4 +---
>   arch/arm64/mm/pageattr.c         |  8 ++++++--
>   arch/powerpc/Kconfig             |  5 +----
>   arch/riscv/Kconfig               |  4 +---
>   arch/riscv/include/asm/pgtable.h |  2 --
>   arch/riscv/mm/pageattr.c         |  2 ++
>   arch/s390/Kconfig                |  4 +---
>   arch/sparc/Kconfig               |  4 +---
>   arch/x86/Kconfig                 |  4 +---
>   arch/x86/mm/pat/set_memory.c     |  2 ++
>   include/linux/mm.h               | 10 +++++++---
>   12 files changed, 26 insertions(+), 26 deletions(-)
> 
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 56b6ccc0e32d..56d4752b6db6 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -1028,6 +1028,9 @@ config HAVE_STATIC_CALL_INLINE
>   	bool
>   	depends on HAVE_STATIC_CALL
>   
> +config ARCH_SUPPORTS_DEBUG_PAGEALLOC
> +	bool
> +
>   source "kernel/gcov/Kconfig"
>   
>   source "scripts/gcc-plugins/Kconfig"
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 1d466addb078..a932810cfd90 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -71,6 +71,7 @@ config ARM64
>   	select ARCH_USE_QUEUED_RWLOCKS
>   	select ARCH_USE_QUEUED_SPINLOCKS
>   	select ARCH_USE_SYM_ANNOTATIONS
> +	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
>   	select ARCH_SUPPORTS_MEMORY_FAILURE
>   	select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
>   	select ARCH_SUPPORTS_ATOMIC_RMW
> @@ -1025,9 +1026,6 @@ config HOLES_IN_ZONE
>   
>   source "kernel/Kconfig.hz"
>   
> -config ARCH_SUPPORTS_DEBUG_PAGEALLOC
> -	def_bool y
> -
>   config ARCH_SPARSEMEM_ENABLE
>   	def_bool y
>   	select SPARSEMEM_VMEMMAP_ENABLE
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 1b94f5b82654..439325532be1 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -155,7 +155,7 @@ int set_direct_map_invalid_noflush(struct page *page)
>   		.clear_mask = __pgprot(PTE_VALID),
>   	};
>   
> -	if (!rodata_full)
> +	if (!debug_pagealloc_enabled() && !rodata_full)
>   		return 0;
>   
>   	return apply_to_page_range(&init_mm,
> @@ -170,7 +170,7 @@ int set_direct_map_default_noflush(struct page *page)
>   		.clear_mask = __pgprot(PTE_RDONLY),
>   	};
>   
> -	if (!rodata_full)
> +	if (!debug_pagealloc_enabled() && !rodata_full)
>   		return 0;
>   
>   	return apply_to_page_range(&init_mm,

I don't understand these two hunks. Previous patch calls this for hibernation 
when CONFIG_ARCH_HAS_SET_DIRECT_MAP, which is true for arm64. Why suddenly this 
starts to depend on debug_pagealloc_enabled()?
Mike Rapoport Nov. 5, 2020, 11:31 a.m. UTC | #3
On Wed, Nov 04, 2020 at 06:35:50PM +0100, Vlastimil Babka wrote:
> On 11/3/20 5:20 PM, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> > 
> > [ patch 1 changelog snipped; quoted in full in #1 above ]
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> 
> But, the "enable" param is hideous. I would rather have map and unmap
> variants (and just did the same split for page poisoning) and this seems to
> be a good opportunity. If David didn't propose it already, I'm surprised ;)

I'm ok with map and unmap, and no, David didn't propose it already :)

-- 
Sincerely yours,
Mike.
Mike Rapoport Nov. 5, 2020, 11:42 a.m. UTC | #4
On Wed, Nov 04, 2020 at 07:02:20PM +0100, Vlastimil Babka wrote:
> On 11/3/20 5:20 PM, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Subject should have "on DEBUG_PAGEALLOC" ?
> 
> > [ changelog, tags, diffstat, and Kconfig hunks snipped; quoted in full in #2 above ]
> > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> > index 1b94f5b82654..439325532be1 100644
> > --- a/arch/arm64/mm/pageattr.c
> > +++ b/arch/arm64/mm/pageattr.c
> > @@ -155,7 +155,7 @@ int set_direct_map_invalid_noflush(struct page *page)
> >   		.clear_mask = __pgprot(PTE_VALID),
> >   	};
> > -	if (!rodata_full)
> > +	if (!debug_pagealloc_enabled() && !rodata_full)
> >   		return 0;
> >   	return apply_to_page_range(&init_mm,
> > @@ -170,7 +170,7 @@ int set_direct_map_default_noflush(struct page *page)
> >   		.clear_mask = __pgprot(PTE_RDONLY),
> >   	};
> > -	if (!rodata_full)
> > +	if (!debug_pagealloc_enabled() && !rodata_full)
> >   		return 0;
> >   	return apply_to_page_range(&init_mm,
> 
> I don't understand these two hunks. Previous patch calls this for
> hibernation when CONFIG_ARCH_HAS_SET_DIRECT_MAP, which is true for arm64.
> Why suddenly this starts to depend on debug_pagealloc_enabled()?

I was confused about this for quite a long time :)

On arm64 the changes to direct^w linear map are allowed when 

	debug_pagealloc_enabled() || rodata_full

In hibernation we essentially have now

	if (1)
		set_direct_map(something)
	else
		debug_pagealloc_map_pages(something)

With debug_pagealloc enabled but rodata_full disabled, the arm64
versions of set_direct_map_*() would become a nop, so a page that was
unmapped by debug_pagealloc will not be mapped back.

I'm still puzzled how hibernation might ever need to save a free page,
but that's another story.
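
For reference, the hibernation copy path this feeds into is
kernel/power/snapshot.c::safe_copy_page(), which after patch 2 is shaped
roughly like the sketch below (kernel_page_present() is part of this series;
the hibernate_map_page() helper with an enable flag follows the assumption
sketched in the cover letter above):

	static void safe_copy_page(void *dst, struct page *s_page)
	{
		if (kernel_page_present(s_page)) {
			do_copy_page(dst, page_address(s_page));
		} else {
			/*
			 * The page may be unmapped in the direct map; map it,
			 * copy the contents, then restore the previous state.
			 */
			hibernate_map_page(s_page, 1);
			do_copy_page(dst, page_address(s_page));
			hibernate_map_page(s_page, 0);
		}
	}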