[v3,0/4] arch, mm: improve robustness of direct map manipulation

Message ID 20201101170815.9795-1-rppt@kernel.org

Message

Mike Rapoport Nov. 1, 2020, 5:08 p.m. UTC
From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

During recent discussion about KVM protected memory, David raised a concern
about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].

Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible for __kernel_map_pages() to fail, but since this function returns
void, the failure goes unnoticed.

Moreover, the semantics of __kernel_map_pages() are inconsistent across
architectures: some guard this function with #ifdef DEBUG_PAGEALLOC, some
refuse to update the direct map if page allocation debugging is disabled
at run time, and some allow modifying the direct map regardless of
DEBUG_PAGEALLOC settings.
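
For example, an architecture from the second group ties the update to the
runtime setting roughly like this (a simplified sketch, not the exact code
of any particular architecture):

	void __kernel_map_pages(struct page *page, int numpages, int enable)
	{
		/* refuse to touch the direct map unless debugging is on */
		if (!debug_pagealloc_enabled())
			return;

		/* ... adjust the direct map entries covering @page ... */
	}

while an architecture from the third group performs the update
unconditionally.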

This set straightens this out by restoring the dependency of
__kernel_map_pages() on DEBUG_PAGEALLOC and updating the call sites
accordingly.

Since currently the only user of __kernel_map_pages() outside
DEBUG_PAGEALLOC is hibernation, the hibernation code is updated to make
direct map accesses there more explicit.
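
To make this concrete, the hibernation side ends up with helpers along
these lines (a sketch of the intent; the patch itself is authoritative):

	static inline void hibernate_map_page(struct page *page)
	{
		if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
			/*
			 * Remapping only changes the protection bits of an
			 * existing PTE here, so it is not expected to fail;
			 * warn if that ever changes.
			 */
			if (set_direct_map_default_noflush(page))
				pr_warn_once("Failed to remap page\n");
		} else {
			debug_pagealloc_map_pages(page, 1);
		}
	}

with a matching hibernate_unmap_page() built around
set_direct_map_invalid_noflush().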

[1] https://lore.kernel.org/lkml/2759b4bf-e1e3-d006-7d86-78a40348269d@redhat.com

v3 changes:
* update arm64 changes to avoid regression, per Rick's comments
* fix bisectability

v2 changes:
* Rephrase patch 2 changelog to better describe the change intentions and
implications
* Move removal of kernel_map_pages() from patch 1 to patch 2, per David
https://lore.kernel.org/lkml/20201029161902.19272-1-rppt@kernel.org

v1:
https://lore.kernel.org/lkml/20201025101555.3057-1-rppt@kernel.org

Mike Rapoport (4):
  mm: introduce debug_pagealloc_map_pages() helper
  PM: hibernate: make direct map manipulations more explicit
  arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC
  arch, mm: make kernel_page_present() always available

 arch/Kconfig                        |  3 +++
 arch/arm64/Kconfig                  |  4 +---
 arch/arm64/include/asm/cacheflush.h |  1 +
 arch/arm64/mm/pageattr.c            |  6 +++--
 arch/powerpc/Kconfig                |  5 +----
 arch/riscv/Kconfig                  |  4 +---
 arch/riscv/include/asm/pgtable.h    |  2 --
 arch/riscv/include/asm/set_memory.h |  1 +
 arch/riscv/mm/pageattr.c            | 31 +++++++++++++++++++++++++
 arch/s390/Kconfig                   |  4 +---
 arch/sparc/Kconfig                  |  4 +---
 arch/x86/Kconfig                    |  4 +---
 arch/x86/include/asm/set_memory.h   |  1 +
 arch/x86/mm/pat/set_memory.c        |  4 ++--
 include/linux/mm.h                  | 35 +++++++++++++----------------
 include/linux/set_memory.h          |  5 +++++
 kernel/power/snapshot.c             | 30 +++++++++++++++++++++++--
 mm/memory_hotplug.c                 |  3 +--
 mm/page_alloc.c                     |  6 ++---
 mm/slab.c                           |  8 +++----
 20 files changed, 103 insertions(+), 58 deletions(-)
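
For reference, the helper added in the first patch is essentially a thin
wrapper (sketch, matching the intent rather than the exact diff):

	static inline void debug_pagealloc_map_pages(struct page *page,
						     int numpages)
	{
		if (debug_pagealloc_enabled_static())
			__kernel_map_pages(page, numpages, 1);
	}

	static inline void debug_pagealloc_unmap_pages(struct page *page,
						       int numpages)
	{
		if (debug_pagealloc_enabled_static())
			__kernel_map_pages(page, numpages, 0);
	}

so that callers in mm/ no longer open-code the DEBUG_PAGEALLOC checks.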

Comments

David Hildenbrand Nov. 2, 2020, 9:23 a.m. UTC | #1
>   int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
>   				   unsigned numpages, unsigned long page_flags)
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 14e397f3752c..ab0ef6bd351d 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2924,7 +2924,11 @@ static inline bool debug_pagealloc_enabled_static(void)
>   	return static_branch_unlikely(&_debug_pagealloc_enabled);
>   }
>   
> -#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
> +#ifdef CONFIG_DEBUG_PAGEALLOC
> +/*
> + * To support DEBUG_PAGEALLOC architecture must ensure that
> + * __kernel_map_pages() never fails

Maybe add here that this implies mapping everything via PTEs during boot.

Acked-by: David Hildenbrand <david@redhat.com>


-- 
Thanks,

David / dhildenb
Mike Rapoport Nov. 2, 2020, 3:15 p.m. UTC | #2
On Mon, Nov 02, 2020 at 10:23:20AM +0100, David Hildenbrand wrote:
> 
> >   int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
> >   				   unsigned numpages, unsigned long page_flags)
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 14e397f3752c..ab0ef6bd351d 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2924,7 +2924,11 @@ static inline bool debug_pagealloc_enabled_static(void)
> >   	return static_branch_unlikely(&_debug_pagealloc_enabled);
> >   }
> > 
> > -#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
> > +#ifdef CONFIG_DEBUG_PAGEALLOC
> > +/*
> > + * To support DEBUG_PAGEALLOC architecture must ensure that
> > + * __kernel_map_pages() never fails
> 
> Maybe add here that this implies mapping everything via PTEs during boot.

This is more of an implementation detail, while the assumption that
__kernel_map_pages() does not fail is somewhat a requirement :)

> Acked-by: David Hildenbrand <david@redhat.com>

Thanks!

> -- 
> Thanks,
> 
> David / dhildenb

-- 
Sincerely yours,
Mike.
Kirill A. Shutemov Nov. 3, 2020, 11:15 a.m. UTC | #3
On Sun, Nov 01, 2020 at 07:08:11PM +0200, Mike Rapoport wrote:
> Mike Rapoport (4):
>   mm: introduce debug_pagealloc_map_pages() helper
>   PM: hibernate: make direct map manipulations more explicit
>   arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC
>   arch, mm: make kernel_page_present() always available

The series looks good to me (apart from the minor nit):

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>


-- 
 Kirill A. Shutemov