
[0/4] arm64: mm: support dynamic vmalloc/pmd configuration

Message ID 20240220203256.31153-1-mbland@motorola.com

Message

Maxwell Bland Feb. 20, 2024, 8:32 p.m. UTC
Reworks arm64's virtual memory allocation infrastructure to support
dynamic enforcement of page middle directory (PMD) PXNTable restrictions
rather than enforcing them only during the initial memory mapping.
Runtime enforcement of this bit prevents write-then-execute attacks,
where malicious code is staged in vmalloc'd data regions and the page
table is later changed to make that code executable.

Previously the entire region from VMALLOC_START to VMALLOC_END was
vulnerable; now the vulnerable region is restricted to the 2GB reserved
by module_alloc, a region which is generally read-only and more
difficult to inject staging code into (e.g., data must pass the BPF
verifier). These changes also set the stage for other systems, such as
KVM-level (EL2) changes to mark page tables immutable and code page
verification changes, forging a path toward complete mitigation of
kernel exploits on arm64.

Implementing this required minimal changes to the generic vmalloc
interface: allowing architectures to override some vmalloc wrapper
functions, refactoring vmalloc calls in the generic kernel to use a
standard interface, and passing the address parameter already available
at PTE allocation sites down into __pte_alloc_kernel.

The new arm64 vmalloc wrapper functions ensure vmalloc data is not
allocated into the region reserved for module_alloc. The arm64 BPF and
kprobes code also sees a two-line change ensuring its allocations abide
by this segmentation of code from data. Finally, arm64's pmd_populate
function is modified to set the PXNTable bit appropriately.

Signed-off-by: Maxwell Bland <mbland@motorola.com>

---

After Mark Rutland's feedback last week on my more minimal patch, see:

<CAP5Mv+ydhk=Ob4b40ZahGMgT-5+-VEHxtmA=-LkJiEOOU+K6hw@mail.gmail.com>

I adopted a more sweeping, and more correct, overhaul of arm64's virtual
memory allocation infrastructure to support these changes. This patch
guarantees our ability to build future systems with a strong and
accessible distinction between code and data at the page allocation
layer, bolstering the guarantees of complementary mitigations such as
W^X and kCFI.

The current patch only minimally reduces available vmalloc space,
removing the 2GB that should be reserved for code allocations
regardless. I feel it also benefits the kernel by making several memory
allocation interfaces more uniform, and by providing hooks for non-ARM
architectures to follow suit.
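
As a concrete illustration, the wrapper override amounts to something
like the following. This is a minimal sketch only, not the code in this
series: it assumes the 2GB code window starts at arm64's
module_alloc_base and that data can simply be placed above it, glossing
over the KASLR placement handling the real patch must do:

/* Sketch: steer generic vmalloc data above the reserved code window. */
void *vmalloc_node(unsigned long size, int node)
{
        return __vmalloc_node_range(size, 1,
                                    module_alloc_base + SZ_2G, VMALLOC_END,
                                    GFP_KERNEL, PAGE_KERNEL, 0, node,
                                    __builtin_return_address(0));
}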

I have done some minimal runtime testing using Torvalds' test-tlb script
on a QEMU VM, though more extensive benchmarking may be needed:

Size: Before Patch -> After Patch
4k: 4.09ns  4.15ns  4.41ns  4.43ns -> 3.68ns  3.73ns  3.67ns  3.73ns 
8k: 4.22ns  4.19ns  4.30ns  4.15ns -> 3.99ns  3.89ns  4.12ns  4.04ns 
16k: 3.97ns  4.31ns  4.30ns  4.28ns -> 4.03ns  3.98ns  4.06ns  4.06ns 
32k: 3.82ns  4.51ns  4.25ns  4.31ns -> 3.99ns  4.09ns  4.07ns  5.17ns 
64k: 4.50ns  5.59ns  6.13ns  6.14ns -> 4.23ns  4.26ns  5.91ns  5.93ns 
128k: 5.06ns  4.47ns  6.75ns  6.69ns -> 4.47ns  4.71ns  6.54ns  6.44ns 
256k: 4.83ns  4.43ns  6.62ns  6.21ns -> 4.39ns  4.62ns  6.71ns  6.65ns 
512k: 4.45ns  4.75ns  6.19ns  6.65ns -> 4.86ns  5.26ns  7.77ns  6.68ns 
1M: 4.72ns  4.73ns  6.74ns  6.47ns -> 4.29ns  4.45ns  6.87ns  6.59ns 
2M: 4.66ns  4.86ns  14.49ns  15.00ns -> 4.53ns  4.57ns  15.91ns  15.90ns 
4M: 4.85ns  4.95ns  15.90ns  15.98ns -> 4.48ns  4.74ns  17.27ns  17.36ns 
6M: 4.94ns  5.03ns  17.19ns  17.31ns -> 4.70ns  4.93ns  18.02ns  18.23ns 
8M: 5.05ns  5.18ns  17.49ns  17.64ns -> 4.96ns  5.07ns  18.84ns  18.72ns 
16M: 5.55ns  5.79ns  20.99ns  23.70ns -> 5.46ns  5.72ns  22.76ns  26.51ns
32M: 8.54ns  9.06ns  124.61ns 125.07ns -> 8.43ns  8.59ns  116.83ns 138.83ns
64M: 8.42ns  8.63ns  196.17ns 204.52ns -> 8.26ns  8.43ns  193.49ns 203.85ns
128M: 8.31ns  8.58ns  230.46ns 242.63ns -> 8.22ns  8.39ns  227.99ns 240.29ns
256M: 8.80ns  8.80ns  248.24ns 261.68ns -> 8.35ns  8.55ns  250.18ns 262.20ns

Note I also chose to enforce PXNTable at the PMD level only (for now),
since the 194 descriptors affected by this change on my test setup are
not sufficient to warrant enforcement at a coarser granularity.
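
Concretely, the PMD-level enforcement amounts to something like this
sketch (not the series' exact code; it assumes everything outside the
module region is data, and uses the PMD_TABLE_PXN table-descriptor bit
arm64 already defines):

/* Sketch: PTE tables outside the code region become Privileged
 * eXecute Never at the PMD (table descriptor) level. */
static inline void pmd_populate_kernel_at(struct mm_struct *mm, pmd_t *pmdp,
                                          pte_t *ptep, unsigned long addr)
{
        pmdval_t prot = PMD_TYPE_TABLE | PMD_TABLE_UXN;

        if (addr < MODULES_VADDR || addr >= MODULES_END)
                prot |= PMD_TABLE_PXN;

        __pmd_populate(pmdp, __pa(ptep), prot);
}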

The architecture-independent changes (which I term "generic") amount
only to refactoring, but I feel they are also major improvements in that
they standardize most uses of the vmalloc interface across the kernel.

Note this patch reduces the arm64 regions allocated for BPF and kprobes,
but only to match the existing allocation choices made by the generic
kernel. I will admit I do not understand why the BPF JIT allocation code
was duplicated into arm64, but I suspect this was either an artifact or
that such overrides of generic allocation should require a specific
Kconfig option, as they trade off security against space. That said, I
have chosen not to gate this patch behind a Kconfig option, as I feel
the changes provide significant benefit to the arm64 kernel's baseline
security, though one could certainly be added if the maintainers see the
need.

Maxwell Bland (4):
  mm/vmalloc: allow arch-specific vmalloc_node overrides
  mm: pgalloc: support address-conditional pmd allocation
  arm64: separate code and data virtual memory allocation
  arm64: dynamic enforcement of pmd-level PXNTable

 arch/arm/kernel/irq.c               |  2 +-
 arch/arm64/include/asm/pgalloc.h    | 11 +++++-
 arch/arm64/include/asm/vmalloc.h    |  8 ++++
 arch/arm64/include/asm/vmap_stack.h |  2 +-
 arch/arm64/kernel/efi.c             |  2 +-
 arch/arm64/kernel/module.c          |  7 ++++
 arch/arm64/kernel/probes/kprobes.c  |  2 +-
 arch/arm64/mm/Makefile              |  3 +-
 arch/arm64/mm/trans_pgd.c           |  2 +-
 arch/arm64/mm/vmalloc.c             | 57 +++++++++++++++++++++++++++++
 arch/arm64/net/bpf_jit_comp.c       |  5 ++-
 arch/powerpc/kernel/irq.c           |  2 +-
 arch/riscv/include/asm/irq_stack.h  |  2 +-
 arch/s390/hypfs/hypfs_diag.c        |  2 +-
 arch/s390/kernel/setup.c            |  6 +--
 arch/s390/kernel/sthyi.c            |  2 +-
 include/asm-generic/pgalloc.h       | 18 +++++++++
 include/linux/mm.h                  |  4 +-
 include/linux/vmalloc.h             | 15 +++++++-
 kernel/bpf/syscall.c                |  4 +-
 kernel/fork.c                       |  4 +-
 kernel/scs.c                        |  3 +-
 lib/objpool.c                       |  2 +-
 lib/test_vmalloc.c                  |  6 +--
 mm/hugetlb_vmemmap.c                |  4 +-
 mm/kasan/init.c                     | 22 ++++++-----
 mm/memory.c                         |  4 +-
 mm/percpu.c                         |  2 +-
 mm/pgalloc-track.h                  |  3 +-
 mm/sparse-vmemmap.c                 |  2 +-
 mm/util.c                           |  3 +-
 mm/vmalloc.c                        | 39 +++++++-------------
 32 files changed, 176 insertions(+), 74 deletions(-)
 create mode 100644 arch/arm64/mm/vmalloc.c


base-commit: b401b621758e46812da61fa58a67c3fd8d91de0d

Comments

Christoph Hellwig Feb. 21, 2024, 5:43 a.m. UTC | #1
On Tue, Feb 20, 2024 at 02:32:53PM -0600, Maxwell Bland wrote:
> Present non-uniform use of __vmalloc_node and __vmalloc_node_range makes
> enforcing appropriate code and data separation untenable on certain
> microarchitectures, as VMALLOC_START and VMALLOC_END are monolithic
> while the use of the vmalloc interface is non-monolithic: in particular,
> appropriate randomness in ASLR makes it such that code regions must fall
> in some region between VMALLOC_START and VMALLOC_END, but this
> necessitates that code pages are intermingled with data pages, meaning
> code-specific protections, such as arm64's PXNTable, cannot be
> performantly runtime enforced.

That's not actually true.  We have MODULE_START/END to separate them,
which is used by mips only for now.

> 
> The solution to this problem allows architectures to override the
> vmalloc wrapper functions by enforcing that the rest of the kernel does
> not reimplement __vmalloc_node by using __vmalloc_node_range with the
> same parameters as __vmalloc_node or provides a __weak tag to those
> functions using __vmalloc_node_range with parameters repeating those of
> __vmalloc_node.

I'm really not too happy about overriding the functions.  Especially
as the separation is a generally good idea and it would be good to
move everyone (or at least all modern architectures) over to a scheme
like this.
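
For reference, the override mechanism under discussion boils down to
making the generic wrapper a weak symbol, roughly as follows (a sketch;
the exact signatures in the series may differ):

/* mm/vmalloc.c: generic definition, replaceable by an architecture. */
void * __weak vmalloc_node(unsigned long size, int node)
{
        return __vmalloc_node(size, 1, GFP_KERNEL, node,
                              __builtin_return_address(0));
}

An architecture then supplies a strong definition of the same symbol (as
this series does in arch/arm64/mm/vmalloc.c), and the linker picks it
over the weak one.
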
Christophe Leroy Feb. 21, 2024, 7:13 a.m. UTC | #2
On 20/02/2024 at 21:32, Maxwell Bland wrote:
> 
> While other descriptors (e.g. pud) allow allocations conditional on
> which virtual address is allocated, pmd descriptor allocations do not.
> However, adding support for this is straightforward and is beneficial to
> future kernel development targeting the PMD memory granularity.
> 
> As many architectures already implement pmd_populate_kernel in an
> address-generic manner, it is necessary to roll out support
> incrementally. For this purpose a preprocessor flag,

Is it really worth it? It is only 48 call sites that need to be
updated. Updating them all would avoid that preprocessor flag and avoid
introducing pmd_populate_kernel_at() in the core kernel.

$ git grep -l pmd_populate_kernel -- arch/ | wc -l
48


> __HAVE_ARCH_ADDR_COND_PMD is introduced to capture whether the
> architecture supports some feature requiring PMD allocation conditional
> on virtual address. Some microarchitectures (e.g. arm64) support
> configurations for table descriptors, for example to enforce Privilege
> eXecute Never, which benefit from knowing the virtual memory addresses
> referenced by PMDs.
> 
> Thus two major arguments in favor of this change are (1) uniformity of
> allocation between PMD and other table descriptor types and (2) the
> capability of address-specific PMD allocation.

Can you give more details on that uniformity? I can't find any function
called pud_populate_kernel().

Previously, pmd_populate_kernel() had the same arguments as
pmd_populate(). Why not also update pmd_populate() to keep consistency?
(This can be done in a follow-up patch, not in this one.)

> 
> Signed-off-by: Maxwell Bland <mbland@motorola.com>
> ---
>   include/asm-generic/pgalloc.h | 18 ++++++++++++++++++
>   include/linux/mm.h            |  4 ++--
>   mm/hugetlb_vmemmap.c          |  4 ++--
>   mm/kasan/init.c               | 22 +++++++++++++---------
>   mm/memory.c                   |  4 ++--
>   mm/percpu.c                   |  2 +-
>   mm/pgalloc-track.h            |  3 ++-
>   mm/sparse-vmemmap.c           |  2 +-
>   8 files changed, 41 insertions(+), 18 deletions(-)
> 
> diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
> index 879e5f8aa5e9..e5cdce77c6e4 100644
> --- a/include/asm-generic/pgalloc.h
> +++ b/include/asm-generic/pgalloc.h
> @@ -142,6 +142,24 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
>   }
>   #endif
> 
> +#ifdef __HAVE_ARCH_ADDR_COND_PMD
> +static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
> +                       pte_t *ptep, unsigned long address);
> +#else
> +static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
> +                       pte_t *ptep);
> +#endif
> +
> +static inline void pmd_populate_kernel_at(struct mm_struct *mm, pmd_t *pmdp,
> +                       pte_t *ptep, unsigned long address)
> +{
> +#ifdef __HAVE_ARCH_ADDR_COND_PMD
> +       pmd_populate_kernel(mm, pmdp, ptep, address);
> +#else
> +       pmd_populate_kernel(mm, pmdp, ptep);
> +#endif
> +}
> +
>   #ifndef __HAVE_ARCH_PMD_FREE
>   static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
>   {
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f5a97dec5169..6a9d5ded428d 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2782,7 +2782,7 @@ static inline void mm_dec_nr_ptes(struct mm_struct *mm) {}
>   #endif
> 
>   int __pte_alloc(struct mm_struct *mm, pmd_t *pmd);
> -int __pte_alloc_kernel(pmd_t *pmd);
> +int __pte_alloc_kernel(pmd_t *pmd, unsigned long address);
> 
>   #if defined(CONFIG_MMU)
> 
> @@ -2977,7 +2977,7 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
>                   NULL : pte_offset_map_lock(mm, pmd, address, ptlp))
> 
>   #define pte_alloc_kernel(pmd, address)                 \
> -       ((unlikely(pmd_none(*(pmd))) && __pte_alloc_kernel(pmd))? \
> +       ((unlikely(pmd_none(*(pmd))) && __pte_alloc_kernel(pmd, address)) ? \
>                  NULL: pte_offset_kernel(pmd, address))
> 
>   #if USE_SPLIT_PMD_PTLOCKS
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index da177e49d956..1f5664b656f1 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -58,7 +58,7 @@ static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
>          if (!pgtable)
>                  return -ENOMEM;
> 
> -       pmd_populate_kernel(&init_mm, &__pmd, pgtable);
> +       pmd_populate_kernel_at(&init_mm, &__pmd, pgtable, addr);
> 
>          for (i = 0; i < PTRS_PER_PTE; i++, addr += PAGE_SIZE) {
>                  pte_t entry, *pte;
> @@ -81,7 +81,7 @@ static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
> 
>                  /* Make pte visible before pmd. See comment in pmd_install(). */
>                  smp_wmb();
> -               pmd_populate_kernel(&init_mm, pmd, pgtable);
> +               pmd_populate_kernel_at(&init_mm, pmd, pgtable, addr);
>                  if (!(walk->flags & VMEMMAP_SPLIT_NO_TLB_FLUSH))
>                          flush_tlb_kernel_range(start, start + PMD_SIZE);
>          } else {
> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
> index 89895f38f722..1e31d965a14e 100644
> --- a/mm/kasan/init.c
> +++ b/mm/kasan/init.c
> @@ -116,8 +116,9 @@ static int __ref zero_pmd_populate(pud_t *pud, unsigned long addr,
>                  next = pmd_addr_end(addr, end);
> 
>                  if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE) {
> -                       pmd_populate_kernel(&init_mm, pmd,
> -                                       lm_alias(kasan_early_shadow_pte));
> +                       pmd_populate_kernel_at(&init_mm, pmd,
> +                                       lm_alias(kasan_early_shadow_pte),
> +                                       addr);
>                          continue;
>                  }
> 
> @@ -131,7 +132,7 @@ static int __ref zero_pmd_populate(pud_t *pud, unsigned long addr,
>                          if (!p)
>                                  return -ENOMEM;
> 
> -                       pmd_populate_kernel(&init_mm, pmd, p);
> +                       pmd_populate_kernel_at(&init_mm, pmd, p, addr);
>                  }
>                  zero_pte_populate(pmd, addr, next);
>          } while (pmd++, addr = next, addr != end);
> @@ -157,8 +158,9 @@ static int __ref zero_pud_populate(p4d_t *p4d, unsigned long addr,
>                          pud_populate(&init_mm, pud,
>                                          lm_alias(kasan_early_shadow_pmd));
>                          pmd = pmd_offset(pud, addr);
> -                       pmd_populate_kernel(&init_mm, pmd,
> -                                       lm_alias(kasan_early_shadow_pte));
> +                       pmd_populate_kernel_at(&init_mm, pmd,
> +                                       lm_alias(kasan_early_shadow_pte),
> +                                       addr);
>                          continue;
>                  }
> 
> @@ -203,8 +205,9 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
>                          pud_populate(&init_mm, pud,
>                                          lm_alias(kasan_early_shadow_pmd));
>                          pmd = pmd_offset(pud, addr);
> -                       pmd_populate_kernel(&init_mm, pmd,
> -                                       lm_alias(kasan_early_shadow_pte));
> +                       pmd_populate_kernel_at(&init_mm, pmd,
> +                                       lm_alias(kasan_early_shadow_pte),
> +                                       addr);
>                          continue;
>                  }
> 
> @@ -266,8 +269,9 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
>                          pud_populate(&init_mm, pud,
>                                          lm_alias(kasan_early_shadow_pmd));
>                          pmd = pmd_offset(pud, addr);
> -                       pmd_populate_kernel(&init_mm, pmd,
> -                                       lm_alias(kasan_early_shadow_pte));
> +                       pmd_populate_kernel_at(&init_mm, pmd,
> +                                       lm_alias(kasan_early_shadow_pte),
> +                                       addr);
>                          continue;
>                  }
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 15f8b10ea17c..15702822d904 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -447,7 +447,7 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
>          return 0;
>   }
> 
> -int __pte_alloc_kernel(pmd_t *pmd)
> +int __pte_alloc_kernel(pmd_t *pmd, unsigned long address)
>   {
>          pte_t *new = pte_alloc_one_kernel(&init_mm);
>          if (!new)
> @@ -456,7 +456,7 @@ int __pte_alloc_kernel(pmd_t *pmd)
>          spin_lock(&init_mm.page_table_lock);
>          if (likely(pmd_none(*pmd))) {   /* Has another populated it ? */
>                  smp_wmb(); /* See comment in pmd_install() */
> -               pmd_populate_kernel(&init_mm, pmd, new);
> +               pmd_populate_kernel_at(&init_mm, pmd, new, address);
>                  new = NULL;
>          }
>          spin_unlock(&init_mm.page_table_lock);
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 4e11fc1e6def..7312e584c1b5 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -3238,7 +3238,7 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
>                  new = memblock_alloc(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
>                  if (!new)
>                          goto err_alloc;
> -               pmd_populate_kernel(&init_mm, pmd, new);
> +               pmd_populate_kernel_at(&init_mm, pmd, new, addr);
>          }
> 
>          return;
> diff --git a/mm/pgalloc-track.h b/mm/pgalloc-track.h
> index e9e879de8649..0984681c03d4 100644
> --- a/mm/pgalloc-track.h
> +++ b/mm/pgalloc-track.h
> @@ -45,7 +45,8 @@ static inline pmd_t *pmd_alloc_track(struct mm_struct *mm, pud_t *pud,
> 
>   #define pte_alloc_kernel_track(pmd, address, mask)                     \
>          ((unlikely(pmd_none(*(pmd))) &&                                 \
> -         (__pte_alloc_kernel(pmd) || ({*(mask)|=PGTBL_PMD_MODIFIED;0;})))?\
> +         (__pte_alloc_kernel(pmd, address) ||                          \
> +               ({*(mask) |= PGTBL_PMD_MODIFIED; 0; }))) ?              \
>                  NULL: pte_offset_kernel(pmd, address))
> 
>   #endif /* _LINUX_PGALLOC_TRACK_H */
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index a2cbe44c48e1..d876cc4dc700 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -191,7 +191,7 @@ pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
>                  void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
>                  if (!p)
>                          return NULL;
> -               pmd_populate_kernel(&init_mm, pmd, p);
> +               pmd_populate_kernel_at(&init_mm, pmd, p, addr);
>          }
>          return pmd;
>   }
> --
> 2.39.2
>
Christophe Leroy Feb. 21, 2024, 7:32 a.m. UTC | #3
On 20/02/2024 at 21:32, Maxwell Bland wrote:
> 
> Reworks ARM's virtual memory allocation infrastructure to support
> dynamic enforcement of page middle directory PXNTable restrictions
> rather than only during the initial memory mapping. Runtime enforcement
> of this bit prevents write-then-execute attacks, where malicious code is
> staged in vmalloc'd data regions, and later the page table is changed to
> make this code executable.
> 
> Previously the entire region from VMALLOC_START to VMALLOC_END was
> vulnerable, but now the vulnerable region is restricted to the 2GB
> reserved by module_alloc, a region which is generally read-only and more
> difficult to inject staging code into, e.g., data must pass the BPF
> verifier. These changes also set the stage for other systems, such as
> KVM-level (EL2) changes to mark page tables immutable and code page
> verification changes, forging a path toward complete mitigation of
> kernel exploits on ARM.
> 
> Implementing this required minimal changes to the generic vmalloc
> interface in the kernel to allow architecture overrides of some vmalloc
> wrapper functions, refactoring vmalloc calls to use a standard interface
> in the generic kernel, and passing the address parameter already passed
> into PTE allocation to the pte_allocate child function call.
> 
> The new arm64 vmalloc wrapper functions ensure vmalloc data is not
> allocated into the region reserved for module_alloc. The arm64 BPF and
> kprobes code also sees a two-line change ensuring its allocations abide
> by the segmentation of code from data. Finally, arm64's pmd_populate
> function is modified to set the PXNTable bit appropriately.

On powerpc (book3s/32) we have more or less the same, although it is not
directly linked to PMDs: the virtual 4G address space is split into
segments of 256M. Each segment has a bit called NX to forbid execution.
Vmalloc space is allocated in a segment with the NX bit set, while
module space is allocated in a segment with the NX bit unset. We never
have to override the vmalloc wrappers: all consumers of exec memory
allocate it using module_alloc(), while vmalloc() provides non-exec
memory.
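
For reference, this split is already the common pattern:
module_alloc() is essentially a range-constrained
__vmalloc_node_range() call, roughly like the sketch below (using the
usual constants; individual architectures differ in alignment and
protection flags):

/* Sketch: executable allocations come only from the module window. */
void *module_alloc(unsigned long size)
{
        return __vmalloc_node_range(size, MODULE_ALIGN,
                                    MODULES_VADDR, MODULES_END,
                                    GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
                                    NUMA_NO_NODE,
                                    __builtin_return_address(0));
}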

For modules, all you have to do is select
ARCH_WANTS_MODULES_DATA_IN_VMALLOC, and module data will be allocated
using vmalloc(), hence non-exec memory in our case.

Christophe
Christophe Leroy Feb. 21, 2024, 7:38 a.m. UTC | #4
On 21/02/2024 at 06:43, Christoph Hellwig wrote:
> On Tue, Feb 20, 2024 at 02:32:53PM -0600, Maxwell Bland wrote:
>> Present non-uniform use of __vmalloc_node and __vmalloc_node_range makes
>> enforcing appropriate code and data separation untenable on certain
>> microarchitectures, as VMALLOC_START and VMALLOC_END are monolithic
>> while the use of the vmalloc interface is non-monolithic: in particular,
>> appropriate randomness in ASLR makes it such that code regions must fall
>> in some region between VMALLOC_START and VMALLOC_END, but this
>> necessitates that code pages are intermingled with data pages, meaning
>> code-specific protections, such as arm64's PXNTable, cannot be
>> performantly runtime enforced.
> 
> That's not actually true.  We have MODULE_START/END to separate them,
> which is used by mips only for now.

We have MODULES_VADDR and MODULES_END that are used by arm, arm64,
loongarch, powerpc, riscv, s390, sparc and x86_64.

is_vmalloc_or_module_addr() is using MODULES_VADDR, so I guess this
function fails on mips?
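
For reference, the generic helper in question reads as follows in
mainline around the time of this series (mm/vmalloc.c, lightly
trimmed); since mips spells its constants MODULE_START/MODULE_END
rather than MODULES_VADDR, the #if below does compile out there:

int is_vmalloc_or_module_addr(const void *x)
{
        /*
         * ARM, x86-64 and sparc64 put modules in a special place,
         * and fall back on vmalloc() if that fails. Others
         * just put it in the vmalloc space.
         */
#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
        unsigned long addr = (unsigned long)kasan_reset_tag(x);

        if (addr >= MODULES_VADDR && addr < MODULES_END)
                return 1;
#endif
        return is_vmalloc_addr(x);
}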

> 
>>
>> The solution to this problem allows architectures to override the
>> vmalloc wrapper functions by enforcing that the rest of the kernel does
>> not reimplement __vmalloc_node by using __vmalloc_node_range with the
>> same parameters as __vmalloc_node or provides a __weak tag to those
>> functions using __vmalloc_node_range with parameters repeating those of
>> __vmalloc_node.
> 
> I'm really not too happy about overriding the functions.  Especially
> as the separation is a generally good idea and it would be good to
> move everyone (or at least all modern architectures) over to a scheme
> like this.
David Hildenbrand Feb. 21, 2024, 9:27 a.m. UTC | #5
On 21.02.24 08:13, Christophe Leroy wrote:
> 
> 
> On 20/02/2024 at 21:32, Maxwell Bland wrote:
>>
>> While other descriptors (e.g. pud) allow allocations conditional on
>> which virtual address is allocated, pmd descriptor allocations do not.
>> However, adding support for this is straightforward and is beneficial to
>> future kernel development targeting the PMD memory granularity.
>>
>> As many architectures already implement pmd_populate_kernel in an
>> address-generic manner, it is necessary to roll out support
>> incrementally. For this purpose a preprocessor flag,
> 
> Is it really worth it? It is only 48 call sites that need to be
> updated. Updating them all would avoid that preprocessor flag and avoid
> introducing pmd_populate_kernel_at() in the core kernel.

+1, let's avoid that if possible.
Conor Dooley Feb. 21, 2024, 2:50 p.m. UTC | #6
Hey Maxwell,

FYI:

>   mm/vmalloc: allow arch-specific vmalloc_node overrides
>   mm: pgalloc: support address-conditional pmd allocation

With these two, arch/riscv/configs/* are broken by calls to undeclared
functions.

>   arm64: separate code and data virtual memory allocation
>   arm64: dynamic enforcement of pmd-level PXNTable

And with these two, the 32-bit and nommu builds are broken.

Cheers,
Conor.
Maxwell Bland Feb. 21, 2024, 3:42 p.m. UTC | #7
> From: Conor Dooley <conor@kernel.org>
> FYI:
> 
> >   mm/vmalloc: allow arch-specific vmalloc_node overrides
> >   mm: pgalloc: support address-conditional pmd allocation
> 
> With these two, arch/riscv/configs/* are broken by calls to undeclared
> functions.

Will fix, thanks! I will also figure out how to make sure this doesn't happen again for other architectures.

> >   arm64: separate code and data virtual memory allocation
> >   arm64: dynamic enforcement of pmd-level PXNTable
> 
> And with these two, the 32-bit and nommu builds are broken.

Was not aware there was a dependency here. I will see what I can do.

Thank you,
Maxwell
Maxwell Bland Feb. 21, 2024, 3:54 p.m. UTC | #8
> On February 21, 2024 3:27 AM David Hildenbrand wrote
> On 21.02.24 08:13, Christophe Leroy wrote:
> > On 20/02/2024 at 21:32, Maxwell Bland wrote:
> >>
> >> While other descriptors (e.g. pud) allow allocations conditional on
> >> which virtual address is allocated, pmd descriptor allocations do not.
> >> However, adding support for this is straightforward and is beneficial to
> >> future kernel development targeting the PMD memory granularity.
> >>
> >> As many architectures already implement pmd_populate_kernel in an
> >> address-generic manner, it is necessary to roll out support
> >> incrementally. For this purpose a preprocessor flag,
> >
> > Is it really worth it ? It is only 48 call sites that need to be
> > updated. It would avoid that processor flag and avoid introducing that
> > pmd_populate_kernel_at() in kernel core.
> 
> +1, let's avoid that if possible.

Will fix, thank you!

Maxwell
Maxwell Bland Feb. 21, 2024, 5:19 p.m. UTC | #9
> On Wednesday, February 21, 2024 12:59 AM, Christophe Leroy wrote:
> 
> In the code you add __weak for that. But you also add the flags to the
> parameters and I can't understand why when reading the above description.

This change was made to allow most kernel interfaces to use vmalloc_node
and to enable the overrides to work. It also reduces the number of
kernel locations which would need to change if the vmalloc_node_range
interface were ever changed.

However, there has been pushback against overriding the vmalloc
interface, so this change will likely not appear in my final patch.

Regards,
Maxwell
Maxwell Bland Feb. 21, 2024, 5:57 p.m. UTC | #10
> On Wednesday, February 21, 2024 at 1:32 AM, Christophe Leroy wrote:
> 
> On powerpc (book3s/32) we have more or less the same, although it is not
> directly linked to PMDs: the virtual 4G address space is split into
> segments of 256M. Each segment has a bit called NX to forbid execution.
> Vmalloc space is allocated in a segment with the NX bit set, while
> module space is allocated in a segment with the NX bit unset. We never
> have to override the vmalloc wrappers: all consumers of exec memory
> allocate it using module_alloc(), while vmalloc() provides non-exec
> memory.
> 
> For modules, all you have to do is select
> ARCH_WANTS_MODULES_DATA_IN_VMALLOC, and module data will be allocated
> using vmalloc(), hence non-exec memory in our case.

This critique has led me to some valuable ideas, and I can definitely find a simpler
approach without overrides.

I do want to mention that changes to how the VMALLOC_* and MODULE_*
constants are used on arm64 may introduce other issues. See the
discussion and code on the patch that motivated this one at:

https://lore.kernel.org/all/CAP5Mv+ydhk=Ob4b40ZahGMgT-5+-VEHxtmA=-LkJiEOOU+K6hw@mail.gmail.com/

In short, maybe the issue of code/data intermixing requires a rework of
arm64's memory infrastructure, but I see a potentially elegant solution
here based on the comments given on this patch.

Thanks,
Maxwell