
[2/5] arm64: efi: always map runtime services code and data regions down to pages

Message ID 1467204690-10790-3-git-send-email-ard.biesheuvel@linaro.org
State Accepted
Commit bd264d046aad25e9922a142a7831e6841a2f0474
Headers show

Commit Message

Ard Biesheuvel June 29, 2016, 12:51 p.m. UTC
To avoid triggering diagnostics in the MMU code that are finicky about
splitting block mappings into more granular mappings, ensure that regions
that are likely to appear in the Memory Attributes table as well as the
UEFI memory map are always mapped down to pages. This way, we can use
apply_to_page_range() instead of create_pgd_mapping() for the second pass,
which cannot split or merge block entries, and operates strictly on PTEs.

Note that this aligns the arm64 Memory Attributes table handling code with
the ARM code, which already uses apply_to_page_range() to set the strict
permissions.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

---
 arch/arm64/include/asm/efi.h |  3 +-
 arch/arm64/kernel/efi.c      | 36 +++++++++++++++++++-
 2 files changed, 36 insertions(+), 3 deletions(-)

-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

Comments

Sudeep Holla July 22, 2016, 2:30 p.m. UTC | #1
Hi Ard,

On 29/06/16 13:51, Ard Biesheuvel wrote:
> To avoid triggering diagnostics in the MMU code that are finicky about
> splitting block mappings into more granular mappings, ensure that regions
> that are likely to appear in the Memory Attributes table as well as the
> UEFI memory map are always mapped down to pages. This way, we can use
> apply_to_page_range() instead of create_pgd_mapping() for the second pass,
> which cannot split or merge block entries, and operates strictly on PTEs.
>
> Note that this aligns the arm64 Memory Attributes table handling code with
> the ARM code, which already uses apply_to_page_range() to set the strict
> permissions.
>

This patch is now merged in arm64/for-next/core, and when I try that
branch with defconfig + CONFIG_PROVE_LOCKING, I get the following splat
on boot, after which the Juno board fails to boot any further.

I was able to bisect the failure to this patch (commit bd264d046aad
("arm64: efi: always map runtime services code and data regions down to
pages") in that branch).

Regards,
Sudeep

-->8

efi: memattr: Processing EFI Memory Attributes table:
efi: memattr:  0x0000f9400000-0x0000f942ffff [Runtime Data       |RUN| |  |XP|  |  |  |   |  |  |  |  ]
Unable to handle kernel NULL pointer dereference at virtual address 00000018
pgd = ffff000009aa4000
[00000018] *pgd=00000009ffffe003, *pud=00000009ffffd003, *pmd=0000000000000000
Internal error: Oops: 96000004 [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc7-next-20160722 #134
Hardware name: ARM Juno development board (r2) (DT)
task: ffff800976ca0000 task.stack: ffff800976c3c000
PC is at __lock_acquire+0x13c/0x19e0
LR is at lock_acquire+0x4c/0x68
pc : [<ffff000008104544>] lr : [<ffff000008106114>] pstate: 200000c5
....
  __lock_acquire+0x13c/0x19e0
  lock_acquire+0x4c/0x68
  _raw_spin_lock+0x40/0x58
  apply_to_page_range+0x18c/0x388
  efi_set_mapping_permissions+0x34/0x44
  efi_memattr_apply_permissions+0x200/0x2a8
  arm_enable_runtime_services+0x1b4/0x1fc
  do_one_initcall+0x38/0x128
  kernel_init_freeable+0x84/0x1f0
  kernel_init+0x10/0x100
  ret_from_fork+0x10/0x40
Code: 5280003c 79004401 140000b5 b000b880 (f9400282)
---[ end trace 892120beb6681b4e ]---



Patch

diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 622db3c6474e..8b13476cdf96 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -14,8 +14,7 @@  extern void efi_init(void);
 #endif
 
 int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
-
-#define efi_set_mapping_permissions	efi_create_mapping
+int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
 
 #define arch_efi_call_virt_setup()					\
 ({									\
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 981604948521..4aef89f37049 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -62,13 +62,47 @@  struct screen_info screen_info __section(.data);
 int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
 {
 	pteval_t prot_val = create_mapping_protection(md);
+	bool allow_block_mappings = (md->type != EFI_RUNTIME_SERVICES_CODE &&
+				     md->type != EFI_RUNTIME_SERVICES_DATA);
 
 	create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
 			   md->num_pages << EFI_PAGE_SHIFT,
-			   __pgprot(prot_val | PTE_NG), true);
+			   __pgprot(prot_val | PTE_NG), allow_block_mappings);
 	return 0;
 }
 
+static int __init set_permissions(pte_t *ptep, pgtable_t token,
+				  unsigned long addr, void *data)
+{
+	efi_memory_desc_t *md = data;
+	pte_t pte = *ptep;
+
+	if (md->attribute & EFI_MEMORY_RO)
+		pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
+	if (md->attribute & EFI_MEMORY_XP)
+		pte = set_pte_bit(pte, __pgprot(PTE_PXN));
+	set_pte(ptep, pte);
+	return 0;
+}
+
+int __init efi_set_mapping_permissions(struct mm_struct *mm,
+				       efi_memory_desc_t *md)
+{
+	BUG_ON(md->type != EFI_RUNTIME_SERVICES_CODE &&
+	       md->type != EFI_RUNTIME_SERVICES_DATA);
+
+	/*
+	 * Calling apply_to_page_range() is only safe on regions that are
+	 * guaranteed to be mapped down to pages. Since we are only called
+	 * for regions that have been mapped using efi_create_mapping() above
+	 * (and this is checked by the generic Memory Attributes table parsing
+	 * routines), there is no need to check that again here.
+	 */
+	return apply_to_page_range(mm, md->virt_addr,
+				   md->num_pages << EFI_PAGE_SHIFT,
+				   set_permissions, md);
+}
+
 static int __init arm64_dmi_init(void)
 {
 	/*