[RFC,v2,06/11] ARM64: mm: Restore memblock limit when map_mem finished.

Message ID 1368006763-30774-7-git-send-email-steve.capper@linaro.org
State Superseded

Commit Message

Steve Capper May 8, 2013, 9:52 a.m. UTC
In paging_init the memblock limit is set to restrict any addresses
returned by early_alloc to fit within the initial direct kernel
mapping in swapper_pg_dir. This allows map_mem to allocate puds,
pmds and ptes from the initial direct kernel mapping.

The limit stays low after paging_init() though, meaning any
bootmem allocations will be from a restricted subset of memory.
Gigabyte huge pages, for instance, are normally allocated from
bootmem as their order (18) is too large for the default buddy
allocator (MAX_ORDER = 11).

This patch restores the memblock limit when map_mem has finished,
allowing gigabyte huge pages (and other objects) to be allocated
from all of bootmem.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm64/mm/mmu.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

Comments

Catalin Marinas May 16, 2013, 1:52 p.m. UTC | #1
On Wed, May 08, 2013 at 10:52:38AM +0100, Steve Capper wrote:
> In paging_init the memblock limit is set to restrict any addresses
> returned by early_alloc to fit within the initial direct kernel
> mapping in swapper_pg_dir. This allows map_mem to allocate puds,
> pmds and ptes from the initial direct kernel mapping.
> 
> The limit stays low after paging_init() though, meaning any
> bootmem allocations will be from a restricted subset of memory.
> Gigabyte huge pages, for instance, are normally allocated from
> bootmem as their order (18) is too large for the default buddy
> allocator (MAX_ORDER = 11).
> 
> This patch restores the memblock limit when map_mem has finished,
> allowing gigabyte huge pages (and other objects) to be allocated
> from all of bootmem.
> 
> Signed-off-by: Steve Capper <steve.capper@linaro.org>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Patch

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 70b8cd4..d23188c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -297,6 +297,16 @@  static void __init map_mem(void)
 {
 	struct memblock_region *reg;
 
+	/*
+	 * Temporarily limit the memblock range. We need to do this as
+	 * create_mapping requires puds, pmds and ptes to be allocated from
+	 * memory addressable from the initial direct kernel mapping.
+	 *
+	 * The initial direct kernel mapping, located at swapper_pg_dir,
+	 * gives us PGDIR_SIZE memory starting from PHYS_OFFSET (aligned).
+	 */
+	memblock_set_current_limit((PHYS_OFFSET & PGDIR_MASK) + PGDIR_SIZE);
+
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
 		phys_addr_t start = reg->base;
@@ -307,6 +317,9 @@  static void __init map_mem(void)
 
 		create_mapping(start, __phys_to_virt(start), end - start);
 	}
+
+	/* Limit no longer required. */
+	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
 }
 
 /*
@@ -317,12 +330,6 @@  void __init paging_init(void)
 {
 	void *zero_page;
 
-	/*
-	 * Maximum PGDIR_SIZE addressable via the initial direct kernel
-	 * mapping in swapper_pg_dir.
-	 */
-	memblock_set_current_limit((PHYS_OFFSET & PGDIR_MASK) + PGDIR_SIZE);
-
 	init_mem_pgprot();
 	map_mem();