From patchwork Wed Sep 17 21:21:03 2014
X-Patchwork-Submitter: Ian Campbell
X-Patchwork-Id: 37544
From: Ian Campbell
Date: Wed, 17 Sep 2014 22:21:03 +0100
Message-ID: <1410988863-15238-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1410988836.23505.65.camel@citrix.com>
References: <1410988836.23505.65.camel@citrix.com>
Cc: Ian Campbell, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
    tim@xen.org, Roy Franz, Jan Beulich, Fu Wei
Subject: [Xen-devel] [PATCH for-4.5 v3 3/3] xen: arm: Enable physical address space compression (PDX) on arm

This allows us to support sparse physical address maps, which we previously
could not do because the frametable would end up taking up an enormous
fraction of RAM.

On a fast model which has RAM at 0x80000000-0x100000000 and
0x880000000-0x900000000 this reduces the size of the frametable from 478M
to 84M.
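For illustration, a back-of-the-envelope check of those numbers, assuming 4K
pages and -- purely as a working figure -- a 56-byte struct page_info on
arm64:

    flat frametable: (0x900000000 - 0x80000000) >> 12 = 0x880000 entries,
                     0x880000 * 56 bytes ~= 476M
    with PDX:        address bits 32-34 (pfn bits 20-22) never contain RAM
                     in this layout, so they are squeezed out of the index,
                     leaving 0x180000 entries, 0x180000 * 56 bytes ~= 84M

which is roughly in line with the 478M -> 84M figures quoted above.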
Signed-off-by: Ian Campbell
Reviewed-by: Julien Grall
---
v3: - Spelling in commit log and correct $SUBJ
    - Remove stray trailing blank
    - Drop no longer required check that we use all banks
    - Use new pdx_{init_mask,region_mask} functions
v2: - Implement support for arm32 (tested on models).
    - Simplify the arm64 stuff a bit
    - Fixed some bugs
---
 xen/arch/arm/Rules.mk        |    1 +
 xen/arch/arm/mm.c            |   17 +++---
 xen/arch/arm/setup.c         |  132 ++++++++++++++++--------------------------
 xen/include/asm-arm/config.h |   11 +++-
 xen/include/asm-arm/mm.h     |   37 ++++++------
 xen/include/asm-arm/numa.h   |    2 +-
 6 files changed, 89 insertions(+), 111 deletions(-)

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index 8658176..26fafa2 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -10,6 +10,7 @@ HAS_DEVICE_TREE := y
 HAS_VIDEO := y
 HAS_ARM_HDLCD := y
 HAS_PASSTHROUGH := y
+HAS_PDX := y

 CFLAGS += -I$(BASEDIR)/include

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0a243b0..8167580 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -138,7 +138,7 @@ unsigned long xenheap_mfn_start __read_mostly = ~0UL;
 unsigned long xenheap_mfn_end __read_mostly;
 unsigned long xenheap_virt_end __read_mostly;

-unsigned long frametable_base_mfn __read_mostly;
+unsigned long frametable_base_pdx __read_mostly;
 unsigned long frametable_virt_end __read_mostly;

 unsigned long max_page;
@@ -665,7 +665,7 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
     /* Align to previous 1GB boundary */
     base_mfn &= ~((FIRST_SIZE>>PAGE_SHIFT)-1);

-    offset = base_mfn - xenheap_mfn_start;
+    offset = pfn_to_pdx(base_mfn - xenheap_mfn_start);
     vaddr = DIRECTMAP_VIRT_START + offset*PAGE_SIZE;

     while ( base_mfn < end_mfn )
@@ -716,7 +716,8 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
 void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
 {
     unsigned long nr_pages = (pe - ps) >> PAGE_SHIFT;
-    unsigned long frametable_size = nr_pages * sizeof(struct page_info);
+    unsigned long nr_pdxs = pfn_to_pdx(nr_pages);
+    unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
     unsigned long base_mfn;
 #ifdef CONFIG_ARM_64
     lpae_t *second, pte;
@@ -724,7 +725,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     int i;
 #endif

-    frametable_base_mfn = ps >> PAGE_SHIFT;
+    frametable_base_pdx = pfn_to_pdx(ps >> PAGE_SHIFT);

     /* Round up to 32M boundary */
     frametable_size = (frametable_size + 0x1ffffff) & ~0x1ffffff;
@@ -745,11 +746,11 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     create_32mb_mappings(xen_second, FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
 #endif

-    memset(&frame_table[0], 0, nr_pages * sizeof(struct page_info));
-    memset(&frame_table[nr_pages], -1,
-           frametable_size - (nr_pages * sizeof(struct page_info)));
+    memset(&frame_table[0], 0, nr_pdxs * sizeof(struct page_info));
+    memset(&frame_table[nr_pdxs], -1,
+           frametable_size - (nr_pdxs * sizeof(struct page_info)));

-    frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info));
+    frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pdxs * sizeof(struct page_info));
 }

 void *__init arch_vmap_virt_end(void)
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 446b4dc..bd91ced 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -423,11 +423,47 @@ static paddr_t __init get_xen_paddr(void)
     return paddr;
 }

+static void init_pdx(void)
+{
+    paddr_t bank_start, bank_size, bank_end;
+
+    u64 mask = pdx_init_mask(bootinfo.mem.bank[0].start);
+    int bank;
+
+    for ( bank = 0 ; bank < bootinfo.mem.nr_banks; bank++ )
+    {
+        bank_start = bootinfo.mem.bank[bank].start;
+        bank_size = bootinfo.mem.bank[bank].size;
+
+        mask |= bank_start | pdx_region_mask(bank_start, bank_size);
+    }
+
+    for ( bank = 0 ; bank < bootinfo.mem.nr_banks; bank++ )
+    {
+        bank_start = bootinfo.mem.bank[bank].start;
+        bank_size = bootinfo.mem.bank[bank].size;
+
+        if (~mask & pdx_region_mask(bank_start, bank_size))
+            mask = 0;
+    }
+
+    pfn_pdx_hole_setup(mask >> PAGE_SHIFT);
+
+    for ( bank = 0 ; bank < bootinfo.mem.nr_banks; bank++ )
+    {
+        bank_start = bootinfo.mem.bank[bank].start;
+        bank_size = bootinfo.mem.bank[bank].size;
+        bank_end = bank_start + bank_size;
+
+        set_pdx_range(paddr_to_pfn(bank_start),
+                      paddr_to_pfn(bank_end));
+    }
+}
+
 #ifdef CONFIG_ARM_32
 static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 {
     paddr_t ram_start, ram_end, ram_size;
-    paddr_t contig_start, contig_end;
     paddr_t s, e;
     unsigned long ram_pages;
     unsigned long heap_pages, xenheap_pages, domheap_pages;
@@ -439,24 +475,11 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     if ( !bootinfo.mem.nr_banks )
         panic("No memory bank");

-    /*
-     * We are going to accumulate two regions here.
-     *
-     * The first is the bounds of the initial memory region which is
-     * contiguous with the first bank. For simplicity the xenheap is
-     * always allocated from this region.
-     *
-     * The second is the complete bounds of the regions containing RAM
-     * (ie. from the lowest RAM address to the highest), which
-     * includes any holes.
-     *
-     * We also track the number of actual RAM pages (i.e. not counting
-     * the holes).
-     */
-    ram_size = bootinfo.mem.bank[0].size;
+    init_pdx();

-    contig_start = ram_start = bootinfo.mem.bank[0].start;
-    contig_end = ram_end = ram_start + ram_size;
+    ram_start = bootinfo.mem.bank[0].start;
+    ram_size = bootinfo.mem.bank[0].size;
+    ram_end = ram_start + ram_size;

     for ( i = 1; i < bootinfo.mem.nr_banks; i++ )
     {
@@ -464,41 +487,9 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
         paddr_t bank_size = bootinfo.mem.bank[i].size;
         paddr_t bank_end = bank_start + bank_size;

-        paddr_t new_ram_size = ram_size + bank_size;
-        paddr_t new_ram_start = min(ram_start,bank_start);
-        paddr_t new_ram_end = max(ram_end,bank_end);
-
-        /*
-         * If the new bank is contiguous with the initial contiguous
-         * region then incorporate it into the contiguous region.
-         *
-         * Otherwise we allow non-contigious regions so long as at
-         * least half of the total RAM region actually contains
-         * RAM. We actually fudge this slightly and require that
-         * adding the current bank does not cause us to violate this
-         * restriction.
-         *
-         * This restriction ensures that the frametable (which is not
-         * currently sparse) does not consume all available RAM.
-         */
-        if ( bank_start == contig_end )
-            contig_end = bank_end;
-        else if ( bank_end == contig_start )
-            contig_start = bank_start;
-        else if ( 2 * new_ram_size < new_ram_end - new_ram_start )
-            /* Would create memory map which is too sparse, so stop here. */
-            break;
-
-        ram_size = new_ram_size;
-        ram_start = new_ram_start;
-        ram_end = new_ram_end;
-    }
-
-    if ( i != bootinfo.mem.nr_banks )
-    {
-        printk("WARNING: only using %d out of %d memory banks\n",
-               i, bootinfo.mem.nr_banks);
-        bootinfo.mem.nr_banks = i;
+        ram_size = ram_size + bank_size;
+        ram_start = min(ram_start,bank_start);
+        ram_end = max(ram_end,bank_end);
     }

     total_pages = ram_pages = ram_size >> PAGE_SHIFT;
@@ -520,8 +511,7 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)

     do
     {
-        /* xenheap is always in the initial contiguous region */
-        e = consider_modules(contig_start, contig_end,
+        e = consider_modules(ram_start, ram_end,
                              pfn_to_paddr(xenheap_pages),
                              32<<20, 0);
         if ( e )
@@ -616,6 +606,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     unsigned long dtb_pages;
     void *fdt;

+    init_pdx();
+
     total_pages = 0;
     for ( bank = 0 ; bank < bootinfo.mem.nr_banks; bank++ )
     {
@@ -624,26 +616,9 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
         paddr_t bank_size = bootinfo.mem.bank[bank].size;
         paddr_t bank_end = bank_start + bank_size;
         paddr_t s, e;

-        paddr_t new_ram_size = ram_size + bank_size;
-        paddr_t new_ram_start = min(ram_start,bank_start);
-        paddr_t new_ram_end = max(ram_end,bank_end);
-
-        /*
-         * We allow non-contigious regions so long as at least half of
-         * the total RAM region actually contains RAM. We actually
-         * fudge this slightly and require that adding the current
-         * bank does not cause us to violate this restriction.
-         *
-         * This restriction ensures that the frametable (which is not
-         * currently sparse) does not consume all available RAM.
-         */
-        if ( bank > 0 && 2 * new_ram_size < new_ram_end - new_ram_start )
-            /* Would create memory map which is too sparse, so stop here. */
-            break;
-
-        ram_start = new_ram_start;
-        ram_end = new_ram_end;
-        ram_size = new_ram_size;
+        ram_size = ram_size + bank_size;
+        ram_start = min(ram_start,bank_start);
+        ram_end = max(ram_end,bank_end);

         setup_xenheap_mappings(bank_start>>PAGE_SHIFT, bank_size>>PAGE_SHIFT);
@@ -669,13 +644,6 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
         }
     }

-    if ( bank != bootinfo.mem.nr_banks )
-    {
-        printk("WARNING: only using %d out of %d memory banks\n",
-               bank, bootinfo.mem.nr_banks);
-        bootinfo.mem.nr_banks = bank;
-    }
-
     total_pages += ram_size >> PAGE_SHIFT;

     xenheap_virt_end = XENHEAP_VIRT_START + ram_end - ram_start;
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 1c3abcf..59b2887 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -126,7 +126,12 @@
 #define CONFIG_SEPARATE_XENHEAP 1

 #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
-#define VMAP_VIRT_START  _AT(vaddr_t,0x10000000)
+#define FRAMETABLE_SIZE        MB(128-32)
+#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
+#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
+
+#define VMAP_VIRT_START  _AT(vaddr_t,0x10000000)
+
 #define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
 #define XENHEAP_VIRT_END       _AT(vaddr_t,0x7fffffff)
 #define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
@@ -149,7 +154,9 @@
 #define VMAP_VIRT_END    (VMAP_VIRT_START + GB(1) - 1)

 #define FRAMETABLE_VIRT_START  GB(32)
-#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + GB(32) - 1)
+#define FRAMETABLE_SIZE        GB(32)
+#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
+#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)

 #define DIRECTMAP_VIRT_START   SLOT0(256)
 #define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 9fa80a4..120500f 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -6,6 +6,7 @@
 #include <xen/kernel.h>
 #include <asm/page.h>
 #include <public/xen.h>
+#include <xen/pdx.h>

 /* Align Xen to a 2 MiB boundary. */
 #define XEN_PADDR_ALIGN (1 << 21)
@@ -140,12 +141,14 @@ extern void share_xen_page_with_privileged_guests(
     struct page_info *page, int readonly);

 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
-/* MFN of the first page in the frame table. */
-extern unsigned long frametable_base_mfn;
+/* PDX of the first page in the frame table. */
+extern unsigned long frametable_base_pdx;

 extern unsigned long max_page;
 extern unsigned long total_pages;

+#define PDX_GROUP_SHIFT SECOND_SHIFT
+
 /* Boot-time pagetable setup */
 extern void setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr);
 /* Remove early mappings */
@@ -184,20 +187,15 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
     return ioremap_attr(start, len, PAGE_HYPERVISOR_WC);
 }

+/* XXX -- account for base */
 #define mfn_valid(mfn) ({                                                   \
     unsigned long __m_f_n = (mfn);                                          \
-    likely(__m_f_n >= frametable_base_mfn && __m_f_n < max_page);           \
+    likely(pfn_to_pdx(__m_f_n) >= frametable_base_pdx && __mfn_valid(__m_f_n)); \
 })

-#define max_pdx                 max_page
-#define pfn_to_pdx(pfn)         (pfn)
-#define pdx_to_pfn(pdx)         (pdx)
-#define virt_to_pdx(va)         virt_to_mfn(va)
-#define pdx_to_virt(pdx)        mfn_to_virt(pdx)
-
 /* Convert between machine frame numbers and page-info structures. */
-#define mfn_to_page(mfn) (frame_table + (pfn_to_pdx(mfn) - frametable_base_mfn))
-#define page_to_mfn(pg) pdx_to_pfn((unsigned long)((pg) - frame_table) + frametable_base_mfn)
+#define mfn_to_page(mfn) (frame_table + (pfn_to_pdx(mfn) - frametable_base_pdx))
+#define page_to_mfn(pg) pdx_to_pfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
 #define __page_to_mfn(pg)  page_to_mfn(pg)
 #define __mfn_to_page(mfn) mfn_to_page(mfn)

@@ -230,9 +228,11 @@ static inline void *maddr_to_virt(paddr_t ma)
 #else
 static inline void *maddr_to_virt(paddr_t ma)
 {
-    ASSERT((ma >> PAGE_SHIFT) < (DIRECTMAP_SIZE >> PAGE_SHIFT));
-    ma -= pfn_to_paddr(xenheap_mfn_start);
-    return (void *)(unsigned long) ma + DIRECTMAP_VIRT_START;
+    ASSERT(pfn_to_pdx(ma >> PAGE_SHIFT) < (DIRECTMAP_SIZE >> PAGE_SHIFT));
+    return (void *)(DIRECTMAP_VIRT_START -
+                    pfn_to_paddr(xenheap_mfn_start) +
+                    ((ma & ma_va_bottom_mask) |
+                     ((ma & ma_top_mask) >> pfn_pdx_hole_shift)));
 }
 #endif

@@ -258,13 +258,14 @@ static inline int gvirt_to_maddr(vaddr_t va, paddr_t *pa, unsigned int flags)
 static inline struct page_info *virt_to_page(const void *v)
 {
     unsigned long va = (unsigned long)v;
+    unsigned long pdx;
+
     ASSERT(va >= XENHEAP_VIRT_START);
     ASSERT(va < xenheap_virt_end);

-    return frame_table
-        + ((va - XENHEAP_VIRT_START) >> PAGE_SHIFT)
-        + xenheap_mfn_start
-        - frametable_base_mfn;
+    pdx = (va - XENHEAP_VIRT_START) >> PAGE_SHIFT;
+    pdx += pfn_to_pdx(xenheap_mfn_start);
+    return frame_table + pdx - frametable_base_pdx;
 }

 static inline void *page_to_virt(const struct page_info *pg)
diff --git a/xen/include/asm-arm/numa.h b/xen/include/asm-arm/numa.h
index 2c019d7..06a9d5a 100644
--- a/xen/include/asm-arm/numa.h
+++ b/xen/include/asm-arm/numa.h
@@ -12,7 +12,7 @@ static inline __attribute__((pure)) int phys_to_nid(paddr_t addr)

 /* XXX: implement NUMA support */
 #define node_spanned_pages(nid) (total_pages)
-#define node_start_pfn(nid) (frametable_base_mfn)
+#define node_start_pfn(nid) (pdx_to_pfn(frametable_base_pdx))
 #define __node_distance(a, b) (20)

 #endif /* __ARCH_ARM_NUMA_H */
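
For illustration, the pfn <-> pdx conversions relied on in the mm.h hunks
boil down to dropping a run of physical-address bits that never contain RAM.
A minimal standalone sketch of that idea, with the hole shift and mask values
hand-derived for the fast-model layout above (this is a simplification, not
the hypervisor's implementation):

    #include <assert.h>
    #include <stdio.h>

    /* Illustrative stand-ins for the state computed by pfn_pdx_hole_setup();
     * the values set in main() are hand-derived for banks at 0x80000000 and
     * 0x880000000, not read from a running hypervisor. */
    static unsigned long long hole_shift;   /* number of elided pfn bits */
    static unsigned long long bottom_mask;  /* pfn bits below the hole   */
    static unsigned long long top_mask;     /* pfn bits above the hole   */

    static unsigned long long pfn_to_pdx(unsigned long long pfn)
    {
        return (pfn & bottom_mask) | ((pfn & top_mask) >> hole_shift);
    }

    static unsigned long long pdx_to_pfn(unsigned long long pdx)
    {
        return (pdx & bottom_mask) | ((pdx & ~bottom_mask) << hole_shift);
    }

    int main(void)
    {
        /* Address bits 32-34 (pfn bits 20-22) are zero for every RAM byte
         * in this layout, so three pfn bits can be squeezed out. */
        hole_shift  = 3;
        bottom_mask = (1ULL << 20) - 1;
        top_mask    = ~((1ULL << 23) - 1);

        unsigned long long pfn = 0x880000000ULL >> 12;  /* first pfn of bank 1 */
        unsigned long long pdx = pfn_to_pdx(pfn);

        printf("pfn %#llx -> pdx %#llx\n", pfn, pdx);   /* 0x880000 -> 0x180000 */
        assert(pdx_to_pfn(pdx) == pfn);
        return 0;
    }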