From patchwork Fri Sep 12 15:15:22 2014
X-Patchwork-Submitter: Ian Campbell
X-Patchwork-Id: 37348
From: Ian Campbell
To: xen-devel@lists.xen.org
Date: Fri, 12 Sep 2014 16:15:22 +0100
Message-ID: <1410534923-17209-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1410534866.27902.12.camel@kazak.uk.xensource.com>
References: <1410534866.27902.12.camel@kazak.uk.xensource.com>
Cc: Ian Campbell, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
    tim@xen.org, Roy Franz, Fu Wei
Subject: [Xen-devel] [PATCH for-4.5 v2 1/2] xen: refactor physical address
 space compression support into common code

The "pdx compression" functionality will be useful on ARM as well. Move
the code to common code+header and introduce HAS_PDX to control when it
is built.

L2_PAGETABLE_SHIFT is x86 specific, so introduce PDX_GROUP_SHIFT to
abstract it out.

ARM has no need for superpage compression (yet?) and lacks
SUPERPAGE_SHIFT so those functions (spage_to_mfn et al) are not moved.
No effect on x86 and no change for ARM (yet).

Signed-off-by: Ian Campbell
Acked-by: Jan Beulich
---
v2:
 - Correct closing guard comment in pdx.h
---
 xen/Rules.mk                      |    1 +
 xen/arch/x86/Rules.mk             |    1 +
 xen/arch/x86/mm.c                 |    3 --
 xen/arch/x86/setup.c              |   10 ----
 xen/arch/x86/x86_64/mm.c          |   53 --------
 xen/common/Makefile               |    1 +
 xen/common/pdx.c                  |   99 +++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/mm.h          |    4 +-
 xen/include/asm-x86/page.h        |    2 -
 xen/include/asm-x86/x86_64/page.h |   25 +---------
 xen/include/xen/pdx.h             |   47 ++++++++++++++++++
 11 files changed, 152 insertions(+), 94 deletions(-)
 create mode 100644 xen/common/pdx.c
 create mode 100644 xen/include/xen/pdx.h

diff --git a/xen/Rules.mk b/xen/Rules.mk
index b49f3c8..e2f9e36 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -59,6 +59,7 @@ CFLAGS-$(HAS_PASSTHROUGH) += -DHAS_PASSTHROUGH
 CFLAGS-$(HAS_DEVICE_TREE) += -DHAS_DEVICE_TREE
 CFLAGS-$(HAS_PCI) += -DHAS_PCI
 CFLAGS-$(HAS_IOPORTS) += -DHAS_IOPORTS
+CFLAGS-$(HAS_PDX) += -DHAS_PDX
 CFLAGS-$(frame_pointer) += -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER
 
 ifneq ($(max_phys_cpus),)
diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
index 576985e..6775cb5 100644
--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -12,6 +12,7 @@ HAS_NS16550 := y
 HAS_EHCI := y
 HAS_KEXEC := y
 HAS_GDBSX := y
+HAS_PDX := y
 xenoprof := y
 
 #
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d23cb3f..5b3f06f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -147,9 +147,6 @@ struct domain *dom_xen, *dom_io, *dom_cow;
 unsigned long max_page;
 unsigned long total_pages;
 
-unsigned long __read_mostly pdx_group_valid[BITS_TO_LONGS(
-    (FRAMETABLE_NR + PDX_GROUP_COUNT - 1) / PDX_GROUP_COUNT)] = { [0] = 1 };
-
 bool_t __read_mostly machine_to_phys_mapping_valid = 0;
 
 struct rangeset *__read_mostly mmio_ro_ranges;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 6a814cd..8c8b91f 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -392,16 +392,6 @@ static void __init setup_max_pdx(unsigned long top_page)
     max_page = pdx_to_pfn(max_pdx - 1) + 1;
 }
 
-void set_pdx_range(unsigned long smfn, unsigned long emfn)
-{
-    unsigned long idx, eidx;
-
-    idx = pfn_to_pdx(smfn) / PDX_GROUP_COUNT;
-    eidx = (pfn_to_pdx(emfn - 1) + PDX_GROUP_COUNT) / PDX_GROUP_COUNT;
-    for ( ; idx < eidx; ++idx )
-        __set_bit(idx, pdx_group_valid);
-}
-
 /* A temporary copy of the e820 map that we can mess with during bootstrap. */
 static struct e820map __initdata boot_e820;
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 4937f9a..09817fc 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -40,15 +40,6 @@
 #include
 #include
 
-/* Parameters for PFN/MADDR compression. */
-unsigned long __read_mostly max_pdx;
-unsigned long __read_mostly pfn_pdx_bottom_mask = ~0UL;
-unsigned long __read_mostly ma_va_bottom_mask = ~0UL;
-unsigned long __read_mostly pfn_top_mask = 0;
-unsigned long __read_mostly ma_top_mask = 0;
-unsigned long __read_mostly pfn_hole_mask = 0;
-unsigned int __read_mostly pfn_pdx_hole_shift = 0;
-
 unsigned int __read_mostly m2p_compat_vstart = __HYPERVISOR_COMPAT_VIRT_START;
 
 /* Enough page directories to map into the bottom 1GB. */
@@ -59,14 +50,6 @@ l2_pgentry_t __attribute__ ((__section__ (".bss.page_aligned")))
 
 l2_pgentry_t *compat_idle_pg_table_l2;
 
-int __mfn_valid(unsigned long mfn)
-{
-    return likely(mfn < max_page) &&
-           likely(!(mfn & pfn_hole_mask)) &&
-           likely(test_bit(pfn_to_pdx(mfn) / PDX_GROUP_COUNT,
-                           pdx_group_valid));
-}
-
 void *do_page_walk(struct vcpu *v, unsigned long addr)
 {
     unsigned long mfn = pagetable_get_pfn(v->arch.guest_table);
@@ -119,42 +102,6 @@ void *do_page_walk(struct vcpu *v, unsigned long addr)
     return map_domain_page(mfn) + (addr & ~PAGE_MASK);
 }
 
-void __init pfn_pdx_hole_setup(unsigned long mask)
-{
-    unsigned int i, j, bottom_shift = 0, hole_shift = 0;
-
-    /*
-     * We skip the first MAX_ORDER bits, as we never want to compress them.
-     * This guarantees that page-pointer arithmetic remains valid within
-     * contiguous aligned ranges of 2^MAX_ORDER pages. Among others, our
-     * buddy allocator relies on this assumption.
-     */
-    for ( j = MAX_ORDER-1; ; )
-    {
-        i = find_next_zero_bit(&mask, BITS_PER_LONG, j);
-        j = find_next_bit(&mask, BITS_PER_LONG, i);
-        if ( j >= BITS_PER_LONG )
-            break;
-        if ( j - i > hole_shift )
-        {
-            hole_shift = j - i;
-            bottom_shift = i;
-        }
-    }
-    if ( !hole_shift )
-        return;
-
-    printk(KERN_INFO "PFN compression on bits %u...%u\n",
-           bottom_shift, bottom_shift + hole_shift - 1);
-
-    pfn_pdx_hole_shift = hole_shift;
-    pfn_pdx_bottom_mask = (1UL << bottom_shift) - 1;
-    ma_va_bottom_mask = (PAGE_SIZE << bottom_shift) - 1;
-    pfn_hole_mask = ((1UL << hole_shift) - 1) << bottom_shift;
-    pfn_top_mask = ~(pfn_pdx_bottom_mask | pfn_hole_mask);
-    ma_top_mask = pfn_top_mask << PAGE_SHIFT;
-}
-
 /*
  * Allocate page table pages for m2p table
  */
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 3683ae3..f7d10f0 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -51,6 +51,7 @@ obj-y += tmem_xen.o
 obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += lzo.o
+obj-$(HAS_PDX) += pdx.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
 
diff --git a/xen/common/pdx.c b/xen/common/pdx.c
new file mode 100644
index 0000000..11349a7
--- /dev/null
+++ b/xen/common/pdx.c
@@ -0,0 +1,99 @@
+/******************************************************************************
+ * Original code extracted from arch/x86/x86_64/mm.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include
+#include
+#include
+#include
+
+/* Parameters for PFN/MADDR compression. */
+unsigned long __read_mostly max_pdx;
+unsigned long __read_mostly pfn_pdx_bottom_mask = ~0UL;
+unsigned long __read_mostly ma_va_bottom_mask = ~0UL;
+unsigned long __read_mostly pfn_top_mask = 0;
+unsigned long __read_mostly ma_top_mask = 0;
+unsigned long __read_mostly pfn_hole_mask = 0;
+unsigned int __read_mostly pfn_pdx_hole_shift = 0;
+
+unsigned long __read_mostly pdx_group_valid[BITS_TO_LONGS(
+    (FRAMETABLE_NR + PDX_GROUP_COUNT - 1) / PDX_GROUP_COUNT)] = { [0] = 1 };
+
+int __mfn_valid(unsigned long mfn)
+{
+    return likely(mfn < max_page) &&
+           likely(!(mfn & pfn_hole_mask)) &&
+           likely(test_bit(pfn_to_pdx(mfn) / PDX_GROUP_COUNT,
+                           pdx_group_valid));
+}
+
+void set_pdx_range(unsigned long smfn, unsigned long emfn)
+{
+    unsigned long idx, eidx;
+
+    idx = pfn_to_pdx(smfn) / PDX_GROUP_COUNT;
+    eidx = (pfn_to_pdx(emfn - 1) + PDX_GROUP_COUNT) / PDX_GROUP_COUNT;
+
+    for ( ; idx < eidx; ++idx )
+        __set_bit(idx, pdx_group_valid);
+}
+
+void __init pfn_pdx_hole_setup(unsigned long mask)
+{
+    unsigned int i, j, bottom_shift = 0, hole_shift = 0;
+
+    /*
+     * We skip the first MAX_ORDER bits, as we never want to compress them.
+     * This guarantees that page-pointer arithmetic remains valid within
+     * contiguous aligned ranges of 2^MAX_ORDER pages. Among others, our
+     * buddy allocator relies on this assumption.
+     */
+    for ( j = MAX_ORDER-1; ; )
+    {
+        i = find_next_zero_bit(&mask, BITS_PER_LONG, j);
+        j = find_next_bit(&mask, BITS_PER_LONG, i);
+        if ( j >= BITS_PER_LONG )
+            break;
+        if ( j - i > hole_shift )
+        {
+            hole_shift = j - i;
+            bottom_shift = i;
+        }
+    }
+    if ( !hole_shift )
+        return;
+
+    printk(KERN_INFO "PFN compression on bits %u...%u\n",
+           bottom_shift, bottom_shift + hole_shift - 1);
+
+    pfn_pdx_hole_shift = hole_shift;
+    pfn_pdx_bottom_mask = (1UL << bottom_shift) - 1;
+    ma_va_bottom_mask = (PAGE_SIZE << bottom_shift) - 1;
+    pfn_hole_mask = ((1UL << hole_shift) - 1) << bottom_shift;
+    pfn_top_mask = ~(pfn_pdx_bottom_mask | pfn_hole_mask);
+    ma_top_mask = pfn_top_mask << PAGE_SHIFT;
+}
+
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 7b85865..746bcf1 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -280,9 +280,7 @@ extern unsigned long max_page;
 extern unsigned long total_pages;
 void init_frametable(void);
 
-#define PDX_GROUP_COUNT ((1 << L2_PAGETABLE_SHIFT) / \
-                         (sizeof(*frame_table) & -sizeof(*frame_table)))
-extern unsigned long pdx_group_valid[];
+#define PDX_GROUP_SHIFT L2_PAGETABLE_SHIFT
 
 /* Convert between Xen-heap virtual addresses and page-info structures. */
 static inline struct page_info *__virt_to_page(const void *v)
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index ccc268d..9aa780e 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -334,8 +334,6 @@ void *alloc_xen_pagetable(void);
 void free_xen_pagetable(void *v);
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
-extern void set_pdx_range(unsigned long smfn, unsigned long emfn);
-
 /* Convert between PAT/PCD/PWT embedded in PTE flags and 3-bit cacheattr. */
 static inline uint32_t pte_flags_to_cacheattr(uint32_t flags)
 {
diff --git a/xen/include/asm-x86/x86_64/page.h b/xen/include/asm-x86/x86_64/page.h
index 3eee5b5..1d54587 100644
--- a/xen/include/asm-x86/x86_64/page.h
+++ b/xen/include/asm-x86/x86_64/page.h
@@ -35,17 +35,10 @@
 #include
 #include
 
-extern unsigned long xen_virt_end;
+#include
 
-extern unsigned long max_pdx;
-extern unsigned long pfn_pdx_bottom_mask, ma_va_bottom_mask;
-extern unsigned int pfn_pdx_hole_shift;
-extern unsigned long pfn_hole_mask;
-extern unsigned long pfn_top_mask, ma_top_mask;
-extern void pfn_pdx_hole_setup(unsigned long);
+extern unsigned long xen_virt_end;
 
-#define page_to_pdx(pg) ((pg) - frame_table)
-#define pdx_to_page(pdx) (frame_table + (pdx))
 #define spage_to_pdx(spg) (((spg) - spage_table)<<(SUPERPAGE_SHIFT-PAGE_SHIFT))
 #define pdx_to_spage(pdx) (spage_table + ((pdx)>>(SUPERPAGE_SHIFT-PAGE_SHIFT)))
 /*
@@ -57,20 +50,6 @@ extern void pfn_pdx_hole_setup(unsigned long);
 #define pdx_to_virt(pdx) ((void *)(DIRECTMAP_VIRT_START + \
                           ((unsigned long)(pdx) << PAGE_SHIFT)))
 
-extern int __mfn_valid(unsigned long mfn);
-
-static inline unsigned long pfn_to_pdx(unsigned long pfn)
-{
-    return (pfn & pfn_pdx_bottom_mask) |
-           ((pfn & pfn_top_mask) >> pfn_pdx_hole_shift);
-}
-
-static inline unsigned long pdx_to_pfn(unsigned long pdx)
-{
-    return (pdx & pfn_pdx_bottom_mask) |
-           ((pdx << pfn_pdx_hole_shift) & pfn_top_mask);
-}
-
 static inline unsigned long pfn_to_sdx(unsigned long pfn)
 {
     return pfn_to_pdx(pfn) >> (SUPERPAGE_SHIFT-PAGE_SHIFT);
diff --git a/xen/include/xen/pdx.h b/xen/include/xen/pdx.h
new file mode 100644
index 0000000..cdab0d1
--- /dev/null
+++ b/xen/include/xen/pdx.h
@@ -0,0 +1,47 @@
+#ifndef __XEN_PDX_H__
+#define __XEN_PDX_H__
+
+#ifdef HAS_PDX
+
+extern unsigned long max_pdx;
+extern unsigned long pfn_pdx_bottom_mask, ma_va_bottom_mask;
+extern unsigned int pfn_pdx_hole_shift;
+extern unsigned long pfn_hole_mask;
+extern unsigned long pfn_top_mask, ma_top_mask;
+
+#define PDX_GROUP_COUNT ((1 << PDX_GROUP_SHIFT) / \
+                         (sizeof(*frame_table) & -sizeof(*frame_table)))
+extern unsigned long pdx_group_valid[];
+
+extern void set_pdx_range(unsigned long smfn, unsigned long emfn);
+
+#define page_to_pdx(pg) ((pg) - frame_table)
+#define pdx_to_page(pdx) (frame_table + (pdx))
+
+extern int __mfn_valid(unsigned long mfn);
+
+static inline unsigned long pfn_to_pdx(unsigned long pfn)
+{
+    return (pfn & pfn_pdx_bottom_mask) |
+           ((pfn & pfn_top_mask) >> pfn_pdx_hole_shift);
+}
+
+static inline unsigned long pdx_to_pfn(unsigned long pdx)
+{
+    return (pdx & pfn_pdx_bottom_mask) |
+           ((pdx << pfn_pdx_hole_shift) & pfn_top_mask);
+}
+
+extern void pfn_pdx_hole_setup(unsigned long);
+
+#endif /* HAS_PDX */
+#endif /* __XEN_PDX_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
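
For readers new to the moved code, the compression is easy to exercise outside
the hypervisor. The sketch below is only an illustration, not part of the
patch: it reimplements the hole search with a plain bit scan instead of
find_next_zero_bit()/find_next_bit(), omits the MAX_ORDER lower bound and the
machine-address masks, and uses made-up mask and PFN values. It assumes a
64-bit unsigned long and builds as ordinary userspace C.

/* Standalone illustration of the PFN <-> PDX compression performed by
 * pfn_pdx_hole_setup(), pfn_to_pdx() and pdx_to_pfn().  Simplified: plain
 * bit scan, no MAX_ORDER lower bound, invented mask/PFN values. */
#include <stdio.h>

#define BITS_PER_LONG 64

static unsigned long pfn_pdx_bottom_mask = ~0UL;
static unsigned long pfn_top_mask, pfn_hole_mask;
static unsigned int pfn_pdx_hole_shift;

/* Find the widest run of zero bits (bounded above by a set bit) in mask
 * and derive the masks/shift used by the conversion helpers. */
static void hole_setup(unsigned long mask)
{
    unsigned int i, j, hole_shift = 0, bottom_shift = 0;

    for ( i = 0; i < BITS_PER_LONG; i = j + 1 )
    {
        while ( i < BITS_PER_LONG && (mask & (1UL << i)) )
            i++;                          /* skip set bits */
        for ( j = i; j < BITS_PER_LONG && !(mask & (1UL << j)); j++ )
            ;                             /* measure the run of clear bits */
        if ( j >= BITS_PER_LONG )
            break;                        /* trailing zeros are not a hole */
        if ( j - i > hole_shift )
        {
            hole_shift = j - i;
            bottom_shift = i;
        }
    }
    if ( !hole_shift )
        return;
    pfn_pdx_hole_shift = hole_shift;
    pfn_pdx_bottom_mask = (1UL << bottom_shift) - 1;
    pfn_hole_mask = ((1UL << hole_shift) - 1) << bottom_shift;
    pfn_top_mask = ~(pfn_pdx_bottom_mask | pfn_hole_mask);
}

/* Same arithmetic as the inline helpers moved into xen/pdx.h. */
static unsigned long pfn_to_pdx(unsigned long pfn)
{
    return (pfn & pfn_pdx_bottom_mask) |
           ((pfn & pfn_top_mask) >> pfn_pdx_hole_shift);
}

static unsigned long pdx_to_pfn(unsigned long pdx)
{
    return (pdx & pfn_pdx_bottom_mask) |
           ((pdx << pfn_pdx_hole_shift) & pfn_top_mask);
}

int main(void)
{
    /* Pretend only PFN bits 0-29 and bit 35 can ever be set for RAM,
     * leaving a 5-bit hole in bits 30-34. */
    unsigned long mask = 0x83fffffffUL;
    unsigned long pfn = 0x800001234UL, pdx;

    hole_setup(mask);
    pdx = pfn_to_pdx(pfn);
    printf("pfn %#lx -> pdx %#lx -> pfn %#lx (hole %u bits)\n",
           pfn, pdx, pdx_to_pfn(pdx), pfn_pdx_hole_shift);
    return 0;
}

With this example mask, bits 30-34 of the PFN are squeezed out: PFN
0x800001234 becomes PDX 0x40001234 and converts back unchanged, which is what
lets the frame table stay dense across a large gap in the physical map.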
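
The pdx_group_valid bookkeeping behind set_pdx_range() and __mfn_valid() can
be sketched the same way. Again only an illustration: the helpers here take
PDX values directly rather than MFNs, and PDX_GROUP_COUNT / NR_PDX are
invented constants rather than values derived from PDX_GROUP_SHIFT and
sizeof(*frame_table).

/* Standalone illustration of the validity bitmap kept per group of PDXes. */
#include <stdio.h>

#define PDX_GROUP_COUNT 4096            /* invented for the example */
#define NR_PDX          (1UL << 24)     /* invented for the example */
#define BITS_PER_LONG   64

static unsigned long pdx_group_valid[NR_PDX / PDX_GROUP_COUNT / BITS_PER_LONG];

/* Mark every PDX_GROUP_COUNT-sized group overlapping [spdx, epdx) as valid. */
static void set_pdx_range(unsigned long spdx, unsigned long epdx)
{
    unsigned long idx = spdx / PDX_GROUP_COUNT;
    unsigned long eidx = (epdx - 1 + PDX_GROUP_COUNT) / PDX_GROUP_COUNT;

    for ( ; idx < eidx; ++idx )
        pdx_group_valid[idx / BITS_PER_LONG] |= 1UL << (idx % BITS_PER_LONG);
}

/* The group test __mfn_valid() performs after its range and hole checks. */
static int pdx_group_is_valid(unsigned long pdx)
{
    unsigned long idx = pdx / PDX_GROUP_COUNT;

    return !!(pdx_group_valid[idx / BITS_PER_LONG] &
              (1UL << (idx % BITS_PER_LONG)));
}

int main(void)
{
    set_pdx_range(0x10000, 0x18000);
    printf("0x10100 -> %d, 0x20000 -> %d\n",
           pdx_group_is_valid(0x10100), pdx_group_is_valid(0x20000));
    return 0;
}

The point of the grouping is that validity only has to be tracked per
PDX_GROUP_COUNT-sized chunk of the frame table, so the bitmap itself stays
tiny even for large address spaces.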