From patchwork Tue Jun 14 12:07:01 2016
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 69995
Delivered-To: patch@linaro.org
From: Julien Grall <julien.grall@arm.com>
To: xen-devel@lists.xen.org
Date: Tue, 14 Jun 2016 13:07:01 +0100
Message-Id: <1465906027-16614-3-git-send-email-julien.grall@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1465906027-16614-1-git-send-email-julien.grall@arm.com>
References: <1465906027-16614-1-git-send-email-julien.grall@arm.com>
Cc: sstabellini@kernel.org, Wei Liu, George Dunlap, Andrew Cooper,
    Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant, Jan Beulich,
    wei.chen@linaro.org
Subject: [Xen-devel] [PATCH 2/8] xen: Use typesafe gfn/mfn in guest_physmap_* helpers
List-Id: Xen developer discussion
Also rename some variables to gfn or mfn when it does not require much
rework.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
Cc: Stefano Stabellini
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Paul Durrant
Cc: George Dunlap
Cc: Ian Jackson
Cc: Konrad Rzeszutek Wilk
Cc: Tim Deegan
Cc: Wei Liu
---
 xen/arch/arm/domain_build.c        |  2 +-
 xen/arch/arm/mm.c                  | 10 +++++-----
 xen/arch/arm/p2m.c                 | 20 ++++++++++----------
 xen/arch/x86/domain.c              |  5 +++--
 xen/arch/x86/domain_build.c        |  6 +++---
 xen/arch/x86/hvm/ioreq.c           |  8 ++++----
 xen/arch/x86/mm.c                  | 12 +++++++-----
 xen/arch/x86/mm/p2m.c              | 32 ++++++++++++++++++------------
 xen/common/grant_table.c           |  7 ++++---
 xen/common/memory.c                | 32 +++++++++++++++++---------------
 xen/drivers/passthrough/arm/smmu.c |  4 ++--
 xen/include/asm-arm/p2m.h          | 12 ++++++------
 xen/include/asm-x86/p2m.h          | 11 +++++------
 xen/include/xen/mm.h               |  2 +-
 14 files changed, 88 insertions(+), 75 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 2e4c295..02b4539 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -125,7 +125,7 @@ static bool_t insert_11_bank(struct domain *d,
         goto fail;
     }
 
-    res = guest_physmap_add_page(d, spfn, spfn, order);
+    res = guest_physmap_add_page(d, _gfn(spfn), _mfn(spfn), order);
     if ( res )
         panic("Failed map pages to DOM0: %d", res);
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2ec211b..5ab9b75 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1153,7 +1153,7 @@ int xenmem_add_to_physmap_one(
     }
 
     /* Map at new location. */
-    rc = guest_physmap_add_entry(d, gpfn, mfn, 0, t);
+    rc = guest_physmap_add_entry(d, _gfn(gpfn), _mfn(mfn), 0, t);
 
     /* If we fail to add the mapping, we need to drop the reference we
      * took earlier on foreign pages */
@@ -1282,8 +1282,8 @@ int create_grant_host_mapping(unsigned long addr, unsigned long frame,
     if ( flags & GNTMAP_readonly )
         t = p2m_grant_map_ro;
 
-    rc = guest_physmap_add_entry(current->domain, addr >> PAGE_SHIFT,
-                                 frame, 0, t);
+    rc = guest_physmap_add_entry(current->domain, _gfn(addr >> PAGE_SHIFT),
+                                 _mfn(frame), 0, t);
 
     if ( rc )
         return GNTST_general_error;
@@ -1294,13 +1294,13 @@ int create_grant_host_mapping(unsigned long addr, unsigned long frame,
 int replace_grant_host_mapping(unsigned long addr, unsigned long mfn,
                                unsigned long new_addr, unsigned int flags)
 {
-    unsigned long gfn = (unsigned long)(addr >> PAGE_SHIFT);
+    gfn_t gfn = _gfn(addr >> PAGE_SHIFT);
     struct domain *d = current->domain;
 
     if ( new_addr != 0 || (flags & GNTMAP_contains_pte) )
         return GNTST_general_error;
 
-    guest_physmap_remove_page(d, gfn, mfn, 0);
+    guest_physmap_remove_page(d, gfn, _mfn(mfn), 0);
 
     return GNTST_okay;
 }
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 867e294..decec0d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1290,26 +1290,26 @@ int map_dev_mmio_region(struct domain *d,
 }
 
 int guest_physmap_add_entry(struct domain *d,
-                            unsigned long gpfn,
-                            unsigned long mfn,
+                            gfn_t gfn,
+                            mfn_t mfn,
                             unsigned long page_order,
                             p2m_type_t t)
 {
     return apply_p2m_changes(d, INSERT,
-                             pfn_to_paddr(gpfn),
-                             pfn_to_paddr(gpfn + (1 << page_order)),
-                             pfn_to_paddr(mfn), MATTR_MEM, 0, t,
+                             pfn_to_paddr(gfn_x(gfn)),
+                             pfn_to_paddr(gfn_x(gfn) + (1 << page_order)),
+                             pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, t,
                              d->arch.p2m.default_access);
 }
 
 void guest_physmap_remove_page(struct domain *d,
-                               unsigned long gpfn,
-                               unsigned long mfn, unsigned int page_order)
+                               gfn_t gfn,
+                               mfn_t mfn, unsigned int page_order)
 {
     apply_p2m_changes(d, REMOVE,
-                      pfn_to_paddr(gpfn),
-                      pfn_to_paddr(gpfn + (1 << page_order)),
-                      pfn_to_paddr(mfn), MATTR_MEM, 0, p2m_invalid,
+                      pfn_to_paddr(gfn_x(gfn)),
+                      pfn_to_paddr(gfn_x(gfn) + (1 << page_order)),
+                      pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, p2m_invalid,
                       d->arch.p2m.default_access);
 }
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 989bc74..ff82818 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -802,9 +802,10 @@ int arch_domain_soft_reset(struct domain *d)
         ret = -ENOMEM;
         goto exit_put_gfn;
     }
 
-    guest_physmap_remove_page(d, gfn, mfn, PAGE_ORDER_4K);
+    guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), PAGE_ORDER_4K);
 
-    ret = guest_physmap_add_page(d, gfn, page_to_mfn(new_page), PAGE_ORDER_4K);
+    ret = guest_physmap_add_page(d, _gfn(gfn), _mfn(page_to_mfn(new_page)),
+                                 PAGE_ORDER_4K);
     if ( ret )
     {
         printk(XENLOG_G_ERR "Failed to add a page to replace"
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index b29c377..0a02d65 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -427,7 +427,7 @@ static __init void pvh_add_mem_mapping(struct domain *d, unsigned long gfn,
         if ( !iomem_access_permitted(d, mfn + i, mfn + i) )
         {
             omfn = get_gfn_query_unlocked(d, gfn + i, &t);
-            guest_physmap_remove_page(d, gfn + i, mfn_x(omfn), PAGE_ORDER_4K);
+            guest_physmap_remove_page(d, _gfn(gfn + i), omfn, PAGE_ORDER_4K);
             continue;
         }
 
@@ -530,7 +530,7 @@ static __init void pvh_map_all_iomem(struct domain *d, unsigned long nr_pages)
             if ( get_gpfn_from_mfn(mfn) != INVALID_M2P_ENTRY )
                 continue;
 
-            rc = guest_physmap_add_page(d, start_pfn, mfn, 0);
+            rc = guest_physmap_add_page(d, _gfn(start_pfn), _mfn(mfn), 0);
             if ( rc != 0 )
                 panic("Unable to add gpfn %#lx mfn %#lx to Dom0 physmap: %d",
                       start_pfn, mfn, rc);
@@ -605,7 +605,7 @@ static __init void dom0_update_physmap(struct domain *d, unsigned long pfn,
 {
     if ( is_pvh_domain(d) )
     {
-        int rc = guest_physmap_add_page(d, pfn, mfn, 0);
+        int rc = guest_physmap_add_page(d, _gfn(pfn), _mfn(mfn), 0);
         BUG_ON(rc);
         return;
     }
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 333ce14..7148ac4 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -267,8 +267,8 @@ bool_t is_ioreq_server_page(struct domain *d, const struct page_info *page)
 static void hvm_remove_ioreq_gmfn(
     struct domain *d, struct hvm_ioreq_page *iorp)
 {
-    guest_physmap_remove_page(d, iorp->gmfn,
-                              page_to_mfn(iorp->page), 0);
+    guest_physmap_remove_page(d, _gfn(iorp->gmfn),
+                              _mfn(page_to_mfn(iorp->page)), 0);
     clear_page(iorp->va);
 }
 
@@ -279,8 +279,8 @@ static int hvm_add_ioreq_gmfn(
 
     clear_page(iorp->va);
 
-    rc = guest_physmap_add_page(d, iorp->gmfn,
-                                page_to_mfn(iorp->page), 0);
+    rc = guest_physmap_add_page(d, _gfn(iorp->gmfn),
+                                _mfn(page_to_mfn(iorp->page)), 0);
     if ( rc == 0 )
         paging_mark_dirty(d, page_to_mfn(iorp->page));
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8d10a3e..a6e07f2 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4208,7 +4208,8 @@ static int create_grant_p2m_mapping(uint64_t addr, unsigned long frame,
     else
         p2mt = p2m_grant_map_rw;
     rc = guest_physmap_add_entry(current->domain,
-                                 addr >> PAGE_SHIFT, frame, PAGE_ORDER_4K, p2mt);
+                                 _gfn(addr >> PAGE_SHIFT),
+                                 _mfn(frame), PAGE_ORDER_4K, p2mt);
     if ( rc )
         return GNTST_general_error;
     else
@@ -4265,7 +4266,7 @@ static int replace_grant_p2m_mapping(
                  type, mfn_x(old_mfn), frame);
         return GNTST_general_error;
     }
-    guest_physmap_remove_page(d, gfn, frame, PAGE_ORDER_4K);
+    guest_physmap_remove_page(d, _gfn(gfn), _mfn(frame), PAGE_ORDER_4K);
 
     put_gfn(d, gfn);
     return GNTST_okay;
@@ -4850,7 +4851,8 @@ int xenmem_add_to_physmap_one(
     {
         if ( is_xen_heap_mfn(prev_mfn) )
             /* Xen heap frames are simply unhooked from this phys slot. */
-            guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
+            guest_physmap_remove_page(d, _gfn(gpfn), _mfn(prev_mfn),
+                                      PAGE_ORDER_4K);
         else
             /* Normal domain memory is freed, to avoid leaking memory. */
             guest_remove_page(d, gpfn);
@@ -4864,10 +4866,10 @@ int xenmem_add_to_physmap_one(
     if ( space == XENMAPSPACE_gmfn || space == XENMAPSPACE_gmfn_range )
         ASSERT( old_gpfn == gfn );
     if ( old_gpfn != INVALID_M2P_ENTRY )
-        guest_physmap_remove_page(d, old_gpfn, mfn, PAGE_ORDER_4K);
+        guest_physmap_remove_page(d, _gfn(old_gpfn), _mfn(mfn), PAGE_ORDER_4K);
 
     /* Map at new location. */
-    rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
+    rc = guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn), PAGE_ORDER_4K);
 
     /* In the XENMAPSPACE_gmfn, we took a ref of the gfn at the top */
     if ( space == XENMAPSPACE_gmfn || space == XENMAPSPACE_gmfn_range )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9b19769..faf6e13 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -665,21 +665,21 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
 }
 
 int
-guest_physmap_remove_page(struct domain *d, unsigned long gfn,
-                          unsigned long mfn, unsigned int page_order)
+guest_physmap_remove_page(struct domain *d, gfn_t gfn,
+                          mfn_t mfn, unsigned int page_order)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
     gfn_lock(p2m, gfn, page_order);
-    rc = p2m_remove_page(p2m, gfn, mfn, page_order);
+    rc = p2m_remove_page(p2m, gfn_x(gfn), mfn_x(mfn), page_order);
     gfn_unlock(p2m, gfn, page_order);
     return rc;
 }
 
-int
-guest_physmap_add_entry(struct domain *d, unsigned long gfn,
-                        unsigned long mfn, unsigned int page_order,
-                        p2m_type_t t)
+static int
+__guest_physmap_add_entry(struct domain *d, unsigned long gfn,
+                          unsigned long mfn, unsigned int page_order,
+                          p2m_type_t t)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long i, ogfn;
@@ -838,6 +838,13 @@ out:
     return rc;
 }
 
+/* XXX: To be removed when __guest_physmap_add_entry will use typesafe */
+int
+guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
+                        unsigned int page_order, p2m_type_t t)
+{
+    return __guest_physmap_add_entry(d, gfn_x(gfn), mfn_x(mfn), page_order, t);
+}
 
 /*
  * Modify the p2m type of a single gfn from ot to nt.
@@ -2785,7 +2792,8 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
                     unsigned long gpfn, domid_t foreigndom)
 {
     p2m_type_t p2mt, p2mt_prev;
-    unsigned long prev_mfn, mfn;
+    mfn_t prev_mfn;
+    unsigned long mfn;
     struct page_info *page;
     int rc;
     struct domain *fdom;
@@ -2831,12 +2839,12 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
     mfn = mfn_x(page_to_mfn(page));
 
     /* Remove previously mapped page if it is present. */
-    prev_mfn = mfn_x(get_gfn(tdom, gpfn, &p2mt_prev));
-    if ( mfn_valid(_mfn(prev_mfn)) )
+    prev_mfn = get_gfn(tdom, gpfn, &p2mt_prev);
+    if ( mfn_valid(prev_mfn) )
     {
-        if ( is_xen_heap_mfn(prev_mfn) )
+        if ( is_xen_heap_mfn(mfn_x(prev_mfn)) )
             /* Xen heap frames are simply unhooked from this phys slot */
-            guest_physmap_remove_page(tdom, gpfn, prev_mfn, 0);
+            guest_physmap_remove_page(tdom, _gfn(gpfn), prev_mfn, 0);
         else
             /* Normal domain memory is freed, to avoid leaking memory. */
             guest_remove_page(tdom, gpfn);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 3c304f4..3f15543 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1818,7 +1818,7 @@ gnttab_transfer(
             goto copyback;
         }
 
-        guest_physmap_remove_page(d, gop.mfn, mfn, 0);
+        guest_physmap_remove_page(d, _gfn(gop.mfn), _mfn(mfn), 0);
         gnttab_flush_tlb(d);
 
         /* Find the target domain. */
@@ -1946,7 +1946,7 @@ gnttab_transfer(
         {
             grant_entry_v1_t *sha = &shared_entry_v1(e->grant_table, gop.ref);
 
-            guest_physmap_add_page(e, sha->frame, mfn, 0);
+            guest_physmap_add_page(e, _gfn(sha->frame), _mfn(mfn), 0);
             if ( !paging_mode_translate(e) )
                 sha->frame = mfn;
         }
@@ -1954,7 +1954,8 @@ gnttab_transfer(
         {
             grant_entry_v2_t *sha = &shared_entry_v2(e->grant_table, gop.ref);
 
-            guest_physmap_add_page(e, sha->full_page.frame, mfn, 0);
+            guest_physmap_add_page(e, _gfn(sha->full_page.frame),
+                                   _mfn(mfn), 0);
             if ( !paging_mode_translate(e) )
                 sha->full_page.frame = mfn;
         }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 3149f26..224a7b2 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -213,7 +213,7 @@ static void populate_physmap(struct memop_args *a)
                 mfn = page_to_mfn(page);
             }
 
-            guest_physmap_add_page(d, gpfn, mfn, a->extent_order);
+            guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn), a->extent_order);
 
             if ( !paging_mode_translate(d) )
             {
@@ -237,20 +237,20 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
 #ifdef CONFIG_X86
     p2m_type_t p2mt;
 #endif
-    unsigned long mfn;
+    mfn_t mfn;
 
 #ifdef CONFIG_X86
-    mfn = mfn_x(get_gfn_query(d, gmfn, &p2mt));
+    mfn = get_gfn_query(d, gmfn, &p2mt);
     if ( unlikely(p2m_is_paging(p2mt)) )
     {
-        guest_physmap_remove_page(d, gmfn, mfn, 0);
+        guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
         put_gfn(d, gmfn);
 
         /* If the page hasn't yet been paged out, there is an
          * actual page that needs to be released. */
         if ( p2mt == p2m_ram_paging_out )
         {
-            ASSERT(mfn_valid(mfn));
-            page = mfn_to_page(mfn);
+            ASSERT(mfn_valid(mfn_x(mfn)));
+            page = mfn_to_page(mfn_x(mfn));
             if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
                 put_page(page);
         }
@@ -259,14 +259,14 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     }
     if ( p2mt == p2m_mmio_direct )
     {
-        clear_mmio_p2m_entry(d, gmfn, _mfn(mfn), 0);
+        clear_mmio_p2m_entry(d, gmfn, mfn, 0);
         put_gfn(d, gmfn);
         return 1;
     }
 #else
-    mfn = mfn_x(gfn_to_mfn(d, _gfn(gmfn)));
+    mfn = gfn_to_mfn(d, _gfn(gmfn));
 #endif
 
-    if ( unlikely(!mfn_valid(mfn)) )
+    if ( unlikely(!mfn_valid(mfn_x(mfn))) )
     {
         put_gfn(d, gmfn);
         gdprintk(XENLOG_INFO, "Domain %u page number %lx invalid\n",
@@ -288,12 +288,12 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
             return 0;
         }
         /* Maybe the mfn changed */
-        mfn = mfn_x(get_gfn_query_unlocked(d, gmfn, &p2mt));
+        mfn = get_gfn_query_unlocked(d, gmfn, &p2mt);
         ASSERT(!p2m_is_shared(p2mt));
     }
 #endif /* CONFIG_X86 */
 
-    page = mfn_to_page(mfn);
+    page = mfn_to_page(mfn_x(mfn));
     if ( unlikely(!get_page(page, d)) )
     {
         put_gfn(d, gmfn);
@@ -316,7 +316,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
          test_and_clear_bit(_PGC_allocated, &page->count_info) )
         put_page(page);
 
-    guest_physmap_remove_page(d, gmfn, mfn, 0);
+    guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
 
     put_page(page);
     put_gfn(d, gmfn);
@@ -540,7 +540,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             gfn = mfn_to_gmfn(d, mfn);
             /* Pages were unshared above */
             BUG_ON(SHARED_M2P(gfn));
-            guest_physmap_remove_page(d, gfn, mfn, 0);
+            guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), 0);
             put_page(page);
         }
 
@@ -584,7 +584,8 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             }
 
             mfn = page_to_mfn(page);
-            guest_physmap_add_page(d, gpfn, mfn, exch.out.extent_order);
+            guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn),
+                                   exch.out.extent_order);
             if ( !paging_mode_translate(d) )
             {
@@ -1087,7 +1088,8 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         page = get_page_from_gfn(d, xrfp.gpfn, NULL, P2M_ALLOC);
         if ( page )
         {
-            guest_physmap_remove_page(d, xrfp.gpfn, page_to_mfn(page), 0);
+            guest_physmap_remove_page(d, _gfn(xrfp.gpfn),
+                                      _mfn(page_to_mfn(page)), 0);
             put_page(page);
         }
         else
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 54a03a6..90ee2e9 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2771,7 +2771,7 @@ static int arm_smmu_map_page(struct domain *d, unsigned long gfn,
 	 * The function guest_physmap_add_entry replaces the current mapping
 	 * if there is already one...
 	 */
-	return guest_physmap_add_entry(d, gfn, mfn, 0, t);
+	return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), 0, t);
 }
 
 static int arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
@@ -2783,7 +2783,7 @@ static int arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
 	if ( !is_domain_direct_mapped(d) )
 		return -EINVAL;
 
-	guest_physmap_remove_page(d, gfn, gfn, 0);
+	guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
 
 	return 0;
 }
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 75c65a8..0d1e61e 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -160,23 +160,23 @@ int map_dev_mmio_region(struct domain *d,
                         unsigned long mfn);
 
 int guest_physmap_add_entry(struct domain *d,
-                            unsigned long gfn,
-                            unsigned long mfn,
+                            gfn_t gfn,
+                            mfn_t mfn,
                             unsigned long page_order,
                             p2m_type_t t);
 
 /* Untyped version for RAM only, for compatibility */
 static inline int guest_physmap_add_page(struct domain *d,
-                                         unsigned long gfn,
-                                         unsigned long mfn,
+                                         gfn_t gfn,
+                                         mfn_t mfn,
                                          unsigned int page_order)
 {
     return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
}
 
 void guest_physmap_remove_page(struct domain *d,
-                               unsigned long gpfn,
-                               unsigned long mfn, unsigned int page_order);
+                               gfn_t gfn,
+                               mfn_t mfn, unsigned int page_order);
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 65675a2..4ab3574 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -545,14 +545,14 @@ void p2m_teardown(struct p2m_domain *p2m);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
-int guest_physmap_add_entry(struct domain *d, unsigned long gfn,
-                            unsigned long mfn, unsigned int page_order,
+int guest_physmap_add_entry(struct domain *d, gfn_t gfn,
+                            mfn_t mfn, unsigned int page_order,
                             p2m_type_t t);
 
 /* Untyped version for RAM only, for compatibility */
 static inline int guest_physmap_add_page(struct domain *d,
-                                         unsigned long gfn,
-                                         unsigned long mfn,
+                                         gfn_t gfn,
+                                         mfn_t mfn,
                                          unsigned int page_order)
 {
     return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
@@ -560,8 +560,7 @@ static inline int guest_physmap_add_page(struct domain *d,
 
 /* Remove a page from a domain's p2m table */
 int guest_physmap_remove_page(struct domain *d,
-                              unsigned long gfn,
-                              unsigned long mfn, unsigned int page_order);
+                              gfn_t gfn, mfn_t mfn, unsigned int page_order);
 
 /* Set a p2m range as populate-on-demand */
 int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 3cf646a..1682d1f 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -511,7 +511,7 @@ int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
 
 /* Returns 1 on success, 0 on error, negative if the ring
  * for event propagation is full in the presence of paging */
-int guest_remove_page(struct domain *d, unsigned long gmfn);
+int guest_remove_page(struct domain *d, unsigned long gfn);
 
 #define RAM_TYPE_CONVENTIONAL 0x00000001
 #define RAM_TYPE_RESERVED     0x00000002