From patchwork Thu Sep 21 12:40:28 2017
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 113250
From: Julien Grall <julien.grall@arm.com>
To: xen-devel@lists.xen.org
Date: Thu, 21 Sep 2017 13:40:28 +0100
Message-Id: <20170921124035.2410-10-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170921124035.2410-1-julien.grall@arm.com>
References: <20170921124035.2410-1-julien.grall@arm.com>
Cc: George Dunlap, Andrew Cooper, Julien Grall, Tamas K Lengyel, Jan Beulich
Subject: [Xen-devel] [PATCH v2 09/16] xen/x86: p2m: Use typesafe GFN in p2m_set_entry
List-Id: Xen developer discussion

Signed-off-by: Julien Grall
Acked-by: Andrew Cooper
Acked-by: Tamas K Lengyel
Reviewed-by: Wei Liu
---
Cc: George Dunlap
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Tamas K Lengyel

Changes in v2:
    - Add Andrew & Tamas' acked-by
    - Rename the variable gfn_t to gfn_ to avoid shadowing the type gfn_t
---
 xen/arch/x86/mm/hap/nested_hap.c |   2 +-
 xen/arch/x86/mm/mem_sharing.c    |   3 +-
 xen/arch/x86/mm/p2m-pod.c        |  36 +++++++------
 xen/arch/x86/mm/p2m.c            | 112 ++++++++++++++++++++++-----------------
 xen/include/asm-x86/p2m.h        |   2 +-
 5 files changed, 85 insertions(+), 70 deletions(-)

diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 162afed46b..346fcb53e5 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -121,7 +121,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m, gfn = (L2_gpa >> PAGE_SHIFT) & mask; mfn = _mfn((L0_gpa >> PAGE_SHIFT) & mask); - rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma); + rc = p2m_set_entry(p2m, _gfn(gfn), mfn, page_order, p2mt, p2ma); } p2m_unlock(p2m);
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 62a3899089..b856028c02 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1052,7 +1052,8 @@ int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle goto err_unlock; } - ret = p2m_set_entry(p2m, cgfn, smfn, PAGE_ORDER_4K, p2m_ram_shared, a); + ret = p2m_set_entry(p2m, _gfn(cgfn), smfn, PAGE_ORDER_4K, + p2m_ram_shared, a); /* Tempted to turn this into an assert */ if ( ret )
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index c8c8cff014..b8a51cf12a 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -565,7 +565,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order) * All PoD: Mark the whole region invalid and tell caller * we're done. */ - p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, order, p2m_invalid, + p2m_set_entry(p2m, gfn, INVALID_MFN, order, p2m_invalid, p2m->default_access); p2m->pod.entry_count -= 1UL << order; BUG_ON(p2m->pod.entry_count < 0);
@@ -609,7 +609,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order) n = 1UL << cur_order; if ( t == p2m_populate_on_demand ) { - p2m_set_entry(p2m, gfn_x(gfn) + i, INVALID_MFN, cur_order, + p2m_set_entry(p2m, gfn_add(gfn, i), INVALID_MFN, cur_order, p2m_invalid, p2m->default_access); p2m->pod.entry_count -= n; BUG_ON(p2m->pod.entry_count < 0);
@@ -631,7 +631,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order) page = mfn_to_page(mfn); - p2m_set_entry(p2m, gfn_x(gfn) + i, INVALID_MFN, cur_order, + p2m_set_entry(p2m, gfn_add(gfn, i), INVALID_MFN, cur_order, p2m_invalid, p2m->default_access); p2m_tlb_flush_sync(p2m); for ( j = 0; j < n; ++j )
@@ -680,9 +680,10 @@ void p2m_pod_dump_data(struct domain *d) * in the p2m.
*/ static int -p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn) +p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn_l) { mfn_t mfn, mfn0 = INVALID_MFN; + gfn_t gfn = _gfn(gfn_l); p2m_type_t type, type0 = 0; unsigned long * map = NULL; int ret=0, reset = 0; @@ -693,7 +694,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn) ASSERT(pod_locked_by_me(p2m)); - if ( !superpage_aligned(gfn) ) + if ( !superpage_aligned(gfn_l) ) goto out; /* Allow an extra refcount for one shadow pt mapping in shadowed domains */ @@ -717,7 +718,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn) unsigned long k; const struct page_info *page; - mfn = p2m->get_entry(p2m, _gfn(gfn + i), &type, &a, 0, + mfn = p2m->get_entry(p2m, gfn_add(gfn, i), &type, &a, 0, &cur_order, NULL); /* @@ -815,7 +816,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn) int d:16,order:16; } t; - t.gfn = gfn; + t.gfn = gfn_l; t.mfn = mfn_x(mfn); t.d = d->domain_id; t.order = 9; @@ -898,7 +899,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count) } /* Try to remove the page, restoring old mapping if it fails. */ - p2m_set_entry(p2m, gfns[i], INVALID_MFN, PAGE_ORDER_4K, + p2m_set_entry(p2m, _gfn(gfns[i]), INVALID_MFN, PAGE_ORDER_4K, p2m_populate_on_demand, p2m->default_access); /* @@ -910,7 +911,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count) unmap_domain_page(map[i]); map[i] = NULL; - p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K, + p2m_set_entry(p2m, _gfn(gfns[i]), mfns[i], PAGE_ORDER_4K, types[i], p2m->default_access); continue; @@ -937,7 +938,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count) */ if ( j < (PAGE_SIZE / sizeof(*map[i])) ) { - p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K, + p2m_set_entry(p2m, _gfn(gfns[i]), mfns[i], PAGE_ORDER_4K, types[i], p2m->default_access); } else @@ -1080,7 +1081,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn, { struct domain *d = p2m->domain; struct page_info *p = NULL; /* Compiler warnings */ - unsigned long gfn_aligned = (gfn >> order) << order; + gfn_t gfn_aligned = _gfn((gfn >> order) << order); mfn_t mfn; unsigned long i; @@ -1152,14 +1153,14 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn, for( i = 0; i < (1UL << order); i++ ) { - set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_aligned + i); + set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn_aligned) + i); paging_mark_dirty(d, mfn_add(mfn, i)); } p2m->pod.entry_count -= (1UL << order); BUG_ON(p2m->pod.entry_count < 0); - pod_eager_record(p2m, gfn_aligned, order); + pod_eager_record(p2m, gfn_x(gfn_aligned), order); if ( tb_init_done ) { @@ -1199,7 +1200,7 @@ remap_and_retry: * need promoting the gfn lock from gfn->2M superpage. 
*/ for ( i = 0; i < (1UL << order); i++ ) - p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K, + p2m_set_entry(p2m, gfn_add(gfn_aligned, i), INVALID_MFN, PAGE_ORDER_4K, p2m_populate_on_demand, p2m->default_access); if ( tb_init_done ) { @@ -1219,10 +1220,11 @@ remap_and_retry: int -guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn, +guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn_l, unsigned int order) { struct p2m_domain *p2m = p2m_get_hostp2m(d); + gfn_t gfn = _gfn(gfn_l); unsigned long i, n, pod_count = 0; int rc = 0; @@ -1231,7 +1233,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn, gfn_lock(p2m, gfn, order); - P2M_DEBUG("mark pod gfn=%#lx\n", gfn); + P2M_DEBUG("mark pod gfn=%#lx\n", gfn_l); /* Make sure all gpfns are unused */ for ( i = 0; i < (1UL << order); i += n ) @@ -1240,7 +1242,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn, p2m_access_t a; unsigned int cur_order; - p2m->get_entry(p2m, _gfn(gfn + i), &ot, &a, 0, &cur_order, NULL); + p2m->get_entry(p2m, gfn_add(gfn, i), &ot, &a, 0, &cur_order, NULL); n = 1UL << min(order, cur_order); if ( p2m_is_ram(ot) ) { diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index 35d4a15391..3fbc537da6 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -532,7 +532,7 @@ struct page_info *p2m_get_page_from_gfn( } /* Returns: 0 for success, -errno for failure */ -int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, +int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn, unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma) { struct domain *d = p2m->domain; @@ -546,8 +546,9 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, { if ( hap_enabled(d) ) { - unsigned long fn_mask = !mfn_eq(mfn, INVALID_MFN) ? - (gfn | mfn_x(mfn) | todo) : (gfn | todo); + unsigned long fn_mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0; + + fn_mask |= gfn_x(gfn) | todo; order = (!(fn_mask & ((1ul << PAGE_ORDER_1G) - 1)) && hap_has_1gb) ? PAGE_ORDER_1G : @@ -557,11 +558,11 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, else order = 0; - set_rc = p2m->set_entry(p2m, _gfn(gfn), mfn, order, p2mt, p2ma, -1); + set_rc = p2m->set_entry(p2m, gfn, mfn, order, p2mt, p2ma, -1); if ( set_rc ) rc = set_rc; - gfn += 1ul << order; + gfn = gfn_add(gfn, 1ul << order); if ( !mfn_eq(mfn, INVALID_MFN) ) mfn = mfn_add(mfn, 1ul << order); todo -= 1ul << order; @@ -652,7 +653,7 @@ int p2m_alloc_table(struct p2m_domain *p2m) /* Initialise physmap tables for slot zero. Other code assumes this. 
*/ p2m->defer_nested_flush = 1; - rc = p2m_set_entry(p2m, 0, INVALID_MFN, PAGE_ORDER_4K, + rc = p2m_set_entry(p2m, _gfn(0), INVALID_MFN, PAGE_ORDER_4K, p2m_invalid, p2m->default_access); p2m->defer_nested_flush = 0; p2m_unlock(p2m); @@ -703,10 +704,11 @@ void p2m_final_teardown(struct domain *d) static int -p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn, +p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn, unsigned int page_order) { unsigned long i; + gfn_t gfn = _gfn(gfn_l); mfn_t mfn_return; p2m_type_t t; p2m_access_t a; @@ -730,13 +732,13 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn, } ASSERT(gfn_locked_by_me(p2m, gfn)); - P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn, mfn); + P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn_l, mfn); if ( mfn_valid(_mfn(mfn)) ) { for ( i = 0; i < (1UL << page_order); i++ ) { - mfn_return = p2m->get_entry(p2m, _gfn(gfn + i), &t, &a, 0, + mfn_return = p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, NULL, NULL); if ( !p2m_is_grant(t) && !p2m_is_shared(t) && !p2m_is_foreign(t) ) set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY); @@ -901,7 +903,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn, /* Now, actually do the two-way mapping */ if ( mfn_valid(mfn) ) { - rc = p2m_set_entry(p2m, gfn_x(gfn), mfn, page_order, t, + rc = p2m_set_entry(p2m, gfn, mfn, page_order, t, p2m->default_access); if ( rc ) goto out; /* Failed to update p2m, bail without updating m2p. */ @@ -917,7 +919,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn, { gdprintk(XENLOG_WARNING, "Adding bad mfn to p2m map (%#lx -> %#lx)\n", gfn_x(gfn), mfn_x(mfn)); - rc = p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, page_order, + rc = p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid, p2m->default_access); if ( rc == 0 ) { @@ -940,11 +942,12 @@ out: * Returns: 0 for success, -errno for failure. * Resets the access permissions. */ -int p2m_change_type_one(struct domain *d, unsigned long gfn, +int p2m_change_type_one(struct domain *d, unsigned long gfn_l, p2m_type_t ot, p2m_type_t nt) { p2m_access_t a; p2m_type_t pt; + gfn_t gfn = _gfn(gfn_l); mfn_t mfn; struct p2m_domain *p2m = p2m_get_hostp2m(d); int rc; @@ -954,7 +957,7 @@ int p2m_change_type_one(struct domain *d, unsigned long gfn, gfn_lock(p2m, gfn, 0); - mfn = p2m->get_entry(p2m, _gfn(gfn), &pt, &a, 0, NULL, NULL); + mfn = p2m->get_entry(p2m, gfn, &pt, &a, 0, NULL, NULL); rc = likely(pt == ot) ? 
p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, nt, p2m->default_access) @@ -1111,7 +1114,7 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l, } P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn)); - rc = p2m_set_entry(p2m, gfn_l, mfn, order, gfn_p2mt, access); + rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access); if ( rc ) gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n", gfn_l, order, rc, mfn_x(mfn)); @@ -1146,11 +1149,12 @@ int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn, return set_typed_p2m_entry(d, gfn, mfn, order, p2m_mmio_direct, access); } -int set_identity_p2m_entry(struct domain *d, unsigned long gfn, +int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l, p2m_access_t p2ma, unsigned int flag) { p2m_type_t p2mt; p2m_access_t a; + gfn_t gfn = _gfn(gfn_l); mfn_t mfn; struct p2m_domain *p2m = p2m_get_hostp2m(d); int ret; @@ -1159,17 +1163,17 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn, { if ( !need_iommu(d) ) return 0; - return iommu_map_page(d, gfn, gfn, IOMMUF_readable|IOMMUF_writable); + return iommu_map_page(d, gfn_l, gfn_l, IOMMUF_readable|IOMMUF_writable); } gfn_lock(p2m, gfn, 0); - mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL); + mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL); if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm ) - ret = p2m_set_entry(p2m, gfn, _mfn(gfn), PAGE_ORDER_4K, + ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K, p2m_mmio_direct, p2ma); - else if ( mfn_x(mfn) == gfn && p2mt == p2m_mmio_direct && a == p2ma ) + else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma ) ret = 0; else { @@ -1180,7 +1184,7 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn, printk(XENLOG_G_WARNING "Cannot setup identity map d%d:%lx," " gfn already mapped to %lx.\n", - d->domain_id, gfn, mfn_x(mfn)); + d->domain_id, gfn_l, mfn_x(mfn)); } gfn_unlock(p2m, gfn, 0); @@ -1194,10 +1198,11 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn, * order+1 for caller to retry with order (guaranteed smaller than * the order value passed in) */ -int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn, +int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn_l, mfn_t mfn, unsigned int order) { int rc = -EINVAL; + gfn_t gfn = _gfn(gfn_l); mfn_t actual_mfn; p2m_access_t a; p2m_type_t t; @@ -1208,7 +1213,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn, return -EIO; gfn_lock(p2m, gfn, order); - actual_mfn = p2m->get_entry(p2m, _gfn(gfn), &t, &a, 0, &cur_order, NULL); + actual_mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, &cur_order, NULL); if ( cur_order < order ) { rc = cur_order + 1; @@ -1219,13 +1224,13 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn, if ( mfn_eq(actual_mfn, INVALID_MFN) || (t != p2m_mmio_direct) ) { gdprintk(XENLOG_ERR, - "gfn_to_mfn failed! gfn=%08lx type:%d\n", gfn, t); + "gfn_to_mfn failed! 
gfn=%08lx type:%d\n", gfn_l, t); goto out; } if ( mfn_x(mfn) != mfn_x(actual_mfn) ) gdprintk(XENLOG_WARNING, "no mapping between mfn %08lx and gfn %08lx\n", - mfn_x(mfn), gfn); + mfn_x(mfn), gfn_l); rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order, p2m_invalid, p2m->default_access); @@ -1235,10 +1240,11 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn, return rc; } -int clear_identity_p2m_entry(struct domain *d, unsigned long gfn) +int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l) { p2m_type_t p2mt; p2m_access_t a; + gfn_t gfn = _gfn(gfn_l); mfn_t mfn; struct p2m_domain *p2m = p2m_get_hostp2m(d); int ret; @@ -1247,13 +1253,13 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn) { if ( !need_iommu(d) ) return 0; - return iommu_unmap_page(d, gfn); + return iommu_unmap_page(d, gfn_l); } gfn_lock(p2m, gfn, 0); - mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL); - if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn ) + mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL); + if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn_l ) { ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K, p2m_invalid, p2m->default_access); @@ -1264,7 +1270,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn) gfn_unlock(p2m, gfn, 0); printk(XENLOG_G_WARNING "non-identity map d%d:%lx not cleared (mapped to %lx)\n", - d->domain_id, gfn, mfn_x(mfn)); + d->domain_id, gfn_l, mfn_x(mfn)); ret = 0; } @@ -1272,10 +1278,11 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn) } /* Returns: 0 for success, -errno for failure */ -int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn) +int set_shared_p2m_entry(struct domain *d, unsigned long gfn_l, mfn_t mfn) { struct p2m_domain *p2m = p2m_get_hostp2m(d); int rc = 0; + gfn_t gfn = _gfn(gfn_l); p2m_access_t a; p2m_type_t ot; mfn_t omfn; @@ -1285,7 +1292,7 @@ int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn) return -EIO; gfn_lock(p2m, gfn, 0); - omfn = p2m->get_entry(p2m, _gfn(gfn), &ot, &a, 0, NULL, NULL); + omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, NULL, NULL); /* At the moment we only allow p2m change if gfn has already been made * sharable first */ ASSERT(p2m_is_shared(ot)); @@ -1297,14 +1304,14 @@ int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn) || (pg_type & PGT_type_mask) != PGT_shared_page ) set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY); - P2M_DEBUG("set shared %lx %lx\n", gfn, mfn_x(mfn)); + P2M_DEBUG("set shared %lx %lx\n", gfn_l, mfn_x(mfn)); rc = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_shared, p2m->default_access); gfn_unlock(p2m, gfn, 0); if ( rc ) gdprintk(XENLOG_ERR, "p2m_set_entry failed! mfn=%08lx rc:%d\n", - mfn_x(get_gfn_query_unlocked(p2m->domain, gfn, &ot)), rc); + mfn_x(get_gfn_query_unlocked(p2m->domain, gfn_l, &ot)), rc); return rc; } @@ -1326,18 +1333,19 @@ int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn) * Once the p2mt is changed the page is readonly for the guest. On success the * pager can write the page contents to disk and later evict the page. 
*/ -int p2m_mem_paging_nominate(struct domain *d, unsigned long gfn) +int p2m_mem_paging_nominate(struct domain *d, unsigned long gfn_l) { struct page_info *page; struct p2m_domain *p2m = p2m_get_hostp2m(d); p2m_type_t p2mt; p2m_access_t a; + gfn_t gfn = _gfn(gfn_l); mfn_t mfn; int ret = -EBUSY; gfn_lock(p2m, gfn, 0); - mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL); + mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL); /* Check if mfn is valid */ if ( !mfn_valid(mfn) ) @@ -1387,11 +1395,12 @@ int p2m_mem_paging_nominate(struct domain *d, unsigned long gfn) * could evict it, eviction can not be done either. In this case the gfn is * still backed by a mfn. */ -int p2m_mem_paging_evict(struct domain *d, unsigned long gfn) +int p2m_mem_paging_evict(struct domain *d, unsigned long gfn_l) { struct page_info *page; p2m_type_t p2mt; p2m_access_t a; + gfn_t gfn = _gfn(gfn_l); mfn_t mfn; struct p2m_domain *p2m = p2m_get_hostp2m(d); int ret = -EBUSY; @@ -1399,7 +1408,7 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn) gfn_lock(p2m, gfn, 0); /* Get mfn */ - mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL); + mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL); if ( unlikely(!mfn_valid(mfn)) ) goto out; @@ -1502,15 +1511,16 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn, * already sent to the pager. In this case the caller has to try again until the * gfn is fully paged in again. */ -void p2m_mem_paging_populate(struct domain *d, unsigned long gfn) +void p2m_mem_paging_populate(struct domain *d, unsigned long gfn_l) { struct vcpu *v = current; vm_event_request_t req = { .reason = VM_EVENT_REASON_MEM_PAGING, - .u.mem_paging.gfn = gfn + .u.mem_paging.gfn = gfn_l }; p2m_type_t p2mt; p2m_access_t a; + gfn_t gfn = _gfn(gfn_l); mfn_t mfn; struct p2m_domain *p2m = p2m_get_hostp2m(d); @@ -1519,7 +1529,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn) if ( rc == -ENOSYS ) { gdprintk(XENLOG_ERR, "Domain %hu paging gfn %lx yet no ring " - "in place\n", d->domain_id, gfn); + "in place\n", d->domain_id, gfn_l); /* Prevent the vcpu from faulting repeatedly on the same gfn */ if ( v->domain == d ) vcpu_pause_nosync(v); @@ -1531,7 +1541,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn) /* Fix p2m mapping */ gfn_lock(p2m, gfn, 0); - mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL); + mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL); /* Allow only nominated or evicted pages to enter page-in path */ if ( p2mt == p2m_ram_paging_out || p2mt == p2m_ram_paged ) { @@ -1575,11 +1585,12 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn) * mfn if populate was called for gfn which was nominated but not evicted. In * this case only the p2mt needs to be forwarded. 
*/ -int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer) +int p2m_mem_paging_prep(struct domain *d, unsigned long gfn_l, uint64_t buffer) { struct page_info *page; p2m_type_t p2mt; p2m_access_t a; + gfn_t gfn = _gfn(gfn_l); mfn_t mfn; struct p2m_domain *p2m = p2m_get_hostp2m(d); int ret, page_extant = 1; @@ -1593,7 +1604,7 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer) gfn_lock(p2m, gfn, 0); - mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL); + mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL); ret = -ENOENT; /* Allow missing pages */ @@ -1629,7 +1640,7 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer) if ( rc ) { gdprintk(XENLOG_ERR, "Failed to load paging-in gfn %lx domain %u " - "bytes left %d\n", gfn, d->domain_id, rc); + "bytes left %d\n", gfn_l, d->domain_id, rc); ret = -EFAULT; put_page(page); /* Don't leak pages */ goto out; @@ -1642,7 +1653,7 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer) ret = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, paging_mode_log_dirty(d) ? p2m_ram_logdirty : p2m_ram_rw, a); - set_gpfn_from_mfn(mfn_x(mfn), gfn); + set_gpfn_from_mfn(mfn_x(mfn), gfn_l); if ( !page_extant ) atomic_dec(&d->paged_pages); @@ -1678,10 +1689,10 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp) /* Fix p2m entry if the page was not dropped */ if ( !(rsp->u.mem_paging.flags & MEM_PAGING_DROP_PAGE) ) { - unsigned long gfn = rsp->u.mem_access.gfn; + gfn_t gfn = _gfn(rsp->u.mem_access.gfn); gfn_lock(p2m, gfn, 0); - mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL); + mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL); /* * Allow only pages which were prepared properly, or pages which * were nominated but not evicted. @@ -1691,7 +1702,7 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp) p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, paging_mode_log_dirty(d) ? p2m_ram_logdirty : p2m_ram_rw, a); - set_gpfn_from_mfn(mfn_x(mfn), gfn); + set_gpfn_from_mfn(mfn_x(mfn), gfn_x(gfn)); } gfn_unlock(p2m, gfn, 0); } @@ -2109,8 +2120,9 @@ bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa, */ mask = ~((1UL << page_order) - 1); mfn = _mfn(mfn_x(mfn) & mask); + gfn = _gfn(gfn_x(gfn) & mask); - rv = p2m_set_entry(*ap2m, gfn_x(gfn) & mask, mfn, page_order, p2mt, p2ma); + rv = p2m_set_entry(*ap2m, gfn, mfn, page_order, p2mt, p2ma); p2m_unlock(*ap2m); if ( rv ) @@ -2396,7 +2408,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn, } } else if ( !mfn_eq(m, INVALID_MFN) ) - p2m_set_entry(p2m, gfn_x(gfn), mfn, page_order, p2mt, p2ma); + p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma); __put_gfn(p2m, gfn_x(gfn)); } diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h index 1c9a51e9ad..07ca02a173 100644 --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -682,7 +682,7 @@ void p2m_free_ptp(struct p2m_domain *p2m, struct page_info *pg); /* Directly set a p2m entry: only for use by p2m code. Does not need * a call to put_gfn afterwards/ */ -int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, +int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn, unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma); /* Set up function pointers for PT implementation: only for use by p2m code */
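[Editor's note] The conversion above follows Xen's typesafe frame-number idiom: gfn_t wraps an unsigned long so guest and machine frame numbers cannot be mixed up silently, _gfn()/gfn_x() convert at the boundaries of code that still deals in raw numbers, and gfn_add() replaces raw arithmetic such as "gfn + i". The sketch below is a minimal, standalone illustration of that idiom and is not the hypervisor's actual definitions; the p2m stand-ins and demo_set_entry() are invented for the example. In Xen itself the wrappers are generated by the TYPE_SAFE() machinery and compile down to plain integers in release builds, so the extra type checking has no run-time cost.

#include <stdio.h>

/* Minimal stand-ins for Xen's typesafe frame-number wrappers (illustrative only). */
typedef struct { unsigned long gfn; } gfn_t;   /* guest frame number */
typedef struct { unsigned long mfn; } mfn_t;   /* machine frame number */

static inline gfn_t _gfn(unsigned long g) { return (gfn_t){ g }; }
static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }
static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

/* Typed arithmetic helpers, mirroring gfn_add()/mfn_add(). */
static inline gfn_t gfn_add(gfn_t g, unsigned long i) { return _gfn(gfn_x(g) + i); }
static inline mfn_t mfn_add(mfn_t m, unsigned long i) { return _mfn(mfn_x(m) + i); }

/* Hypothetical stand-in for p2m_set_entry(): it now takes gfn_t, not unsigned long. */
static int demo_set_entry(gfn_t gfn, mfn_t mfn, unsigned int order)
{
    printf("map gfn %#lx -> mfn %#lx (order %u)\n", gfn_x(gfn), mfn_x(mfn), order);
    return 0;
}

int main(void)
{
    /*
     * A caller that still works on raw numbers wraps them once at the boundary,
     * as the patch does with "gfn_t gfn = _gfn(gfn_l);".
     */
    unsigned long gfn_l = 0x1000, mfn_l = 0x2000;
    gfn_t gfn = _gfn(gfn_l);
    mfn_t mfn = _mfn(mfn_l);

    for ( unsigned long i = 0; i < 4; i++ )
        demo_set_entry(gfn_add(gfn, i), mfn_add(mfn, i), 0);

    /* demo_set_entry(gfn_l, mfn, 0); would fail to compile, which is the point. */
    return 0;
}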