From patchwork Wed Feb 21 14:02:55 2018
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 129071
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Wed, 21 Feb 2018 14:02:55 +0000
Message-Id: <20180221140259.29360-13-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180221140259.29360-1-julien.grall@arm.com>
References: <20180221140259.29360-1-julien.grall@arm.com>
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien
Grall, Jan Beulich
Subject: [Xen-devel] [PATCH v4 12/16] xen/mm: Switch common/memory.c to use typesafe MFN
List-Id: Xen developer discussion

A new helper, copy_mfn_to_guest, is introduced to easily copy an MFN to
guest memory.

No functional change intended.

Signed-off-by: Julien Grall
---
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: Wei Liu

    Changes in v4:
        - Patch added
---
 xen/common/memory.c | 72 ++++++++++++++++++++++++++++++++---------------------
 1 file changed, 44 insertions(+), 28 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 59d23a2a98..93d856df02 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -33,6 +33,12 @@
 #include
 #endif
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef page_to_mfn
+#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
+
 struct memop_args {
     /* INPUT */
     struct domain *domain;        /* Domain to be affected. */
@@ -95,11 +101,18 @@ static unsigned int max_order(const struct domain *d)
     return min(order, MAX_ORDER + 0U);
 }
 
+/* Helper to copy a typesafe MFN to guest */
+#define copy_mfn_to_guest(hnd, off, mfn)            \
+    ({                                              \
+        xen_pfn_t mfn_ = mfn_x(mfn);                \
+        __copy_to_guest_offset(hnd, off, &mfn_, 1); \
+    })
+
 static void increase_reservation(struct memop_args *a)
 {
     struct page_info *page;
     unsigned long i;
-    xen_pfn_t mfn;
+    mfn_t mfn;
     struct domain *d = a->domain;
 
     if ( !guest_handle_is_null(a->extent_list) &&
@@ -133,7 +146,7 @@ static void increase_reservation(struct memop_args *a)
              !guest_handle_is_null(a->extent_list) )
         {
             mfn = page_to_mfn(page);
-            if ( unlikely(__copy_to_guest_offset(a->extent_list, i, &mfn, 1)) )
+            if ( unlikely(copy_mfn_to_guest(a->extent_list, i, mfn)) )
                 goto out;
         }
     }
@@ -146,7 +159,8 @@ static void populate_physmap(struct memop_args *a)
 {
     struct page_info *page;
     unsigned int i, j;
-    xen_pfn_t gpfn, mfn;
+    xen_pfn_t gpfn;
+    mfn_t mfn;
     struct domain *d = a->domain, *curr_d = current->domain;
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
@@ -205,14 +219,15 @@ static void populate_physmap(struct memop_args *a)
         {
             if ( is_domain_direct_mapped(d) )
             {
-                mfn = gpfn;
+                mfn = _mfn(gpfn);
 
-                for ( j = 0; j < (1U << a->extent_order); j++, mfn++ )
+                for ( j = 0; j < (1U << a->extent_order); j++,
+                      mfn = mfn_add(mfn, 1) )
                 {
-                    if ( !mfn_valid(_mfn(mfn)) )
+                    if ( !mfn_valid(mfn) )
                     {
-                        gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_xen_pfn"\n",
-                                 mfn);
+                        gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_mfn"\n",
+                                 mfn_x(mfn));
                         goto out;
                     }
 
@@ -220,14 +235,14 @@ static void populate_physmap(struct memop_args *a)
                     if ( !get_page(page, d) )
                     {
                         gdprintk(XENLOG_INFO,
-                                 "mfn %#"PRI_xen_pfn" doesn't belong to d%d\n",
-                                 mfn, d->domain_id);
+                                 "mfn %#"PRI_mfn" doesn't belong to d%d\n",
+                                 mfn_x(mfn), d->domain_id);
                         goto out;
                     }
                     put_page(page);
                 }
 
-                mfn = gpfn;
+                mfn = _mfn(gpfn);
             }
             else
             {
@@ -253,15 +268,15 @@ static void populate_physmap(struct memop_args *a)
                 mfn = page_to_mfn(page);
             }
 
-            guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn), a->extent_order);
+            guest_physmap_add_page(d, _gfn(gpfn), mfn, a->extent_order);
 
             if ( !paging_mode_translate(d) )
             {
                 for ( j = 0; j < (1U << a->extent_order); j++ )
-                    set_gpfn_from_mfn(mfn + j, gpfn + j);
+                    set_gpfn_from_mfn(mfn_x(mfn_add(mfn, j)), gpfn + j);
 
                 /* Inform the domain of the new page's machine address. */
-                if ( unlikely(__copy_to_guest_offset(a->extent_list, i, &mfn, 1)) )
+                if ( unlikely(copy_mfn_to_guest(a->extent_list, i, mfn)) )
                     goto out;
             }
         }
@@ -304,7 +319,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     if ( p2mt == p2m_ram_paging_out )
     {
         ASSERT(mfn_valid(mfn));
-        page = mfn_to_page(mfn_x(mfn));
+        page = mfn_to_page(mfn);
         if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
             put_page(page);
     }
@@ -349,7 +364,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     }
 #endif /* CONFIG_X86 */
 
-    page = mfn_to_page(mfn_x(mfn));
+    page = mfn_to_page(mfn);
     if ( unlikely(!get_page(page, d)) )
     {
         put_gfn(d, gmfn);
@@ -490,7 +505,8 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
     PAGE_LIST_HEAD(in_chunk_list);
     PAGE_LIST_HEAD(out_chunk_list);
     unsigned long in_chunk_order, out_chunk_order;
-    xen_pfn_t gpfn, gmfn, mfn;
+    xen_pfn_t gpfn, gmfn;
+    mfn_t mfn;
     unsigned long i, j, k;
     unsigned int memflags = 0;
     long rc = 0;
@@ -612,7 +628,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             p2m_type_t p2mt;
 
             /* Shared pages cannot be exchanged */
-            mfn = mfn_x(get_gfn_unshare(d, gmfn + k, &p2mt));
+            mfn = get_gfn_unshare(d, gmfn + k, &p2mt);
             if ( p2m_is_shared(p2mt) )
             {
                 put_gfn(d, gmfn + k);
                 goto fail;
             }
 #else /* !CONFIG_X86 */
-            mfn = mfn_x(gfn_to_mfn(d, _gfn(gmfn + k)));
+            mfn = gfn_to_mfn(d, _gfn(gmfn + k));
 #endif
-            if ( unlikely(!mfn_valid(_mfn(mfn))) )
+            if ( unlikely(!mfn_valid(mfn)) )
             {
                 put_gfn(d, gmfn + k);
                 rc = -EINVAL;
@@ -669,10 +685,10 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             if ( !test_and_clear_bit(_PGC_allocated, &page->count_info) )
                 BUG();
             mfn = page_to_mfn(page);
-            gfn = mfn_to_gmfn(d, mfn);
+            gfn = mfn_to_gmfn(d, mfn_x(mfn));
             /* Pages were unshared above */
             BUG_ON(SHARED_M2P(gfn));
-            if ( guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), 0) )
+            if ( guest_physmap_remove_page(d, _gfn(gfn), mfn, 0) )
                 domain_crash(d);
             put_page(page);
         }
@@ -717,16 +733,16 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             }
 
             mfn = page_to_mfn(page);
-            guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn),
+            guest_physmap_add_page(d, _gfn(gpfn), mfn,
                                    exch.out.extent_order);
 
             if ( !paging_mode_translate(d) )
             {
                 for ( k = 0; k < (1UL << exch.out.extent_order); k++ )
-                    set_gpfn_from_mfn(mfn + k, gpfn + k);
-                if ( __copy_to_guest_offset(exch.out.extent_start,
-                                            (i << out_chunk_order) + j,
-                                            &mfn, 1) )
+                    set_gpfn_from_mfn(mfn_x(mfn_add(mfn, k)), gpfn + k);
+                if ( copy_mfn_to_guest(exch.out.extent_start,
+                                       (i << out_chunk_order) + j,
+                                       mfn) )
                     rc = -EFAULT;
             }
         }
@@ -1221,7 +1237,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( page )
         {
             rc = guest_physmap_remove_page(d, _gfn(xrfp.gpfn),
-                                           _mfn(page_to_mfn(page)), 0);
+                                           page_to_mfn(page), 0);
             put_page(page);
         }
         else