From patchwork Tue Apr 3 15:32:47 2018
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 3 Apr 2018 16:32:47 +0100
Message-Id: <20180403153251.19595-13-julien.grall@arm.com>
In-Reply-To: <20180403153251.19595-1-julien.grall@arm.com>
References: <20180403153251.19595-1-julien.grall@arm.com>
Subject: [Xen-devel] [for-4.11][PATCH v7 12/16] xen/mm: Switch common/memory.c to use typesafe MFN
A new helper __copy_mfn_to_guest_offset() is introduced to easily copy an MFN
to guest memory.

No functional change intended.

Signed-off-by: Julien Grall
Reviewed-by: Jan Beulich

---
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: Wei Liu

    Changes in v7:
        - Rename the helper to __copy_mfn_to_guest_offset
        - Add Jan's reviewed-by

    Changes in v6:
        - Use static inline for the new helper
        - Rename the helper to __copy_mfn_to_guest

    Changes in v5:
        - Restrict the scope of some mfn variables.

    Changes in v4:
        - Patch added
---
 xen/common/memory.c | 79 +++++++++++++++++++++++++++++++++--------------------
 1 file changed, 50 insertions(+), 29 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 3ed71f8f74..8c8e979bcf 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -33,6 +33,12 @@
 #include
 #endif
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef page_to_mfn
+#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
+
 struct memop_args {
     /* INPUT */
     struct domain *domain;     /* Domain to be affected. */
@@ -95,11 +101,20 @@ static unsigned int max_order(const struct domain *d)
     return min(order, MAX_ORDER + 0U);
 }
 
+/* Helper to copy a typesafe MFN to guest */
+static inline
+unsigned long __copy_mfn_to_guest_offset(XEN_GUEST_HANDLE(xen_pfn_t) hnd,
+                                         size_t off, mfn_t mfn)
+{
+    xen_pfn_t mfn_ = mfn_x(mfn);
+
+    return __copy_to_guest_offset(hnd, off, &mfn_, 1);
+}
+
 static void increase_reservation(struct memop_args *a)
 {
     struct page_info *page;
     unsigned long i;
-    xen_pfn_t mfn;
     struct domain *d = a->domain;
 
     if ( !guest_handle_is_null(a->extent_list) &&
@@ -132,8 +147,9 @@ static void increase_reservation(struct memop_args *a)
         if ( !paging_mode_translate(d) &&
              !guest_handle_is_null(a->extent_list) )
         {
-            mfn = page_to_mfn(page);
-            if ( unlikely(__copy_to_guest_offset(a->extent_list, i, &mfn, 1)) )
+            mfn_t mfn = page_to_mfn(page);
+
+            if ( unlikely(__copy_mfn_to_guest_offset(a->extent_list, i, mfn)) )
                 goto out;
         }
     }
@@ -146,7 +162,7 @@ static void populate_physmap(struct memop_args *a)
 {
     struct page_info *page;
     unsigned int i, j;
-    xen_pfn_t gpfn, mfn;
+    xen_pfn_t gpfn;
     struct domain *d = a->domain, *curr_d = current->domain;
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
@@ -182,6 +198,8 @@ static void populate_physmap(struct memop_args *a)
 
     for ( i = a->nr_done; i < a->nr_extents; i++ )
     {
+        mfn_t mfn;
+
         if ( i != a->nr_done && hypercall_preempt_check() )
         {
             a->preempted = 1;
@@ -205,14 +223,15 @@ static void populate_physmap(struct memop_args *a)
         {
             if ( is_domain_direct_mapped(d) )
             {
-                mfn = gpfn;
+                mfn = _mfn(gpfn);
 
-                for ( j = 0; j < (1U << a->extent_order); j++, mfn++ )
+                for ( j = 0; j < (1U << a->extent_order); j++,
+                      mfn = mfn_add(mfn, 1) )
                 {
-                    if ( !mfn_valid(_mfn(mfn)) )
+                    if ( !mfn_valid(mfn) )
                     {
-                        gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_xen_pfn"\n",
-                                 mfn);
+                        gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_mfn"\n",
+                                 mfn_x(mfn));
                         goto out;
                     }
 
@@ -220,14 +239,14 @@ static void populate_physmap(struct memop_args *a)
                     if ( !get_page(page, d) )
                     {
                         gdprintk(XENLOG_INFO,
-                                 "mfn %#"PRI_xen_pfn" doesn't belong to d%d\n",
-                                  mfn, d->domain_id);
+                                 "mfn %#"PRI_mfn" doesn't belong to d%d\n",
+                                  mfn_x(mfn), d->domain_id);
                         goto out;
                     }
                     put_page(page);
                 }
 
-                mfn = gpfn;
+                mfn = _mfn(gpfn);
             }
             else
             {
@@ -253,15 +272,16 @@ static void populate_physmap(struct memop_args *a)
                 mfn = page_to_mfn(page);
             }
 
-            guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn), a->extent_order);
+            guest_physmap_add_page(d, _gfn(gpfn), mfn, a->extent_order);
 
             if ( !paging_mode_translate(d) )
             {
                 for ( j = 0; j < (1U << a->extent_order); j++ )
-                    set_gpfn_from_mfn(mfn + j, gpfn + j);
+                    set_gpfn_from_mfn(mfn_x(mfn_add(mfn, j)), gpfn + j);
 
                 /* Inform the domain of the new page's machine address. */
-                if ( unlikely(__copy_to_guest_offset(a->extent_list, i, &mfn, 1)) )
+                if ( unlikely(__copy_mfn_to_guest_offset(a->extent_list, i,
+                                                         mfn)) )
                     goto out;
             }
         }
@@ -304,7 +324,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
         if ( p2mt == p2m_ram_paging_out )
         {
             ASSERT(mfn_valid(mfn));
-            page = mfn_to_page(mfn_x(mfn));
+            page = mfn_to_page(mfn);
             if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
                 put_page(page);
         }
@@ -349,7 +369,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     }
 #endif /* CONFIG_X86 */
 
-    page = mfn_to_page(mfn_x(mfn));
+    page = mfn_to_page(mfn);
     if ( unlikely(!get_page(page, d)) )
     {
         put_gfn(d, gmfn);
@@ -485,7 +505,8 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
     PAGE_LIST_HEAD(in_chunk_list);
     PAGE_LIST_HEAD(out_chunk_list);
     unsigned long in_chunk_order, out_chunk_order;
-    xen_pfn_t gpfn, gmfn, mfn;
+    xen_pfn_t gpfn, gmfn;
+    mfn_t mfn;
     unsigned long i, j, k;
     unsigned int memflags = 0;
     long rc = 0;
@@ -607,7 +628,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
                 p2m_type_t p2mt;
 
                 /* Shared pages cannot be exchanged */
-                mfn = mfn_x(get_gfn_unshare(d, gmfn + k, &p2mt));
+                mfn = get_gfn_unshare(d, gmfn + k, &p2mt);
                 if ( p2m_is_shared(p2mt) )
                 {
                     put_gfn(d, gmfn + k);
@@ -615,9 +636,9 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
                     goto fail;
                 }
 #else /* !CONFIG_X86 */
-                mfn = mfn_x(gfn_to_mfn(d, _gfn(gmfn + k)));
+                mfn = gfn_to_mfn(d, _gfn(gmfn + k));
 #endif
-                if ( unlikely(!mfn_valid(_mfn(mfn))) )
+                if ( unlikely(!mfn_valid(mfn)) )
                 {
                     put_gfn(d, gmfn + k);
                     rc = -EINVAL;
@@ -664,10 +685,10 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
                 if ( !test_and_clear_bit(_PGC_allocated, &page->count_info) )
                     BUG();
                 mfn = page_to_mfn(page);
-                gfn = mfn_to_gmfn(d, mfn);
+                gfn = mfn_to_gmfn(d, mfn_x(mfn));
                 /* Pages were unshared above */
                 BUG_ON(SHARED_M2P(gfn));
-                if ( guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), 0) )
+                if ( guest_physmap_remove_page(d, _gfn(gfn), mfn, 0) )
                     domain_crash(d);
                 put_page(page);
             }
@@ -712,16 +733,16 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
                 }
 
                 mfn = page_to_mfn(page);
-                guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn),
+                guest_physmap_add_page(d, _gfn(gpfn), mfn,
                                        exch.out.extent_order);
 
                 if ( !paging_mode_translate(d) )
                 {
                     for ( k = 0; k < (1UL << exch.out.extent_order); k++ )
-                        set_gpfn_from_mfn(mfn + k, gpfn + k);
-                    if ( __copy_to_guest_offset(exch.out.extent_start,
-                                                (i << out_chunk_order) + j,
-                                                &mfn, 1) )
+                        set_gpfn_from_mfn(mfn_x(mfn_add(mfn, k)), gpfn + k);
+                    if ( __copy_mfn_to_guest_offset(exch.out.extent_start,
+                                                    (i << out_chunk_order) + j,
+                                                    mfn) )
                         rc = -EFAULT;
                 }
             }
@@ -1216,7 +1237,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( page )
         {
             rc = guest_physmap_remove_page(d, _gfn(xrfp.gpfn),
-                                           _mfn(page_to_mfn(page)), 0);
+                                           page_to_mfn(page), 0);
             put_page(page);
         }
         else