From patchwork Tue Apr 3 15:32:36 2018
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 3 Apr 2018 16:32:36 +0100
Message-Id: <20180403153251.19595-2-julien.grall@arm.com>
Subject: [Xen-devel] [for-4.11][PATCH v7 01/16] x86/mm: skip incrementing mfn if it is not a valid mfn
Cc: Andrew Cooper, Julien Grall, Wei Liu, Jan Beulich

From: Wei Liu

In a follow-up patch, some callers will be switched to pass INVALID_MFN
instead of zero for non-present mappings. So skip incrementing mfn if it is
not a valid one.
Signed-off-by: Wei Liu
Signed-off-by: Julien Grall
[Rework the commit message]
Reviewed-by: Jan Beulich
---
Cc: Jan Beulich
Cc: Andrew Cooper

Changes in v7:
    - Add Jan's reviewed-by
    - Fix typo in the commit message

Changes in v6:
    - Update commit message

Changes in v5:
    - Patch added
---
 xen/arch/x86/mm.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 17558e0c8c..3aed94bda5 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4725,7 +4725,8 @@ int map_pages_to_xen(
         }

         virt    += 1UL << L3_PAGETABLE_SHIFT;
-        mfn     += 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT);
+        if ( !mfn_eq(_mfn(mfn), INVALID_MFN) )
+            mfn += 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT);
         nr_mfns -= 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT);
         continue;
     }
@@ -4750,7 +4751,8 @@
                 if ( i > nr_mfns )
                     i = nr_mfns;
                 virt    += i << PAGE_SHIFT;
-                mfn     += i;
+                if ( !mfn_eq(_mfn(mfn), INVALID_MFN) )
+                    mfn += i;
                 nr_mfns -= i;
                 continue;
             }
@@ -4818,7 +4820,8 @@
             }

             virt    += 1UL << L2_PAGETABLE_SHIFT;
-            mfn     += 1UL << PAGETABLE_ORDER;
+            if ( !mfn_eq(_mfn(mfn), INVALID_MFN) )
+                mfn += 1UL << PAGETABLE_ORDER;
             nr_mfns -= 1UL << PAGETABLE_ORDER;
         }
         else
@@ -4847,7 +4850,8 @@
                 if ( i > nr_mfns )
                     i = nr_mfns;
                 virt    += i << L1_PAGETABLE_SHIFT;
-                mfn     += i;
+                if ( !mfn_eq(_mfn(mfn), INVALID_MFN) )
+                    mfn += i;
                 nr_mfns -= i;
                 goto check_l3;
             }
@@ -4892,7 +4896,8 @@
         }

         virt    += 1UL << L1_PAGETABLE_SHIFT;
-        mfn     += 1UL;
+        if ( !mfn_eq(_mfn(mfn), INVALID_MFN) )
+            mfn += 1UL;
         nr_mfns -= 1UL;

         if ( (flags == PAGE_HYPERVISOR) &&
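
The guard added above matters because INVALID_MFN is an all-ones sentinel:
incrementing it would silently turn the "no backing frame" marker into a
small, apparently valid frame number on wrap-around. The following is a
minimal standalone sketch of the pattern, using toy stand-ins for mfn_t,
_mfn() and INVALID_MFN rather than the real Xen headers:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy stand-ins for Xen's typesafe frame number; assumptions, not Xen code. */
    typedef struct { uint64_t m; } mfn_t;
    #define _mfn(x)      ((mfn_t){ (x) })
    #define mfn_x(m)     ((m).m)
    #define INVALID_MFN  _mfn(~0ULL)

    static int mfn_eq(mfn_t a, mfn_t b) { return mfn_x(a) == mfn_x(b); }

    /* Advance a (possibly invalid) frame cursor by 'nr' frames. */
    static uint64_t advance_mfn(uint64_t mfn, uint64_t nr)
    {
        /* Only real frames move; the INVALID_MFN sentinel must stay put. */
        if ( !mfn_eq(_mfn(mfn), INVALID_MFN) )
            mfn += nr;
        return mfn;
    }

    int main(void)
    {
        printf("%llx\n", (unsigned long long)advance_mfn(0x1000, 512)); /* 0x1200 */
        printf("%llx\n", (unsigned long long)advance_mfn(~0ULL, 512));  /* stays all-ones */
        return 0;
    }
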
From patchwork Tue Apr 3 15:32:37 2018
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 3 Apr 2018 16:32:37 +0100
Message-Id: <20180403153251.19595-3-julien.grall@arm.com>
Subject: [Xen-devel] [for-4.11][PATCH v7 02/16] xen/arm: setup: use maddr_to_mfn rather than _mfn(paddr_to_pfn(...))
Cc: Julien Grall, Stefano Stabellini, George Dunlap

Signed-off-by: Julien Grall
Reviewed-by: George Dunlap
Acked-by: Stefano Stabellini
---
Cc: Stefano Stabellini

Changes in v6:
    - Remove the justification from the commit message
    - Add George's reviewed-by

Changes in v4:
    - Patch added
---
 xen/arch/arm/setup.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index b17797dc97..1ce6a26e84 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -268,8 +268,8 @@ void __init discard_initial_modules(void)
         if ( mi->module[i].kind == BOOTMOD_XEN )
             continue;

-        if ( !mfn_valid(_mfn(paddr_to_pfn(s))) ||
-             !mfn_valid(_mfn(paddr_to_pfn(e))))
+        if ( !mfn_valid(maddr_to_mfn(s)) ||
+             !mfn_valid(maddr_to_mfn(e)) )
            continue;

         dt_unreserved_regions(s, e, init_domheap_pages, 0);
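
The helper swapped in here is just a typed spelling of "machine address to
machine frame number". A rough standalone sketch of its shape, assuming
4 KiB pages and toy types rather than the real asm-arm definitions:

    #include <stdint.h>
    #include <assert.h>

    /* Toy equivalents; the real definitions live in Xen's asm headers. */
    typedef uint64_t paddr_t;
    typedef struct { uint64_t m; } mfn_t;
    #define _mfn(x)   ((mfn_t){ (x) })
    #define mfn_x(m)  ((m).m)

    #define PAGE_SHIFT 12

    static uint64_t paddr_to_pfn(paddr_t pa) { return pa >> PAGE_SHIFT; }

    /* maddr_to_mfn(): same arithmetic, but it returns the typesafe mfn_t
     * directly, so call sites no longer need the _mfn(paddr_to_pfn(...)) dance. */
    static mfn_t maddr_to_mfn(paddr_t pa) { return _mfn(paddr_to_pfn(pa)); }

    int main(void)
    {
        paddr_t s = 0x40201000ULL;
        assert(mfn_x(maddr_to_mfn(s)) == 0x40201ULL);
        return 0;
    }
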
From patchwork Tue Apr 3 15:32:38 2018
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 3 Apr 2018 16:32:38 +0100
Message-Id: <20180403153251.19595-4-julien.grall@arm.com>
Subject: [Xen-devel] [for-4.11][PATCH v7 03/16] xen/arm: mm: Use gaddr_to_gfn rather than _gfn(paddr_to_pfn(...))
Cc: Julien Grall, Stefano Stabellini, George Dunlap

Signed-off-by: Julien Grall
Reviewed-by: George Dunlap
Acked-by: Stefano Stabellini
---
Cc: Stefano Stabellini

Changes in v7:
    - Add George's reviewed-by

Changes in v6:
    - Remove the justification from the commit message
    - Add George's reviewed-by

Changes in v4:
    - Patch added
---
 xen/arch/arm/mm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index baa3b0de1d..1126e246c0 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1431,7 +1431,7 @@ int create_grant_host_mapping(unsigned long addr, unsigned long frame,
     if ( flags & GNTMAP_readonly )
         t = p2m_grant_map_ro;

-    rc = guest_physmap_add_entry(current->domain, _gfn(addr >> PAGE_SHIFT),
+    rc = guest_physmap_add_entry(current->domain, gaddr_to_gfn(addr),
                                  _mfn(frame), 0, t);

     if ( rc )
@@ -1443,7 +1443,7 @@
 int replace_grant_host_mapping(unsigned long addr, unsigned long mfn,
                                unsigned long new_addr, unsigned int flags)
 {
-    gfn_t gfn = _gfn(addr >> PAGE_SHIFT);
+    gfn_t gfn = gaddr_to_gfn(addr);
     struct domain *d = current->domain;
     int rc;
From patchwork Tue Apr 3 15:32:39 2018
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 3 Apr 2018 16:32:39 +0100
Message-Id: <20180403153251.19595-5-julien.grall@arm.com>
Subject: [Xen-devel] [for-4.11][PATCH v7 04/16] xen/arm: mm: Remove unused M2P code
Cc: Julien Grall, Stefano Stabellini, George Dunlap

Arm does not have an M2P and is very unlikely to get one in the future, so
don't keep defines that are not necessary in the common code. At the same
time, move the remaining M2P defines just above set_gpfn_from_mfn to keep
all the dummy M2P helpers together.

Signed-off-by: Julien Grall
Reviewed-by: George Dunlap
Acked-by: Stefano Stabellini
---
Cc: Stefano Stabellini

Changes in v6:
    - Add a comment to explain why we implement dummy version of M2P for Arm.
    - Add George's reviewed-by
    - Fix typo in the commit message

Changes in v4:
    - Patch added.
---
 xen/include/asm-arm/mm.h | 29 ++++++++---------------------
 1 file changed, 8 insertions(+), 21 deletions(-)

diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index a0e922f360..cabb1daf30 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -313,33 +313,20 @@ static inline void *page_to_virt(const struct page_info *pg)
 struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
                                     unsigned long flags);

-/*
- * The MPT (machine->physical mapping table) is an array of word-sized
- * values, indexed on machine frame number. It is expected that guest OSes
- * will use it to store a "physical" frame number to give the appearance of
- * contiguous (or near contiguous) physical memory.
- */
-#undef machine_to_phys_mapping
-#define machine_to_phys_mapping ((unsigned long *)RDWR_MPT_VIRT_START)
-#define INVALID_M2P_ENTRY        (~0UL)
-#define VALID_M2P(_e)            (!((_e) & (1UL<<(BITS_PER_LONG-1))))
-#define SHARED_M2P_ENTRY         (~0UL - 1UL)
-#define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
-
-#define _set_gpfn_from_mfn(mfn, pfn) ({                        \
-    struct domain *d = page_get_owner(__mfn_to_page(mfn));    \
-    if(d && (d == dom_cow))                                    \
-        machine_to_phys_mapping[(mfn)] = SHARED_M2P_ENTRY;     \
-    else                                                       \
-        machine_to_phys_mapping[(mfn)] = (pfn);                \
-    })
-
 static inline void put_gfn(struct domain *d, unsigned long gfn) {}
 static inline int relinquish_shared_pages(struct domain *d)
 {
     return 0;
 }

+/*
+ * Arm does not have an M2P, but common code expects a handful of
+ * M2P-related defines and functions. Provide dummy versions of these.
+ */
+#define INVALID_M2P_ENTRY        (~0UL)
+#define SHARED_M2P_ENTRY         (~0UL - 1UL)
+#define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
+
 /* Xen always owns P2M on ARM */
 #define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn); } while (0)
 #define mfn_to_gmfn(_d, mfn)  (mfn)
From patchwork Tue Apr 3 15:32:40 2018
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 3 Apr 2018 16:32:40 +0100
Message-Id: <20180403153251.19595-6-julien.grall@arm.com>
Subject: [Xen-devel] [for-4.11][PATCH v7 05/16] xen/arm: mm: Remove unused relinquish_shared_pages
Cc: Julien Grall, Stefano Stabellini, George Dunlap

relinquish_shared_pages is never called on Arm.
Signed-off-by: Julien Grall
Reviewed-by: George Dunlap
Acked-by: Stefano Stabellini
---
Cc: Stefano Stabellini

Changes in v6:
    - Add George's reviewed-by

Changes in v4:
    - Patch added
---
 xen/include/asm-arm/mm.h | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index cabb1daf30..09bec67f63 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -314,10 +314,6 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
                                     unsigned long flags);

 static inline void put_gfn(struct domain *d, unsigned long gfn) {}
-static inline int relinquish_shared_pages(struct domain *d)
-{
-    return 0;
-}

 /*
  * Arm does not have an M2P, but common code expects a handful of
From patchwork Tue Apr 3 15:32:41 2018
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 3 Apr 2018 16:32:41 +0100
Message-Id: <20180403153251.19595-7-julien.grall@arm.com>
Subject: [Xen-devel] [for-4.11][PATCH v7 06/16] xen/x86: Remove unused override of page_to_mfn/mfn_to_page
Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

A few files override page_to_mfn/mfn_to_page but actually never use those
macros. So drop them.
Signed-off-by: Julien Grall
Acked-by: George Dunlap
Acked-by: Jan Beulich
---
Cc: George Dunlap
Cc: Jan Beulich
Cc: Andrew Cooper

Changes in v5:
    - Add George and Jan's acked-by

Changes in v4:
    - Patch added
---
 xen/arch/x86/mm/hap/nested_hap.c | 3 ---
 xen/arch/x86/mm/p2m-pt.c         | 6 ------
 xen/arch/x86/pv/iret.c           | 6 ------
 xen/arch/x86/pv/mm.c             | 6 ------
 xen/arch/x86/pv/traps.c          | 6 ------
 5 files changed, 27 deletions(-)

diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 4603ceced4..d2a07a5c79 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -70,9 +70,6 @@
 /********************************************/
 /*        NESTED VIRT P2M FUNCTIONS         */
 /********************************************/
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))

 void
 nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 753124bdcd..b8c5d2ed26 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -39,12 +39,6 @@

 #include "mm-locks.h"

-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 /*
  * We may store INVALID_MFN in PTEs. We need to clip this to avoid trampling
  * over higher-order bits (NX, p2m type, IOMMU flags). We seem to not need
diff --git a/xen/arch/x86/pv/iret.c b/xen/arch/x86/pv/iret.c
index 56aeac360a..ca433a69c4 100644
--- a/xen/arch/x86/pv/iret.c
+++ b/xen/arch/x86/pv/iret.c
@@ -24,12 +24,6 @@
 #include
 #include

-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 unsigned long do_iret(void)
 {
     struct cpu_user_regs *regs = guest_cpu_user_regs();
diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
index 8d7a4fd85f..b46fd94c2c 100644
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -26,12 +26,6 @@

 #include "mm.h"

-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 /*
  * Get a mapping of a PV guest's l1e for this linear address. The return
  * pointer should be unmapped using unmap_domain_page().
diff --git a/xen/arch/x86/pv/traps.c b/xen/arch/x86/pv/traps.c
index 98549bc1ea..f48db92243 100644
--- a/xen/arch/x86/pv/traps.c
+++ b/xen/arch/x86/pv/traps.c
@@ -29,12 +29,6 @@
 #include
 #include

-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 void do_entry_int82(struct cpu_user_regs *regs)
 {
     if ( unlikely(untrusted_msi) )
From patchwork Tue Apr 3 15:32:42 2018
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 3 Apr 2018 16:32:42 +0100
Message-Id: <20180403153251.19595-8-julien.grall@arm.com>
Subject: [Xen-devel] [for-4.11][PATCH v7 07/16] xen/x86: mm: Switch x86/mm.c to use typesafe for virt_to_mfn
Cc: Andrew Cooper, Julien Grall, Jan Beulich

No functional change intended.

While we are here, use PFN_DOWN() rather than open-coding it.

Signed-off-by: Julien Grall
Acked-by: Jan Beulich
---
Cc: Jan Beulich
Cc: Andrew Cooper

Changes in v6:
    - Add George's reviewed-by
    - Add a word about using PFN_DOWN in the commit message

Changes in v5:
    - Add Jan's acked-by
    - Use PFN_DOWN

Changes in v4:
    - Patch added
---
 xen/arch/x86/mm.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 3aed94bda5..605f4377fa 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -135,6 +135,8 @@
 #define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
 #undef page_to_mfn
 #define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
+#undef virt_to_mfn
+#define virt_to_mfn(v) _mfn(__virt_to_mfn(v))

 /* Mapping of the fixmap space needed early. */
 l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
@@ -379,7 +381,7 @@ void __init arch_init_memory(void)
                 l3tab[i] = l3idle[i];
             for ( ; i < L3_PAGETABLE_ENTRIES; ++i )
                 l3tab[i] = l3e_empty();
-            split_l4e = l4e_from_pfn(virt_to_mfn(l3tab),
+            split_l4e = l4e_from_mfn(virt_to_mfn(l3tab),
                                      __PAGE_HYPERVISOR_RW);
         }
         else
@@ -4149,7 +4151,7 @@
     {
     case XENMAPSPACE_shared_info:
         if ( idx == 0 )
-            mfn = _mfn(virt_to_mfn(d->shared_info));
+            mfn = virt_to_mfn(d->shared_info);
         break;
     case XENMAPSPACE_grant_table:
         rc = gnttab_map_frame(d, idx, gpfn, &mfn);
@@ -4775,7 +4777,7 @@
         if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
              (l3e_get_flags(*pl3e) & _PAGE_PSE) )
         {
-            l3e_write_atomic(pl3e, l3e_from_pfn(virt_to_mfn(pl2e),
+            l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
                                                 __PAGE_HYPERVISOR));
             pl2e = NULL;
         }
@@ -4873,7 +4875,7 @@
         if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
              (l2e_get_flags(*pl2e) & _PAGE_PSE) )
         {
-            l2e_write_atomic(pl2e, l2e_from_pfn(virt_to_mfn(pl1e),
+            l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(pl1e),
                                                 __PAGE_HYPERVISOR));
             pl1e = NULL;
         }
@@ -5082,7 +5084,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
              (l3e_get_flags(*pl3e) & _PAGE_PSE) )
         {
-            l3e_write_atomic(pl3e, l3e_from_pfn(virt_to_mfn(pl2e),
+            l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
                                                 __PAGE_HYPERVISOR));
             pl2e = NULL;
         }
@@ -5136,7 +5138,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
              (l2e_get_flags(*pl2e) & _PAGE_PSE) )
         {
-            l2e_write_atomic(pl2e, l2e_from_pfn(virt_to_mfn(pl1e),
+            l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(pl1e),
                                                 __PAGE_HYPERVISOR));
             pl1e = NULL;
         }
@@ -5540,8 +5542,7 @@ static void __memguard_change_range(void *p, unsigned long l, int guard)

     if ( guard )
         flags &= ~_PAGE_PRESENT;
-    map_pages_to_xen(
-        _p, virt_to_maddr(p) >> PAGE_SHIFT, _l >> PAGE_SHIFT, flags);
+    map_pages_to_xen(_p, mfn_x(virt_to_mfn(p)), PFN_DOWN(_l), flags);
 }

 void memguard_guard_range(void *p, unsigned long l)
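
The PFN_DOWN() change in __memguard_change_range() is purely cosmetic:
PFN_DOWN(x) is the conventional spelling of x >> PAGE_SHIFT for an address
or byte count rounded down to frames. A small self-contained illustration,
assuming 4 KiB pages (PFN_UP is shown only for contrast):

    #include <assert.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    /* PFN_DOWN(): frame number of the page containing address/offset x. */
    #define PFN_DOWN(x) ((x) >> PAGE_SHIFT)
    /* PFN_UP(): number of whole pages needed to cover x bytes. */
    #define PFN_UP(x)   (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)

    int main(void)
    {
        assert(PFN_DOWN(0x5000UL) == 5);  /* address 0x5000 sits in frame 5  */
        assert(PFN_DOWN(0x5fffUL) == 5);  /* still frame 5                   */
        assert(PFN_UP(0x5001UL)   == 6);  /* 0x5001 bytes need 6 whole pages */
        return 0;
    }
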
From patchwork Tue Apr 3 15:32:43 2018
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 3 Apr 2018 16:32:43 +0100
Message-Id: <20180403153251.19595-9-julien.grall@arm.com>
Subject: [Xen-devel] [for-4.11][PATCH v7 08/16] xen/mm: Drop the parameter mfn from populate_pt_range
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap, Tim Deegan, Julien Grall, Jan Beulich

The function populate_pt_range is used to populate the page tables in
advance, but it will not do the actual mapping. So passing the MFN as a
parameter is pointless. Note that the only caller passes 0...

At the same time, replace 0 by INVALID_MFN. While this does not matter, as
the entry will be marked as not valid and populated, INVALID_MFN helps the
reader to know the MFN is invalid.
Signed-off-by: Julien Grall
Acked-by: Andrew Cooper
Reviewed-by: Wei Liu
Reviewed-by: George Dunlap
Acked-by: Stefano Stabellini
---
Cc: Stefano Stabellini
Cc: Julien Grall
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Tim Deegan

Changes in v6:
    - Add George's and Wei's reviewed-by
    - Add Andrew's acked-by

Changes in v5:
    - Update the commit message to explain why 0 -> INVALID_MFN.

Changes in v4:
    - Patch added.
---
 xen/arch/arm/mm.c    | 5 ++---
 xen/arch/x86/mm.c    | 5 ++---
 xen/common/vmap.c    | 2 +-
 xen/include/xen/mm.h | 3 +--
 4 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 1126e246c0..436df6936b 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1072,10 +1072,9 @@ int map_pages_to_xen(unsigned long virt,
     return create_xen_entries(INSERT, virt, _mfn(mfn), nr_mfns, flags);
 }

-int populate_pt_range(unsigned long virt, unsigned long mfn,
-                      unsigned long nr_mfns)
+int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
 {
-    return create_xen_entries(RESERVE, virt, _mfn(mfn), nr_mfns, 0);
+    return create_xen_entries(RESERVE, virt, INVALID_MFN, nr_mfns, 0);
 }

 int destroy_xen_mappings(unsigned long v, unsigned long e)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 605f4377fa..6d5f40482e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5007,10 +5007,9 @@ int map_pages_to_xen(
     return 0;
 }

-int populate_pt_range(unsigned long virt, unsigned long mfn,
-                      unsigned long nr_mfns)
+int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
 {
-    return map_pages_to_xen(virt, mfn, nr_mfns, MAP_SMALL_PAGES);
+    return map_pages_to_xen(virt, mfn_x(INVALID_MFN), nr_mfns, MAP_SMALL_PAGES);
 }

 /*
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index 0b23f8fb97..11785ffb0a 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -42,7 +42,7 @@ void __init vm_init_type(enum vmap_region type, void *start, void *end)
     bitmap_fill(vm_bitmap(type), vm_low[type]);

     /* Populate page tables for the bitmap if necessary. */
-    populate_pt_range(va, 0, vm_low[type] - nr);
+    populate_pt_range(va, vm_low[type] - nr);
 }

 static void *vm_alloc(unsigned int nr, unsigned int align,
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 142aa73354..538478fa24 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -175,8 +175,7 @@ int destroy_xen_mappings(unsigned long v, unsigned long e);
  * Create only non-leaf page table entries for the
  * page range in Xen virtual address space.
  */
-int populate_pt_range(unsigned long virt, unsigned long mfn,
-                      unsigned long nr_mfns);
+int populate_pt_range(unsigned long virt, unsigned long nr_mfns);
 /* Claim handling */
 unsigned long domain_adjust_tot_pages(struct domain *d, long pages);
 int domain_set_outstanding_pages(struct domain *d, unsigned long pages);
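
The rationale is visible in the vmap.c hunk: the function only pre-allocates
intermediate page-table levels, so whatever MFN was passed never ended up in
a leaf entry. The toy model below illustrates the distinction between the
two operations; the names pte_insert/pte_reserve and the struct are invented
for this sketch and are not Xen code:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct { uint64_t m; } mfn_t;
    #define _mfn(x)      ((mfn_t){ (x) })
    #define INVALID_MFN  _mfn(~0ULL)

    struct pte { bool present; uint64_t mfn; };

    /* INSERT-style operation: write a real translation. */
    static void pte_insert(struct pte *pte, mfn_t mfn)
    {
        pte->present = true;
        pte->mfn = mfn.m;
    }

    /* RESERVE-style operation: make sure the PTE slot exists, but leave it
     * not-present.  No frame number is needed, which is why
     * populate_pt_range() can drop its mfn parameter and use INVALID_MFN
     * internally. */
    static void pte_reserve(struct pte *pte)
    {
        pte->present = false;
        pte->mfn = INVALID_MFN.m;
    }

    int main(void)
    {
        struct pte e;
        pte_reserve(&e);
        printf("present=%d\n", e.present);  /* 0: slot exists, nothing mapped */
        pte_insert(&e, _mfn(0x1234));
        printf("mfn=%llx\n", (unsigned long long)e.mfn);
        return 0;
    }
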
From patchwork Tue Apr 3 15:32:44 2018
From: Julien Grall
To: xen-devel@lists.xen.org
Date: Tue, 3 Apr 2018 16:32:44 +0100
Message-Id: <20180403153251.19595-10-julien.grall@arm.com>
Subject: [Xen-devel] [for-4.11][PATCH v7 09/16] xen/pdx: Introduce helper to convert MFN <-> PDX
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich

This will avoid use of pfn_to_pdx(mfn_x(mfn)) over the code base.
Signed-off-by: Julien Grall
Reviewed-by: Wei Liu
Acked-by: Andrew Cooper
---
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: Wei Liu

Changes in v6:
    - Add Andrew's acked-by

Changes in v5:
    - Add Wei's reviewed-by

Changes in v4:
    - Patch added
---
 xen/include/xen/pdx.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/include/xen/pdx.h b/xen/include/xen/pdx.h
index 4c56645c4c..a151aac1a2 100644
--- a/xen/include/xen/pdx.h
+++ b/xen/include/xen/pdx.h
@@ -35,6 +35,9 @@ static inline unsigned long pdx_to_pfn(unsigned long pdx)
         ((pdx << pfn_pdx_hole_shift) & pfn_top_mask);
 }

+#define mfn_to_pdx(mfn) pfn_to_pdx(mfn_x(mfn))
+#define pdx_to_mfn(pdx) _mfn(pdx_to_pfn(pdx))
+
 extern void pfn_pdx_hole_setup(unsigned long);

 #endif /* HAS_PDX */
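
The two macros are thin compositions of the existing pfn/pdx converters with
the typesafe accessors, so call sites can write mfn_to_pdx(mfn) instead of
pfn_to_pdx(mfn_x(mfn)). A standalone sketch of their shape, using a toy
identity pfn<->pdx mapping (the real pdx code compresses holes out of the
frame-number space):

    #include <stdint.h>
    #include <assert.h>

    /* Toy typesafe MFN and a trivial pfn<->pdx mapping; not the Xen headers. */
    typedef struct { uint64_t m; } mfn_t;
    #define _mfn(x)   ((mfn_t){ (x) })
    #define mfn_x(m)  ((m).m)

    static uint64_t pfn_to_pdx(uint64_t pfn) { return pfn; }  /* identity here */
    static uint64_t pdx_to_pfn(uint64_t pdx) { return pdx; }

    /* The new helpers are just the typed compositions added by the patch. */
    #define mfn_to_pdx(mfn) pfn_to_pdx(mfn_x(mfn))
    #define pdx_to_mfn(pdx) _mfn(pdx_to_pfn(pdx))

    int main(void)
    {
        mfn_t mfn = _mfn(0x1234);
        assert(mfn_to_pdx(mfn) == 0x1234);
        assert(mfn_x(pdx_to_mfn(0x1234)) == 0x1234);
        return 0;
    }
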
[192.237.175.120]) by mx.google.com with ESMTPS id x13-v6si614867iti.89.2018.04.03.08.35.37 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 03 Apr 2018 08:35:37 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw1-00080E-2c; Tue, 03 Apr 2018 15:33:21 +0000 Received: from us1-rack-dfw2.inumbo.com ([104.130.134.6]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nvz-0007yz-RY for xen-devel@lists.xen.org; Tue, 03 Apr 2018 15:33:19 +0000 X-Inumbo-ID: 3fcb9b27-3754-11e8-9728-bc764e045a96 Received: from foss.arm.com (unknown [217.140.101.70]) by us1-rack-dfw2.inumbo.com (Halon) with ESMTP id 3fcb9b27-3754-11e8-9728-bc764e045a96; Tue, 03 Apr 2018 17:32:42 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5C6C71435; Tue, 3 Apr 2018 08:33:17 -0700 (PDT) Received: from e108454-lin.cambridge.arm.com (e108454-lin.cambridge.arm.com [10.1.206.53]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D5F603F24A; Tue, 3 Apr 2018 08:33:14 -0700 (PDT) From: Julien Grall To: xen-devel@lists.xen.org Date: Tue, 3 Apr 2018 16:32:45 +0100 Message-Id: <20180403153251.19595-11-julien.grall@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180403153251.19595-1-julien.grall@arm.com> References: <20180403153251.19595-1-julien.grall@arm.com> Subject: [Xen-devel] [for-4.11][PATCH v7 10/16] xen/mm: Switch map_pages_to_xen to use MFN typesafe X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Kevin Tian , Stefano Stabellini , Wei Liu , George Dunlap , Andrew Cooper , Ian Jackson , George Dunlap , Tim Deegan , Julien Grall , Jan Beulich , Shane Wang , Gang Wei MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" The current prototype is slightly confusing because it takes a virtual address and a physical frame (not address!). Switching to MFN will improve safety and reduce the chance to mistakenly invert the 2 parameters. Also, take the opportunity to switch (a - b) >> PAGE_SHIFT to PFN_DOWN(a - b) in the code modified. 
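(For illustration only, not part of this patch: a hypothetical call site with made-up variables, showing the shape of the conversion applied throughout this series, assuming the existing maddr_to_mfn() and PFN_DOWN() helpers.)

    /* Before: the frame number and the count are both bare unsigned longs,
     * so the virtual address and the frame are easy to mix up. */
    map_pages_to_xen(virt, maddr >> PAGE_SHIFT,
                     (maddr_end - maddr) >> PAGE_SHIFT, PAGE_HYPERVISOR);

    /* After: the frame is a typesafe mfn_t and the count uses PFN_DOWN(),
     * so swapping the first two parameters can be caught at build time. */
    map_pages_to_xen(virt, maddr_to_mfn(maddr),
                     PFN_DOWN(maddr_end - maddr), PAGE_HYPERVISOR);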
Signed-off-by: Julien Grall Acked-by: Andrew Cooper Reviewed-by: Wei Liu Reviewed-by: George Dunlap Acked-by: Stefano Stabellini --- Cc: Stefano Stabellini Cc: Julien Grall Cc: Andrew Cooper Cc: George Dunlap Cc: Ian Jackson Cc: Jan Beulich Cc: Konrad Rzeszutek Wilk Cc: Tim Deegan Cc: Wei Liu Cc: Gang Wei Cc: Shane Wang Cc: Kevin Tian Changes in v6: - Add Andrew's acked-by - Add Wei's and George's reviewed-by Changes in v5: - Use PFN_DOWN as suggested by Jan - Replace _mfn(0) by INVALID_MFN where relevant Changes in v4: - Patch added --- xen/arch/arm/mm.c | 4 +-- xen/arch/x86/mm.c | 58 +++++++++++++++++++------------------- xen/arch/x86/setup.c | 20 ++++++------- xen/arch/x86/smpboot.c | 2 +- xen/arch/x86/tboot.c | 11 ++++---- xen/arch/x86/x86_64/mm.c | 27 ++++++++++-------- xen/arch/x86/x86_64/mmconfig_64.c | 6 ++-- xen/common/efi/boot.c | 2 +- xen/common/vmap.c | 10 +++++-- xen/drivers/acpi/apei/erst.c | 2 +- xen/drivers/acpi/apei/hest.c | 2 +- xen/drivers/passthrough/vtd/dmar.c | 2 +- xen/include/asm-arm/mm.h | 2 +- xen/include/xen/mm.h | 2 +- 14 files changed, 80 insertions(+), 70 deletions(-) diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index 436df6936b..7af6baa3d6 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -1065,11 +1065,11 @@ out: } int map_pages_to_xen(unsigned long virt, - unsigned long mfn, + mfn_t mfn, unsigned long nr_mfns, unsigned int flags) { - return create_xen_entries(INSERT, virt, _mfn(mfn), nr_mfns, flags); + return create_xen_entries(INSERT, virt, mfn, nr_mfns, flags); } int populate_pt_range(unsigned long virt, unsigned long nr_mfns) diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 6d5f40482e..ec61887d76 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -213,7 +213,7 @@ static void __init init_frametable_chunk(void *start, void *end) while ( step && s + (step << PAGE_SHIFT) > e + (4 << PAGE_SHIFT) ) step >>= PAGETABLE_ORDER; mfn = alloc_boot_pages(step, step); - map_pages_to_xen(s, mfn_x(mfn), step, PAGE_HYPERVISOR); + map_pages_to_xen(s, mfn, step, PAGE_HYPERVISOR); } memset(start, 0, end - start); @@ -787,12 +787,12 @@ static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr) XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT); if ( unlikely(alias) && cacheattr ) - err = map_pages_to_xen(xen_va, mfn, 1, 0); + err = map_pages_to_xen(xen_va, _mfn(mfn), 1, 0); if ( !err ) - err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), mfn, 1, + err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1, PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr)); if ( unlikely(alias) && !cacheattr && !err ) - err = map_pages_to_xen(xen_va, mfn, 1, PAGE_HYPERVISOR); + err = map_pages_to_xen(xen_va, _mfn(mfn), 1, PAGE_HYPERVISOR); return err; } @@ -4645,7 +4645,7 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v) int map_pages_to_xen( unsigned long virt, - unsigned long mfn, + mfn_t mfn, unsigned long nr_mfns, unsigned int flags) { @@ -4677,13 +4677,13 @@ int map_pages_to_xen( ol3e = *pl3e; if ( cpu_has_page1gb && - !(((virt >> PAGE_SHIFT) | mfn) & + !(((virt >> PAGE_SHIFT) | mfn_x(mfn)) & ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)) && nr_mfns >= (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) && !(flags & (_PAGE_PAT | MAP_SMALL_PAGES)) ) { /* 1GB-page mapping. 
*/ - l3e_write_atomic(pl3e, l3e_from_pfn(mfn, l1f_to_lNf(flags))); + l3e_write_atomic(pl3e, l3e_from_mfn(mfn, l1f_to_lNf(flags))); if ( (l3e_get_flags(ol3e) & _PAGE_PRESENT) ) { @@ -4727,8 +4727,8 @@ int map_pages_to_xen( } virt += 1UL << L3_PAGETABLE_SHIFT; - if ( !mfn_eq(_mfn(mfn), INVALID_MFN) ) - mfn += 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT); + if ( !mfn_eq(mfn, INVALID_MFN) ) + mfn = mfn_add(mfn, 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)); nr_mfns -= 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT); continue; } @@ -4743,18 +4743,18 @@ int map_pages_to_xen( if ( ((l3e_get_pfn(ol3e) & ~(L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES - 1)) + (l2_table_offset(virt) << PAGETABLE_ORDER) + - l1_table_offset(virt) == mfn) && + l1_table_offset(virt) == mfn_x(mfn)) && ((lNf_to_l1f(l3e_get_flags(ol3e)) ^ flags) & ~(_PAGE_ACCESSED|_PAGE_DIRTY)) == 0 ) { /* We can skip to end of L3 superpage if we got a match. */ i = (1u << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - - (mfn & ((1 << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)); + (mfn_x(mfn) & ((1 << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)); if ( i > nr_mfns ) i = nr_mfns; virt += i << PAGE_SHIFT; - if ( !mfn_eq(_mfn(mfn), INVALID_MFN) ) - mfn += i; + if ( !mfn_eq(mfn, INVALID_MFN) ) + mfn = mfn_add(mfn, i); nr_mfns -= i; continue; } @@ -4792,14 +4792,14 @@ int map_pages_to_xen( if ( !pl2e ) return -ENOMEM; - if ( ((((virt >> PAGE_SHIFT) | mfn) & + if ( ((((virt >> PAGE_SHIFT) | mfn_x(mfn)) & ((1u << PAGETABLE_ORDER) - 1)) == 0) && (nr_mfns >= (1u << PAGETABLE_ORDER)) && !(flags & (_PAGE_PAT|MAP_SMALL_PAGES)) ) { /* Super-page mapping. */ ol2e = *pl2e; - l2e_write_atomic(pl2e, l2e_from_pfn(mfn, l1f_to_lNf(flags))); + l2e_write_atomic(pl2e, l2e_from_mfn(mfn, l1f_to_lNf(flags))); if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) ) { @@ -4822,8 +4822,8 @@ int map_pages_to_xen( } virt += 1UL << L2_PAGETABLE_SHIFT; - if ( !mfn_eq(_mfn(mfn), INVALID_MFN) ) - mfn += 1UL << PAGETABLE_ORDER; + if ( !mfn_eq(mfn, INVALID_MFN) ) + mfn = mfn_add(mfn, 1UL << PAGETABLE_ORDER); nr_mfns -= 1UL << PAGETABLE_ORDER; } else @@ -4842,18 +4842,18 @@ int map_pages_to_xen( /* Skip this PTE if there is no change. */ if ( (((l2e_get_pfn(*pl2e) & ~(L1_PAGETABLE_ENTRIES - 1)) + - l1_table_offset(virt)) == mfn) && + l1_table_offset(virt)) == mfn_x(mfn)) && (((lNf_to_l1f(l2e_get_flags(*pl2e)) ^ flags) & ~(_PAGE_ACCESSED|_PAGE_DIRTY)) == 0) ) { /* We can skip to end of L2 superpage if we got a match. 
*/ i = (1u << (L2_PAGETABLE_SHIFT - PAGE_SHIFT)) - - (mfn & ((1u << (L2_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)); + (mfn_x(mfn) & ((1u << (L2_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)); if ( i > nr_mfns ) i = nr_mfns; virt += i << L1_PAGETABLE_SHIFT; - if ( !mfn_eq(_mfn(mfn), INVALID_MFN) ) - mfn += i; + if ( !mfn_eq(mfn, INVALID_MFN) ) + mfn = mfn_add(mfn, i); nr_mfns -= i; goto check_l3; } @@ -4888,7 +4888,7 @@ int map_pages_to_xen( pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(virt); ol1e = *pl1e; - l1e_write_atomic(pl1e, l1e_from_pfn(mfn, flags)); + l1e_write_atomic(pl1e, l1e_from_mfn(mfn, flags)); if ( (l1e_get_flags(ol1e) & _PAGE_PRESENT) ) { unsigned int flush_flags = FLUSH_TLB | FLUSH_ORDER(0); @@ -4898,13 +4898,13 @@ int map_pages_to_xen( } virt += 1UL << L1_PAGETABLE_SHIFT; - if ( !mfn_eq(_mfn(mfn), INVALID_MFN) ) - mfn += 1UL; + if ( !mfn_eq(mfn, INVALID_MFN) ) + mfn = mfn_add(mfn, 1UL); nr_mfns -= 1UL; if ( (flags == PAGE_HYPERVISOR) && ((nr_mfns == 0) || - ((((virt >> PAGE_SHIFT) | mfn) & + ((((virt >> PAGE_SHIFT) | mfn_x(mfn)) & ((1u << PAGETABLE_ORDER) - 1)) == 0)) ) { unsigned long base_mfn; @@ -4957,7 +4957,7 @@ int map_pages_to_xen( if ( cpu_has_page1gb && (flags == PAGE_HYPERVISOR) && ((nr_mfns == 0) || - !(((virt >> PAGE_SHIFT) | mfn) & + !(((virt >> PAGE_SHIFT) | mfn_x(mfn)) & ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1))) ) { unsigned long base_mfn; @@ -5009,7 +5009,7 @@ int map_pages_to_xen( int populate_pt_range(unsigned long virt, unsigned long nr_mfns) { - return map_pages_to_xen(virt, mfn_x(INVALID_MFN), nr_mfns, MAP_SMALL_PAGES); + return map_pages_to_xen(virt, INVALID_MFN, nr_mfns, MAP_SMALL_PAGES); } /* @@ -5270,7 +5270,7 @@ void __set_fixmap( enum fixed_addresses idx, unsigned long mfn, unsigned long flags) { BUG_ON(idx >= __end_of_fixed_addresses); - map_pages_to_xen(__fix_to_virt(idx), mfn, 1, flags); + map_pages_to_xen(__fix_to_virt(idx), _mfn(mfn), 1, flags); } void *__init arch_vmap_virt_end(void) @@ -5541,7 +5541,7 @@ static void __memguard_change_range(void *p, unsigned long l, int guard) if ( guard ) flags &= ~_PAGE_PRESENT; - map_pages_to_xen(_p, mfn_x(virt_to_mfn(p)), PFN_DOWN(_l), flags); + map_pages_to_xen(_p, virt_to_mfn(p), PFN_DOWN(_l), flags); } void memguard_guard_range(void *p, unsigned long l) diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c index c0b97a748a..b79ccab49d 100644 --- a/xen/arch/x86/setup.c +++ b/xen/arch/x86/setup.c @@ -354,8 +354,8 @@ void *__init bootstrap_map(const module_t *mod) if ( end - start > BOOTSTRAP_MAP_LIMIT - map_cur ) return NULL; - map_pages_to_xen(map_cur, start >> PAGE_SHIFT, - (end - start) >> PAGE_SHIFT, PAGE_HYPERVISOR); + map_pages_to_xen(map_cur, maddr_to_mfn(start), + PFN_DOWN(end - start), PAGE_HYPERVISOR); map_cur += end - start; return ret; } @@ -979,8 +979,8 @@ void __init noreturn __start_xen(unsigned long mbi_p) { end = min(e, limit); set_pdx_range(s >> PAGE_SHIFT, end >> PAGE_SHIFT); - map_pages_to_xen((unsigned long)__va(s), s >> PAGE_SHIFT, - (end - s) >> PAGE_SHIFT, PAGE_HYPERVISOR); + map_pages_to_xen((unsigned long)__va(s), maddr_to_mfn(s), + PFN_DOWN(end - s), PAGE_HYPERVISOR); } if ( e > min(HYPERVISOR_VIRT_END - DIRECTMAP_VIRT_START, @@ -1294,7 +1294,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) if ( map_e < end ) { - map_pages_to_xen((unsigned long)__va(map_e), PFN_DOWN(map_e), + map_pages_to_xen((unsigned long)__va(map_e), maddr_to_mfn(map_e), PFN_DOWN(end - map_e), PAGE_HYPERVISOR); init_boot_pages(map_e, end); map_e = end; @@ -1304,13 +1304,13 @@ void __init noreturn 
__start_xen(unsigned long mbi_p) { /* This range must not be passed to the boot allocator and * must also not be mapped with _PAGE_GLOBAL. */ - map_pages_to_xen((unsigned long)__va(map_e), PFN_DOWN(map_e), + map_pages_to_xen((unsigned long)__va(map_e), maddr_to_mfn(map_e), PFN_DOWN(e - map_e), __PAGE_HYPERVISOR_RW); } if ( s < map_s ) { - map_pages_to_xen((unsigned long)__va(s), s >> PAGE_SHIFT, - (map_s - s) >> PAGE_SHIFT, PAGE_HYPERVISOR); + map_pages_to_xen((unsigned long)__va(s), maddr_to_mfn(s), + PFN_DOWN(map_s - s), PAGE_HYPERVISOR); init_boot_pages(s, map_s); } } @@ -1320,7 +1320,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) set_pdx_range(mod[i].mod_start, mod[i].mod_start + PFN_UP(mod[i].mod_end)); map_pages_to_xen((unsigned long)mfn_to_virt(mod[i].mod_start), - mod[i].mod_start, + _mfn(mod[i].mod_start), PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR); } @@ -1333,7 +1333,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) if ( e > s ) map_pages_to_xen((unsigned long)__va(kexec_crash_area.start), - s, e - s, PAGE_HYPERVISOR); + _mfn(s), e - s, PAGE_HYPERVISOR); } #endif diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c index 98873df429..80549ad925 100644 --- a/xen/arch/x86/smpboot.c +++ b/xen/arch/x86/smpboot.c @@ -623,7 +623,7 @@ unsigned long alloc_stub_page(unsigned int cpu, unsigned long *mfn) } stub_va = XEN_VIRT_END - (cpu + 1) * PAGE_SIZE; - if ( map_pages_to_xen(stub_va, mfn_x(page_to_mfn(pg)), 1, + if ( map_pages_to_xen(stub_va, page_to_mfn(pg), 1, PAGE_HYPERVISOR_RX | MAP_SMALL_PAGES) ) { if ( !*mfn ) diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c index d36bf33407..71e757c553 100644 --- a/xen/arch/x86/tboot.c +++ b/xen/arch/x86/tboot.c @@ -336,22 +336,23 @@ static void tboot_gen_frametable_integrity(const uint8_t key[TB_KEY_SIZE], void tboot_shutdown(uint32_t shutdown_type) { - uint32_t map_base, map_size; + mfn_t map_base; + uint32_t map_size; int err; g_tboot_shared->shutdown_type = shutdown_type; /* Create identity map for tboot shutdown code. 
*/ /* do before S3 integrity because mapping tboot may change xenheap */ - map_base = PFN_DOWN(g_tboot_shared->tboot_base); + map_base = maddr_to_mfn(g_tboot_shared->tboot_base); map_size = PFN_UP(g_tboot_shared->tboot_size); - err = map_pages_to_xen(map_base << PAGE_SHIFT, map_base, map_size, + err = map_pages_to_xen(mfn_to_maddr(map_base), map_base, map_size, __PAGE_HYPERVISOR); if ( err != 0 ) { - printk("error (%#x) mapping tboot pages (mfns) @ %#x, %#x\n", err, - map_base, map_size); + printk("error (%#x) mapping tboot pages (mfns) @ %"PRI_mfn", %#x\n", + err, mfn_x(map_base), map_size); return; } diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c index 1c83de0451..f6dd95aa47 100644 --- a/xen/arch/x86/x86_64/mm.c +++ b/xen/arch/x86/x86_64/mm.c @@ -40,6 +40,10 @@ asm(".file \"" __FILE__ "\""); #include #include +/* Override macros from asm/page.h to make them work with mfn_t */ +#undef page_to_mfn +#define page_to_mfn(pg) _mfn(__page_to_mfn(pg)) + unsigned int __read_mostly m2p_compat_vstart = __HYPERVISOR_COMPAT_VIRT_START; l2_pgentry_t *compat_idle_pg_table_l2; @@ -111,14 +115,14 @@ static int hotadd_mem_valid(unsigned long pfn, struct mem_hotadd_info *info) return (pfn < info->epfn && pfn >= info->spfn); } -static unsigned long alloc_hotadd_mfn(struct mem_hotadd_info *info) +static mfn_t alloc_hotadd_mfn(struct mem_hotadd_info *info) { - unsigned mfn; + mfn_t mfn; ASSERT((info->cur + ( 1UL << PAGETABLE_ORDER) < info->epfn) && info->cur >= info->spfn); - mfn = info->cur; + mfn = _mfn(info->cur); info->cur += (1UL << PAGETABLE_ORDER); return mfn; } @@ -317,7 +321,8 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info) */ static int setup_compat_m2p_table(struct mem_hotadd_info *info) { - unsigned long i, va, smap, emap, rwva, epfn = info->epfn, mfn; + unsigned long i, va, smap, emap, rwva, epfn = info->epfn; + mfn_t mfn; unsigned int n; l3_pgentry_t *l3_ro_mpt = NULL; l2_pgentry_t *l2_ro_mpt = NULL; @@ -378,7 +383,7 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info) memset((void *)rwva, 0xFF, 1UL << L2_PAGETABLE_SHIFT); /* NB. Cannot be GLOBAL as the ptes get copied into per-VM space. */ l2e_write(&l2_ro_mpt[l2_table_offset(va)], - l2e_from_pfn(mfn, _PAGE_PSE|_PAGE_PRESENT)); + l2e_from_mfn(mfn, _PAGE_PSE|_PAGE_PRESENT)); } #undef CNT #undef MFN @@ -438,7 +443,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info) break; if ( n < CNT ) { - unsigned long mfn = alloc_hotadd_mfn(info); + mfn_t mfn = alloc_hotadd_mfn(info); ret = map_pages_to_xen( RDWR_MPT_VIRT_START + i * sizeof(unsigned long), @@ -473,7 +478,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info) } /* NB. Cannot be GLOBAL: guest user mode should not see it. */ - l2e_write(l2_ro_mpt, l2e_from_pfn(mfn, + l2e_write(l2_ro_mpt, l2e_from_mfn(mfn, /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT)); } if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) ) @@ -692,7 +697,7 @@ void __init zap_low_mappings(void) flush_local(FLUSH_TLB_GLOBAL); /* Replace with mapping of the boot trampoline only. 
*/ - map_pages_to_xen(trampoline_phys, trampoline_phys >> PAGE_SHIFT, + map_pages_to_xen(trampoline_phys, maddr_to_mfn(trampoline_phys), PFN_UP(trampoline_end - trampoline_start), __PAGE_HYPERVISOR); } @@ -769,7 +774,7 @@ static int setup_frametable_chunk(void *start, void *end, { unsigned long s = (unsigned long)start; unsigned long e = (unsigned long)end; - unsigned long mfn; + mfn_t mfn; int err; ASSERT(!(s & ((1 << L2_PAGETABLE_SHIFT) - 1))); @@ -1364,7 +1369,7 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm) i = virt_to_mfn(HYPERVISOR_VIRT_END - 1) + 1; if ( spfn < i ) { - ret = map_pages_to_xen((unsigned long)mfn_to_virt(spfn), spfn, + ret = map_pages_to_xen((unsigned long)mfn_to_virt(spfn), _mfn(spfn), min(epfn, i) - spfn, PAGE_HYPERVISOR); if ( ret ) goto destroy_directmap; @@ -1373,7 +1378,7 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm) { if ( i < spfn ) i = spfn; - ret = map_pages_to_xen((unsigned long)mfn_to_virt(i), i, + ret = map_pages_to_xen((unsigned long)mfn_to_virt(i), _mfn(i), epfn - i, __PAGE_HYPERVISOR_RW); if ( ret ) goto destroy_directmap; diff --git a/xen/arch/x86/x86_64/mmconfig_64.c b/xen/arch/x86/x86_64/mmconfig_64.c index 958b6cf2f4..2b3085931e 100644 --- a/xen/arch/x86/x86_64/mmconfig_64.c +++ b/xen/arch/x86/x86_64/mmconfig_64.c @@ -125,9 +125,9 @@ static void __iomem *mcfg_ioremap(const struct acpi_mcfg_allocation *cfg, return NULL; if (map_pages_to_xen(virt, - (cfg->address >> PAGE_SHIFT) + - (cfg->start_bus_number << (20 - PAGE_SHIFT)), - size >> PAGE_SHIFT, prot)) + mfn_add(maddr_to_mfn(cfg->address), + (cfg->start_bus_number << (20 - PAGE_SHIFT))), + PFN_DOWN(size), prot)) return NULL; return (void __iomem *) virt; diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c index 469bf980cc..64d12685d3 100644 --- a/xen/common/efi/boot.c +++ b/xen/common/efi/boot.c @@ -1464,7 +1464,7 @@ void __init efi_init_memory(void) if ( (unsigned long)mfn_to_virt(emfn - 1) >= HYPERVISOR_VIRT_END ) prot &= ~_PAGE_GLOBAL; if ( map_pages_to_xen((unsigned long)mfn_to_virt(smfn), - smfn, emfn - smfn, prot) == 0 ) + _mfn(smfn), emfn - smfn, prot) == 0 ) desc->VirtualStart = (unsigned long)maddr_to_virt(desc->PhysicalStart); else diff --git a/xen/common/vmap.c b/xen/common/vmap.c index 11785ffb0a..04f5db386d 100644 --- a/xen/common/vmap.c +++ b/xen/common/vmap.c @@ -9,6 +9,10 @@ #include #include +/* Override macros from asm/page.h to make them work with mfn_t */ +#undef page_to_mfn +#define page_to_mfn(pg) _mfn(__page_to_mfn(pg)) + static DEFINE_SPINLOCK(vm_lock); static void *__read_mostly vm_base[VMAP_REGION_NR]; #define vm_bitmap(x) ((unsigned long *)vm_base[x]) @@ -208,7 +212,7 @@ void *__vmap(const mfn_t *mfn, unsigned int granularity, for ( ; va && nr--; ++mfn, cur += PAGE_SIZE * granularity ) { - if ( map_pages_to_xen(cur, mfn_x(*mfn), granularity, flags) ) + if ( map_pages_to_xen(cur, *mfn, granularity, flags) ) { vunmap(va); va = NULL; @@ -234,7 +238,7 @@ void vunmap(const void *va) #ifndef _PAGE_NONE destroy_xen_mappings(addr, addr + PAGE_SIZE * pages); #else /* Avoid tearing down intermediate page tables. 
*/ - map_pages_to_xen(addr, 0, pages, _PAGE_NONE); + map_pages_to_xen(addr, INVALID_MFN, pages, _PAGE_NONE); #endif vm_free(va); } @@ -258,7 +262,7 @@ static void *vmalloc_type(size_t size, enum vmap_region type) pg = alloc_domheap_page(NULL, 0); if ( pg == NULL ) goto error; - mfn[i] = _mfn(page_to_mfn(pg)); + mfn[i] = page_to_mfn(pg); } va = __vmap(mfn, 1, pages, 1, PAGE_HYPERVISOR, type); diff --git a/xen/drivers/acpi/apei/erst.c b/xen/drivers/acpi/apei/erst.c index 14acf5d773..7fc4de5de9 100644 --- a/xen/drivers/acpi/apei/erst.c +++ b/xen/drivers/acpi/apei/erst.c @@ -799,7 +799,7 @@ int __init erst_init(void) printk(KERN_WARNING "Failed to get ERST table: %s\n", msg); return -EINVAL; } - map_pages_to_xen((unsigned long)__va(erst_addr), PFN_DOWN(erst_addr), + map_pages_to_xen((unsigned long)__va(erst_addr), maddr_to_mfn(erst_addr), PFN_UP(erst_addr + erst_len) - PFN_DOWN(erst_addr), PAGE_HYPERVISOR); erst_tab = __va(erst_addr); diff --git a/xen/drivers/acpi/apei/hest.c b/xen/drivers/acpi/apei/hest.c index f74e7c2a06..70734ab0e2 100644 --- a/xen/drivers/acpi/apei/hest.c +++ b/xen/drivers/acpi/apei/hest.c @@ -184,7 +184,7 @@ void __init acpi_hest_init(void) acpi_format_exception(status)); goto err; } - map_pages_to_xen((unsigned long)__va(hest_addr), PFN_DOWN(hest_addr), + map_pages_to_xen((unsigned long)__va(hest_addr), maddr_to_mfn(hest_addr), PFN_UP(hest_addr + hest_len) - PFN_DOWN(hest_addr), PAGE_HYPERVISOR); hest_tab = __va(hest_addr); diff --git a/xen/drivers/passthrough/vtd/dmar.c b/xen/drivers/passthrough/vtd/dmar.c index d713a8ca5d..46decd4eb1 100644 --- a/xen/drivers/passthrough/vtd/dmar.c +++ b/xen/drivers/passthrough/vtd/dmar.c @@ -1008,7 +1008,7 @@ int __init acpi_dmar_init(void) if ( ACPI_SUCCESS(acpi_get_table_phys(ACPI_SIG_DMAR, 0, &dmar_addr, &dmar_len)) ) { - map_pages_to_xen((unsigned long)__va(dmar_addr), PFN_DOWN(dmar_addr), + map_pages_to_xen((unsigned long)__va(dmar_addr), maddr_to_mfn(dmar_addr), PFN_UP(dmar_addr + dmar_len) - PFN_DOWN(dmar_addr), PAGE_HYPERVISOR); dmar_table = __va(dmar_addr); diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h index 09bec67f63..5a9ca6a55b 100644 --- a/xen/include/asm-arm/mm.h +++ b/xen/include/asm-arm/mm.h @@ -138,7 +138,7 @@ extern vaddr_t xenheap_virt_start; #endif #ifdef CONFIG_ARM_32 -#define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page)) +#define is_xen_heap_page(page) is_xen_heap_mfn(__page_to_mfn(page)) #define is_xen_heap_mfn(mfn) ({ \ unsigned long mfn_ = (mfn); \ (mfn_ >= mfn_x(xenheap_mfn_start) && \ diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h index 538478fa24..5a7d25e33f 100644 --- a/xen/include/xen/mm.h +++ b/xen/include/xen/mm.h @@ -165,7 +165,7 @@ bool scrub_free_pages(void); /* Map machine page range in Xen virtual address space. */ int map_pages_to_xen( unsigned long virt, - unsigned long mfn, + mfn_t mfn, unsigned long nr_mfns, unsigned int flags); /* Alter the permissions of a range of Xen virtual address space. 
*/ From patchwork Tue Apr 3 15:32:46 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 132769 Delivered-To: patch@linaro.org Received: by 10.46.84.29 with SMTP id i29csp3954414ljb; Tue, 3 Apr 2018 08:35:32 -0700 (PDT) X-Google-Smtp-Source: AIpwx48RguW5y6OwAlj6mUp9C8foc35VRE05Zk+Fuf+4Gvi5hY5O3j6lFCUvCD2j6SPUscHl9etg X-Received: by 10.107.68.4 with SMTP id r4mr13462875ioa.161.1522769732809; Tue, 03 Apr 2018 08:35:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1522769732; cv=none; d=google.com; s=arc-20160816; b=Ibw5GSyFyTxbVgcb7uCPg4ZiTL1UHgu0CuYjTcwE/ArR1+ZUyUWkDzwMvqa10KH6UR IuSiyDtPCSVEc2ivjovgi7q9/Wv4NLTMUBhpCm1T3EWVvo+iLW2KtK/hiZdI9pRWjna5 SmtN70j2wgMXJtjV21tqfkFyvtBfKQau7hOHggWQICfqbf1PPAa1v42g/oli8lF6G5nd aCZnygTCQ30setdaba8Z1t7YtmL7VqoIfEQdHSR8a4dN/Ipr1ybcce3a9aMFP6XVnvZn kgyo9lsBuMFAWEgsOlw0yXQ3fkIIpdpEUi1k+5dTlpRJ5PwxiSSTp7aloYs0+QT058s4 mxjQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:mime-version:cc :list-subscribe:list-help:list-post:list-unsubscribe:list-id :precedence:subject:references:in-reply-to:message-id:date:to:from :arc-authentication-results; bh=2XGJQpLO6RrL60Uo6+YaSs8lmjkBCfmgGpLJm3GCrZc=; b=o7zDEfJgjUg0zLkv7AZK3GG6lRdYNTS7wMK/wS0J1X1ESDHFeaR8MPZVRFRlyP2tav /Mk8HUGqMte0WJnqY2LtedkjlBiWn2kIP22COKkQ7jeTCIddczsGn2x56ItLMlDnXJEu CLDni3LDOseRZT4uvwjSs31qDapi5iUJi7w8fFC15ZaRtsJ9ose1Nn56cSl/69Kq/HAb FY1u2NsX8sfYXRbT3wPe7RfX5l8eWrXAFLBYbhpgDG1rte0B7OZOVEWc+ooRk/CT3RNu PZ7AqlZqUHSgfMA1/BYeWegcqSz7bfCdix3hnqNJl0dto/EnL+0NwLg7hxsh22033QcY 4WzQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org. 
[192.237.175.120]) by mx.google.com with ESMTPS id a17si2049098ioe.268.2018.04.03.08.35.32 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 03 Apr 2018 08:35:32 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw2-00081u-L5; Tue, 03 Apr 2018 15:33:22 +0000 Received: from us1-rack-dfw2.inumbo.com ([104.130.134.6]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw1-000808-1N for xen-devel@lists.xen.org; Tue, 03 Apr 2018 15:33:21 +0000 X-Inumbo-ID: 411645bc-3754-11e8-9728-bc764e045a96 Received: from foss.arm.com (unknown [217.140.101.70]) by us1-rack-dfw2.inumbo.com (Halon) with ESMTP id 411645bc-3754-11e8-9728-bc764e045a96; Tue, 03 Apr 2018 17:32:44 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 87ADF15AB; Tue, 3 Apr 2018 08:33:19 -0700 (PDT) Received: from e108454-lin.cambridge.arm.com (e108454-lin.cambridge.arm.com [10.1.206.53]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9A9D13F24A; Tue, 3 Apr 2018 08:33:17 -0700 (PDT) From: Julien Grall To: xen-devel@lists.xen.org Date: Tue, 3 Apr 2018 16:32:46 +0100 Message-Id: <20180403153251.19595-12-julien.grall@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180403153251.19595-1-julien.grall@arm.com> References: <20180403153251.19595-1-julien.grall@arm.com> Subject: [Xen-devel] [for-4.11][PATCH v7 11/16] xen/mm: Switch some of page_alloc.c to typesafe MFN X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Stefano Stabellini , Wei Liu , George Dunlap , Andrew Cooper , Ian Jackson , George Dunlap , Tim Deegan , Julien Grall , Jan Beulich MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" No functional change intended. Signed-off-by: Julien Grall Reviewed-by: Wei Liu Reviewed-by: George Dunlap --- Cc: Andrew Cooper Cc: George Dunlap Cc: Ian Jackson Cc: Jan Beulich Cc: Konrad Rzeszutek Wilk Cc: Stefano Stabellini Cc: Tim Deegan Cc: Wei Liu Cc: Julien Grall Wei, I have kept the reviewed-by because the changes were minor. Let's me know if you want me to drop it. Changes in v6: - Add George's reviewed-by Changes in v5: - Add Wei's reviewed-by - Fix coding style (space before and after '+') - Rework the commit title as page_alloc.c was not fully converted to typesafe MFN. 
Changes in v4: - Patch added --- xen/common/page_alloc.c | 64 ++++++++++++++++++++++++++-------------------- xen/include/asm-arm/numa.h | 8 +++--- 2 files changed, 41 insertions(+), 31 deletions(-) diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index 4de8988bea..6e50fb2621 100644 --- a/xen/common/page_alloc.c +++ b/xen/common/page_alloc.c @@ -151,6 +151,12 @@ #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL) #endif +/* Override macros from asm/page.h to make them work with mfn_t */ +#undef page_to_mfn +#define page_to_mfn(pg) _mfn(__page_to_mfn(pg)) +#undef mfn_to_page +#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn)) + /* * Comma-separated list of hexadecimal page numbers containing bad bytes. * e.g. 'badpage=0x3f45,0x8a321'. @@ -197,7 +203,7 @@ PAGE_LIST_HEAD(page_broken_list); * first_valid_mfn is exported because it is use in ARM specific NUMA * helpers. See comment in asm-arm/numa.h. */ -unsigned long first_valid_mfn = ~0UL; +mfn_t first_valid_mfn = INVALID_MFN_INITIALIZER; static struct bootmem_region { unsigned long s, e; /* MFNs @s through @e-1 inclusive are free */ @@ -283,7 +289,7 @@ void __init init_boot_pages(paddr_t ps, paddr_t pe) if ( pe <= ps ) return; - first_valid_mfn = min_t(unsigned long, ps >> PAGE_SHIFT, first_valid_mfn); + first_valid_mfn = mfn_min(maddr_to_mfn(ps), first_valid_mfn); bootmem_region_add(ps >> PAGE_SHIFT, pe >> PAGE_SHIFT); @@ -397,7 +403,7 @@ mfn_t __init alloc_boot_pages(unsigned long nr_pfns, unsigned long pfn_align) #define bits_to_zone(b) (((b) < (PAGE_SHIFT + 1)) ? 1 : ((b) - PAGE_SHIFT)) #define page_to_zone(pg) (is_xen_heap_page(pg) ? MEMZONE_XEN : \ - (flsl(page_to_mfn(pg)) ? : 1)) + (flsl(mfn_x(page_to_mfn(pg))) ? : 1)) typedef struct page_list_head heap_by_zone_and_order_t[NR_ZONES][MAX_ORDER+1]; static heap_by_zone_and_order_t *_heap[MAX_NUMNODES]; @@ -729,7 +735,7 @@ static void page_list_add_scrub(struct page_info *pg, unsigned int node, static void poison_one_page(struct page_info *pg) { #ifdef CONFIG_SCRUB_DEBUG - mfn_t mfn = _mfn(page_to_mfn(pg)); + mfn_t mfn = page_to_mfn(pg); uint64_t *ptr; if ( !scrub_debug ) @@ -744,7 +750,7 @@ static void poison_one_page(struct page_info *pg) static void check_one_page(struct page_info *pg) { #ifdef CONFIG_SCRUB_DEBUG - mfn_t mfn = _mfn(page_to_mfn(pg)); + mfn_t mfn = page_to_mfn(pg); const uint64_t *ptr; unsigned int i; @@ -992,7 +998,8 @@ static struct page_info *alloc_heap_pages( /* Ensure cache and RAM are consistent for platforms where the * guest can control its own visibility of/through the cache. */ - flush_page_to_ram(page_to_mfn(&pg[i]), !(memflags & MEMF_no_icache_flush)); + flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])), + !(memflags & MEMF_no_icache_flush)); } spin_unlock(&heap_lock); @@ -1344,7 +1351,8 @@ bool scrub_free_pages(void) static void free_heap_pages( struct page_info *pg, unsigned int order, bool need_scrub) { - unsigned long mask, mfn = page_to_mfn(pg); + unsigned long mask; + mfn_t mfn = page_to_mfn(pg); unsigned int i, node = phys_to_nid(page_to_maddr(pg)), tainted = 0; unsigned int zone = page_to_zone(pg); @@ -1381,7 +1389,7 @@ static void free_heap_pages( /* This page is not a guest frame any more. 
*/ page_set_owner(&pg[i], NULL); /* set_gpfn_from_mfn snoops pg owner */ - set_gpfn_from_mfn(mfn + i, INVALID_M2P_ENTRY); + set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY); if ( need_scrub ) { @@ -1409,12 +1417,12 @@ static void free_heap_pages( { mask = 1UL << order; - if ( (page_to_mfn(pg) & mask) ) + if ( (mfn_x(page_to_mfn(pg)) & mask) ) { struct page_info *predecessor = pg - mask; /* Merge with predecessor block? */ - if ( !mfn_valid(_mfn(page_to_mfn(predecessor))) || + if ( !mfn_valid(page_to_mfn(predecessor)) || !page_state_is(predecessor, free) || (PFN_ORDER(predecessor) != order) || (phys_to_nid(page_to_maddr(predecessor)) != node) ) @@ -1437,7 +1445,7 @@ static void free_heap_pages( struct page_info *successor = pg + mask; /* Merge with successor block? */ - if ( !mfn_valid(_mfn(page_to_mfn(successor))) || + if ( !mfn_valid(page_to_mfn(successor)) || !page_state_is(successor, free) || (PFN_ORDER(successor) != order) || (phys_to_nid(page_to_maddr(successor)) != node) ) @@ -1470,7 +1478,7 @@ static unsigned long mark_page_offline(struct page_info *pg, int broken) { unsigned long nx, x, y = pg->count_info; - ASSERT(page_is_ram_type(page_to_mfn(pg), RAM_TYPE_CONVENTIONAL)); + ASSERT(page_is_ram_type(mfn_x(page_to_mfn(pg)), RAM_TYPE_CONVENTIONAL)); ASSERT(spin_is_locked(&heap_lock)); do { @@ -1533,7 +1541,7 @@ int offline_page(unsigned long mfn, int broken, uint32_t *status) } *status = 0; - pg = mfn_to_page(mfn); + pg = mfn_to_page(_mfn(mfn)); if ( is_xen_fixed_mfn(mfn) ) { @@ -1640,7 +1648,7 @@ unsigned int online_page(unsigned long mfn, uint32_t *status) return -EINVAL; } - pg = mfn_to_page(mfn); + pg = mfn_to_page(_mfn(mfn)); spin_lock(&heap_lock); @@ -1694,7 +1702,7 @@ int query_page_offline(unsigned long mfn, uint32_t *status) *status = 0; spin_lock(&heap_lock); - pg = mfn_to_page(mfn); + pg = mfn_to_page(_mfn(mfn)); if ( page_state_is(pg, offlining) ) *status |= PG_OFFLINE_STATUS_OFFLINE_PENDING; @@ -1726,7 +1734,7 @@ static void init_heap_pages( * Update first_valid_mfn to ensure those regions are covered. 
*/ spin_lock(&heap_lock); - first_valid_mfn = min_t(unsigned long, page_to_mfn(pg), first_valid_mfn); + first_valid_mfn = mfn_min(page_to_mfn(pg), first_valid_mfn); spin_unlock(&heap_lock); for ( i = 0; i < nr_pages; i++ ) @@ -1735,14 +1743,14 @@ static void init_heap_pages( if ( unlikely(!avail[nid]) ) { - unsigned long s = page_to_mfn(pg + i); - unsigned long e = page_to_mfn(pg + nr_pages - 1) + 1; + unsigned long s = mfn_x(page_to_mfn(pg + i)); + unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1)); bool_t use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) && !(s & ((1UL << MAX_ORDER) - 1)) && (find_first_set_bit(e) <= find_first_set_bit(s)); unsigned long n; - n = init_node_heap(nid, page_to_mfn(pg+i), nr_pages - i, + n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i, &use_tail); BUG_ON(i + n > nr_pages); if ( n && !use_tail ) @@ -1796,7 +1804,7 @@ void __init end_boot_allocator(void) if ( (r->s < r->e) && (phys_to_nid(pfn_to_paddr(r->s)) == cpu_to_node(0)) ) { - init_heap_pages(mfn_to_page(r->s), r->e - r->s); + init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s); r->e = r->s; break; } @@ -1805,7 +1813,7 @@ void __init end_boot_allocator(void) { struct bootmem_region *r = &bootmem_region_list[i]; if ( r->s < r->e ) - init_heap_pages(mfn_to_page(r->s), r->e - r->s); + init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s); } nr_bootmem_regions = 0; init_heap_pages(virt_to_page(bootmem_region_list), 1); @@ -1862,7 +1870,7 @@ static void __init smp_scrub_heap_pages(void *data) for ( mfn = start; mfn < end; mfn++ ) { - pg = mfn_to_page(mfn); + pg = mfn_to_page(_mfn(mfn)); /* Check the mfn is valid and page is free. */ if ( !mfn_valid(_mfn(mfn)) || !page_state_is(pg, free) ) @@ -1915,7 +1923,7 @@ static void __init scrub_heap_pages(void) if ( !node_spanned_pages(i) ) continue; /* Calculate Node memory start and end address. */ - start = max(node_start_pfn(i), first_valid_mfn); + start = max(node_start_pfn(i), mfn_x(first_valid_mfn)); end = min(node_start_pfn(i) + node_spanned_pages(i), max_page); /* Just in case NODE has 1 page and starts below first_valid_mfn. */ end = max(end, start); @@ -2159,17 +2167,17 @@ void free_xenheap_pages(void *v, unsigned int order) void init_domheap_pages(paddr_t ps, paddr_t pe) { - unsigned long smfn, emfn; + mfn_t smfn, emfn; ASSERT(!in_irq()); - smfn = round_pgup(ps) >> PAGE_SHIFT; - emfn = round_pgdown(pe) >> PAGE_SHIFT; + smfn = maddr_to_mfn(round_pgup(ps)); + emfn = maddr_to_mfn(round_pgup(pe)); - if ( emfn <= smfn ) + if ( mfn_x(emfn) <= mfn_x(smfn) ) return; - init_heap_pages(mfn_to_page(smfn), emfn - smfn); + init_heap_pages(mfn_to_page(smfn), mfn_x(emfn) - mfn_x(smfn)); } diff --git a/xen/include/asm-arm/numa.h b/xen/include/asm-arm/numa.h index 7e0b69413d..490d1f31aa 100644 --- a/xen/include/asm-arm/numa.h +++ b/xen/include/asm-arm/numa.h @@ -1,6 +1,8 @@ #ifndef __ARCH_ARM_NUMA_H #define __ARCH_ARM_NUMA_H +#include + typedef u8 nodeid_t; /* Fake one node for now. See also node_online_map. */ @@ -16,11 +18,11 @@ static inline __attribute__((pure)) nodeid_t phys_to_nid(paddr_t addr) * TODO: make first_valid_mfn static when NUMA is supported on Arm, this * is required because the dummy helpers are using it. 
*/ -extern unsigned long first_valid_mfn; +extern mfn_t first_valid_mfn; /* XXX: implement NUMA support */ -#define node_spanned_pages(nid) (max_page - first_valid_mfn) -#define node_start_pfn(nid) (first_valid_mfn) +#define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn)) +#define node_start_pfn(nid) (mfn_x(first_valid_mfn)) #define __node_distance(a, b) (20) static inline unsigned int arch_get_dma_bitsize(void) From patchwork Tue Apr 3 15:32:47 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 132768 Delivered-To: patch@linaro.org Received: by 10.46.84.29 with SMTP id i29csp3954378ljb; Tue, 3 Apr 2018 08:35:30 -0700 (PDT) X-Google-Smtp-Source: AIpwx49oESKccuUr3W2gZW+mZoIs3kem5FaKS6e1lxASlJus1LP7vKK/3ykTLQwml+whpNfjUv1t X-Received: by 2002:a24:5e83:: with SMTP id h125-v6mr5449078itb.79.1522769730659; Tue, 03 Apr 2018 08:35:30 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1522769730; cv=none; d=google.com; s=arc-20160816; b=ilsWbi5peKQU5lnYmkcQxIGP2Q5oDO6AkcppP19bsGH5y9oJWxbp/2prbbh/stnIC1 vG1JcignpYY9Kwwasmimzp6l6OdRIxbRZMVJQdFaVqLlQF3QW0ZIkh40YDnpBN44ZE1u TO2ywrNb6VzSnRtoyCzm+jVaiDl9Bx6pJI/zGXP5yIU9bPSoVbLsqsgJhDrgzNUkL8Qf lc8upsIGuvDlEKgOs16TN67vgK57aqdUT99BcbcIy1y/e0ksTjvjsy6hLpd/DyBhqoxP P0Enhz52kEZkLR4YpeWZec9E6ugnJH+e8QXoEzznbxNtT7TkOU5/WuZSGTcvXpcgCcTV NUqQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:mime-version:cc :list-subscribe:list-help:list-post:list-unsubscribe:list-id :precedence:subject:references:in-reply-to:message-id:date:to:from :arc-authentication-results; bh=6Lp2cL4ag1/QaUHUssqd/DBPbGtDMIMcSZwzit9FRBQ=; b=j9ikOzc8X027jwNSTgan8NQKOMhUizCImzcMXwGc0qZAz+oi12e2Ok48inQ/CF6JUK 7Q2womeX8bFv2Ad51PauYatWmAVMXucd39Oi1xluIdqgv8hfw6T4yT9I3LPMQ7jOUlEo reVLa/tXMZCOjiM6VWle670U83vFgdO8yjpY0JJZc+5nN+s6z6Oe/5NeZQGwt0PmMTpW wRKQJoEq0vumSNM/s1deLWkMr3XlPcPdnPdRzNXTrvyg1cU7NudL9cRe/HtsUKt9vaC8 sEr8cyEKsxbMD4kLqxIXHaBZ/bo7YjlP9SC7rxndpSHdYUPPYB0/t6mOBm89+rOZrxht upDQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org. 
[192.237.175.120]) by mx.google.com with ESMTPS id 96si2106728iot.208.2018.04.03.08.35.30 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 03 Apr 2018 08:35:30 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw4-00083z-27; Tue, 03 Apr 2018 15:33:24 +0000 Received: from us1-rack-dfw2.inumbo.com ([104.130.134.6]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw3-00082w-AR for xen-devel@lists.xen.org; Tue, 03 Apr 2018 15:33:23 +0000 X-Inumbo-ID: 424d8e0f-3754-11e8-9728-bc764e045a96 Received: from foss.arm.com (unknown [217.140.101.70]) by us1-rack-dfw2.inumbo.com (Halon) with ESMTP id 424d8e0f-3754-11e8-9728-bc764e045a96; Tue, 03 Apr 2018 17:32:46 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8FE6A1435; Tue, 3 Apr 2018 08:33:21 -0700 (PDT) Received: from e108454-lin.cambridge.arm.com (e108454-lin.cambridge.arm.com [10.1.206.53]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C61433F24A; Tue, 3 Apr 2018 08:33:19 -0700 (PDT) From: Julien Grall To: xen-devel@lists.xen.org Date: Tue, 3 Apr 2018 16:32:47 +0100 Message-Id: <20180403153251.19595-13-julien.grall@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180403153251.19595-1-julien.grall@arm.com> References: <20180403153251.19595-1-julien.grall@arm.com> Subject: [Xen-devel] [for-4.11][PATCH v7 12/16] xen/mm: Switch common/memory.c to use typesafe MFN X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Stefano Stabellini , Wei Liu , George Dunlap , Andrew Cooper , Ian Jackson , Tim Deegan , Julien Grall , Jan Beulich MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" A new helper __copy_mfn_to_guest is introduced to easily to copy a MFN to the guest memory. Not functional change intended Signed-off-by: Julien Grall Reviewed-by: Jan Beulich --- Cc: Andrew Cooper Cc: George Dunlap Cc: Ian Jackson Cc: Konrad Rzeszutek Wilk Cc: Stefano Stabellini Cc: Tim Deegan Cc: Wei Liu Changes in v7: - Rename the helper to __copy_mfn_to_guest_offset - Add Jan's reviewed-by Changes in v6: - Use static inline for the new helper - Rename the helper to __copy_mfn_to_guest Changes in v5: - Restrict the scope of some mfn variable. 
Changes in v4: - Patch added --- xen/common/memory.c | 79 +++++++++++++++++++++++++++++++++-------------------- 1 file changed, 50 insertions(+), 29 deletions(-) diff --git a/xen/common/memory.c b/xen/common/memory.c index 3ed71f8f74..8c8e979bcf 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -33,6 +33,12 @@ #include #endif +/* Override macros from asm/page.h to make them work with mfn_t */ +#undef page_to_mfn +#define page_to_mfn(pg) _mfn(__page_to_mfn(pg)) +#undef mfn_to_page +#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn)) + struct memop_args { /* INPUT */ struct domain *domain; /* Domain to be affected. */ @@ -95,11 +101,20 @@ static unsigned int max_order(const struct domain *d) return min(order, MAX_ORDER + 0U); } +/* Helper to copy a typesafe MFN to guest */ +static inline +unsigned long __copy_mfn_to_guest_offset(XEN_GUEST_HANDLE(xen_pfn_t) hnd, + size_t off, mfn_t mfn) + { + xen_pfn_t mfn_ = mfn_x(mfn); + + return __copy_to_guest_offset(hnd, off, &mfn_, 1); +} + static void increase_reservation(struct memop_args *a) { struct page_info *page; unsigned long i; - xen_pfn_t mfn; struct domain *d = a->domain; if ( !guest_handle_is_null(a->extent_list) && @@ -132,8 +147,9 @@ static void increase_reservation(struct memop_args *a) if ( !paging_mode_translate(d) && !guest_handle_is_null(a->extent_list) ) { - mfn = page_to_mfn(page); - if ( unlikely(__copy_to_guest_offset(a->extent_list, i, &mfn, 1)) ) + mfn_t mfn = page_to_mfn(page); + + if ( unlikely(__copy_mfn_to_guest_offset(a->extent_list, i, mfn)) ) goto out; } } @@ -146,7 +162,7 @@ static void populate_physmap(struct memop_args *a) { struct page_info *page; unsigned int i, j; - xen_pfn_t gpfn, mfn; + xen_pfn_t gpfn; struct domain *d = a->domain, *curr_d = current->domain; bool need_tlbflush = false; uint32_t tlbflush_timestamp = 0; @@ -182,6 +198,8 @@ static void populate_physmap(struct memop_args *a) for ( i = a->nr_done; i < a->nr_extents; i++ ) { + mfn_t mfn; + if ( i != a->nr_done && hypercall_preempt_check() ) { a->preempted = 1; @@ -205,14 +223,15 @@ static void populate_physmap(struct memop_args *a) { if ( is_domain_direct_mapped(d) ) { - mfn = gpfn; + mfn = _mfn(gpfn); - for ( j = 0; j < (1U << a->extent_order); j++, mfn++ ) + for ( j = 0; j < (1U << a->extent_order); j++, + mfn = mfn_add(mfn, 1) ) { - if ( !mfn_valid(_mfn(mfn)) ) + if ( !mfn_valid(mfn) ) { - gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_xen_pfn"\n", - mfn); + gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_mfn"\n", + mfn_x(mfn)); goto out; } @@ -220,14 +239,14 @@ static void populate_physmap(struct memop_args *a) if ( !get_page(page, d) ) { gdprintk(XENLOG_INFO, - "mfn %#"PRI_xen_pfn" doesn't belong to d%d\n", - mfn, d->domain_id); + "mfn %#"PRI_mfn" doesn't belong to d%d\n", + mfn_x(mfn), d->domain_id); goto out; } put_page(page); } - mfn = gpfn; + mfn = _mfn(gpfn); } else { @@ -253,15 +272,16 @@ static void populate_physmap(struct memop_args *a) mfn = page_to_mfn(page); } - guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn), a->extent_order); + guest_physmap_add_page(d, _gfn(gpfn), mfn, a->extent_order); if ( !paging_mode_translate(d) ) { for ( j = 0; j < (1U << a->extent_order); j++ ) - set_gpfn_from_mfn(mfn + j, gpfn + j); + set_gpfn_from_mfn(mfn_x(mfn_add(mfn, j)), gpfn + j); /* Inform the domain of the new page's machine address. 
*/ - if ( unlikely(__copy_to_guest_offset(a->extent_list, i, &mfn, 1)) ) + if ( unlikely(__copy_mfn_to_guest_offset(a->extent_list, i, + mfn)) ) goto out; } } @@ -304,7 +324,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn) if ( p2mt == p2m_ram_paging_out ) { ASSERT(mfn_valid(mfn)); - page = mfn_to_page(mfn_x(mfn)); + page = mfn_to_page(mfn); if ( test_and_clear_bit(_PGC_allocated, &page->count_info) ) put_page(page); } @@ -349,7 +369,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn) } #endif /* CONFIG_X86 */ - page = mfn_to_page(mfn_x(mfn)); + page = mfn_to_page(mfn); if ( unlikely(!get_page(page, d)) ) { put_gfn(d, gmfn); @@ -485,7 +505,8 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg) PAGE_LIST_HEAD(in_chunk_list); PAGE_LIST_HEAD(out_chunk_list); unsigned long in_chunk_order, out_chunk_order; - xen_pfn_t gpfn, gmfn, mfn; + xen_pfn_t gpfn, gmfn; + mfn_t mfn; unsigned long i, j, k; unsigned int memflags = 0; long rc = 0; @@ -607,7 +628,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg) p2m_type_t p2mt; /* Shared pages cannot be exchanged */ - mfn = mfn_x(get_gfn_unshare(d, gmfn + k, &p2mt)); + mfn = get_gfn_unshare(d, gmfn + k, &p2mt); if ( p2m_is_shared(p2mt) ) { put_gfn(d, gmfn + k); @@ -615,9 +636,9 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg) goto fail; } #else /* !CONFIG_X86 */ - mfn = mfn_x(gfn_to_mfn(d, _gfn(gmfn + k))); + mfn = gfn_to_mfn(d, _gfn(gmfn + k)); #endif - if ( unlikely(!mfn_valid(_mfn(mfn))) ) + if ( unlikely(!mfn_valid(mfn)) ) { put_gfn(d, gmfn + k); rc = -EINVAL; @@ -664,10 +685,10 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg) if ( !test_and_clear_bit(_PGC_allocated, &page->count_info) ) BUG(); mfn = page_to_mfn(page); - gfn = mfn_to_gmfn(d, mfn); + gfn = mfn_to_gmfn(d, mfn_x(mfn)); /* Pages were unshared above */ BUG_ON(SHARED_M2P(gfn)); - if ( guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), 0) ) + if ( guest_physmap_remove_page(d, _gfn(gfn), mfn, 0) ) domain_crash(d); put_page(page); } @@ -712,16 +733,16 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg) } mfn = page_to_mfn(page); - guest_physmap_add_page(d, _gfn(gpfn), _mfn(mfn), + guest_physmap_add_page(d, _gfn(gpfn), mfn, exch.out.extent_order); if ( !paging_mode_translate(d) ) { for ( k = 0; k < (1UL << exch.out.extent_order); k++ ) - set_gpfn_from_mfn(mfn + k, gpfn + k); - if ( __copy_to_guest_offset(exch.out.extent_start, - (i << out_chunk_order) + j, - &mfn, 1) ) + set_gpfn_from_mfn(mfn_x(mfn_add(mfn, k)), gpfn + k); + if ( __copy_mfn_to_guest_offset(exch.out.extent_start, + (i << out_chunk_order) + j, + mfn) ) rc = -EFAULT; } } @@ -1216,7 +1237,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg) if ( page ) { rc = guest_physmap_remove_page(d, _gfn(xrfp.gpfn), - _mfn(page_to_mfn(page)), 0); + page_to_mfn(page), 0); put_page(page); } else From patchwork Tue Apr 3 15:32:48 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 132758 Delivered-To: patch@linaro.org Received: by 10.46.84.29 with SMTP id i29csp3954109ljb; Tue, 3 Apr 2018 08:35:16 -0700 (PDT) X-Google-Smtp-Source: AIpwx49fogRhKShLV56kaOmGbo4J+rmBQEI9MvQILb86ff3LMRV20Z+SmdnlPoedTpZ/xjhilm6Z X-Received: by 10.107.167.21 with SMTP id q21mr13211703ioe.36.1522769716104; Tue, 03 Apr 2018 08:35:16 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; 
t=1522769716; cv=none; d=google.com; s=arc-20160816; b=qHnW5SYGU/2mPZWJAuso+VZTyoJ6wn2B2MSr5vNjoYMg/++EjrcCyAIJHxqyes0HMv l6tPtTcRbhdmf4gQcOQaRc9FdlypZwX0U4f8ElfywikrOSvODwMDWo5x+xdf8NRPvWFt ViDz558d8Y7qc4iWS6mrFplQq1pBy+fuoHSccqNfbAfwx4mN+/MZbDtLvj4O1wsi3gU2 C1FzvzPrwafi+WbL7Z8IHNXEj27Lvnjcp0qRj6flhY6KMZh8mo7I3VBYiPO/x+ndfR0v QInQSbwe6vaEfF0SaG1CmO/jZfWaGeMuvNReQ28oqtok6bprPqunYeSUT8hECARiQSMT VSzQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:mime-version:cc :list-subscribe:list-help:list-post:list-unsubscribe:list-id :precedence:subject:references:in-reply-to:message-id:date:to:from :arc-authentication-results; bh=XyJ5XtPZjscWrydLlzsXArzETWtyuFv8OZtQkSKP9TU=; b=cB3RwUKvZ4o6MzhX16fhg/oOGY0Hzk0oeaATSRhtRJ1E+JohcJ8ZgYf/QpvnhrEJKN kxL5C2AzSCkn6lRbVlnmTuV5g8k6KjvnCRZfx97BAxkbebppyvzaG8c3LF0knG4P0hSd JplIXOgMw1Vab6vxafp1B5V3tkbt6jDx+zUJIObYsLMXj/NPJ/IOswbr9vi14ajbMg0s GQZUgujbCggG3JXka4tF8ZxFJnNwsYfiJeNTMdUpJQvk2ii3TH7UjLsVWQ1+Jo7z/96B LPag53r6cl4h1QdFGAC6SxdDWxA4Is7R5RLM+4aut1qUkN6zbnrqGELj7OBU+Q8R1Cbu p9Yg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org. [192.237.175.120]) by mx.google.com with ESMTPS id z62si1997397iod.199.2018.04.03.08.35.15 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 03 Apr 2018 08:35:16 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw6-00086w-EL; Tue, 03 Apr 2018 15:33:26 +0000 Received: from us1-rack-dfw2.inumbo.com ([104.130.134.6]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw5-00085N-6s for xen-devel@lists.xen.org; Tue, 03 Apr 2018 15:33:25 +0000 X-Inumbo-ID: 43855142-3754-11e8-9728-bc764e045a96 Received: from foss.arm.com (unknown [217.140.101.70]) by us1-rack-dfw2.inumbo.com (Halon) with ESMTP id 43855142-3754-11e8-9728-bc764e045a96; Tue, 03 Apr 2018 17:32:48 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 991D21435; Tue, 3 Apr 2018 08:33:23 -0700 (PDT) Received: from e108454-lin.cambridge.arm.com (e108454-lin.cambridge.arm.com [10.1.206.53]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id CF09F3F24A; Tue, 3 Apr 2018 08:33:21 -0700 (PDT) From: Julien Grall To: xen-devel@lists.xen.org Date: Tue, 3 Apr 2018 16:32:48 +0100 Message-Id: <20180403153251.19595-14-julien.grall@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180403153251.19595-1-julien.grall@arm.com> References: <20180403153251.19595-1-julien.grall@arm.com> Subject: [Xen-devel] [for-4.11][PATCH v7 13/16] xen/grant: Switch {create, replace}_grant_p2m_mapping to typesafe MFN X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer 
discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Stefano Stabellini , Wei Liu , George Dunlap , Andrew Cooper , Ian Jackson , Tim Deegan , Julien Grall , Jan Beulich MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" The current prototype is slightly confusing because it takes a guest physical address and a machine physical frame (not address!). Switching to MFN will improve safety and reduce the chance to mistakenly invert the 2 parameters. Signed-off-by: Julien grall Reviewed-by: Wei Liu Reviewed-by: Jan Beulich Acked-by: Stefano Stabellini --- Cc: Stefano Stabellini Cc: Julien Grall Cc: Andrew Cooper Cc: George Dunlap Cc: Ian Jackson Cc: Jan Beulich Cc: Konrad Rzeszutek Wilk Cc: Tim Deegan Cc: Wei Liu Changes in v5: - Add Wei's and Jan's reviewed-by Changes in v4: - Patch added --- xen/arch/arm/mm.c | 10 +++++----- xen/arch/x86/hvm/grant_table.c | 14 +++++++------- xen/arch/x86/pv/grant_table.c | 10 +++++----- xen/common/grant_table.c | 8 ++++---- xen/include/asm-arm/grant_table.h | 9 ++++----- xen/include/asm-x86/grant_table.h | 4 ++-- xen/include/asm-x86/hvm/grant_table.h | 8 ++++---- xen/include/asm-x86/pv/grant_table.h | 8 ++++---- 8 files changed, 35 insertions(+), 36 deletions(-) diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index 7af6baa3d6..49080ca0ac 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -1418,7 +1418,7 @@ void gnttab_mark_dirty(struct domain *d, unsigned long l) } } -int create_grant_host_mapping(unsigned long addr, unsigned long frame, +int create_grant_host_mapping(unsigned long addr, mfn_t frame, unsigned int flags, unsigned int cache_flags) { int rc; @@ -1431,7 +1431,7 @@ int create_grant_host_mapping(unsigned long addr, unsigned long frame, t = p2m_grant_map_ro; rc = guest_physmap_add_entry(current->domain, gaddr_to_gfn(addr), - _mfn(frame), 0, t); + frame, 0, t); if ( rc ) return GNTST_general_error; @@ -1439,8 +1439,8 @@ int create_grant_host_mapping(unsigned long addr, unsigned long frame, return GNTST_okay; } -int replace_grant_host_mapping(unsigned long addr, unsigned long mfn, - unsigned long new_addr, unsigned int flags) +int replace_grant_host_mapping(unsigned long addr, mfn_t mfn, + unsigned long new_addr, unsigned int flags) { gfn_t gfn = gaddr_to_gfn(addr); struct domain *d = current->domain; @@ -1449,7 +1449,7 @@ int replace_grant_host_mapping(unsigned long addr, unsigned long mfn, if ( new_addr != 0 || (flags & GNTMAP_contains_pte) ) return GNTST_general_error; - rc = guest_physmap_remove_page(d, gfn, _mfn(mfn), 0); + rc = guest_physmap_remove_page(d, gfn, mfn, 0); return rc ? 
GNTST_general_error : GNTST_okay; } diff --git a/xen/arch/x86/hvm/grant_table.c b/xen/arch/x86/hvm/grant_table.c index 9ca9fe0425..ecd7d078ab 100644 --- a/xen/arch/x86/hvm/grant_table.c +++ b/xen/arch/x86/hvm/grant_table.c @@ -25,7 +25,7 @@ #include -int create_grant_p2m_mapping(uint64_t addr, unsigned long frame, +int create_grant_p2m_mapping(uint64_t addr, mfn_t frame, unsigned int flags, unsigned int cache_flags) { @@ -41,14 +41,14 @@ int create_grant_p2m_mapping(uint64_t addr, unsigned long frame, p2mt = p2m_grant_map_rw; rc = guest_physmap_add_entry(current->domain, _gfn(addr >> PAGE_SHIFT), - _mfn(frame), PAGE_ORDER_4K, p2mt); + frame, PAGE_ORDER_4K, p2mt); if ( rc ) return GNTST_general_error; else return GNTST_okay; } -int replace_grant_p2m_mapping(uint64_t addr, unsigned long frame, +int replace_grant_p2m_mapping(uint64_t addr, mfn_t frame, uint64_t new_addr, unsigned int flags) { unsigned long gfn = (unsigned long)(addr >> PAGE_SHIFT); @@ -60,15 +60,15 @@ int replace_grant_p2m_mapping(uint64_t addr, unsigned long frame, return GNTST_general_error; old_mfn = get_gfn(d, gfn, &type); - if ( !p2m_is_grant(type) || mfn_x(old_mfn) != frame ) + if ( !p2m_is_grant(type) || !mfn_eq(old_mfn, frame) ) { put_gfn(d, gfn); gdprintk(XENLOG_WARNING, - "old mapping invalid (type %d, mfn %" PRI_mfn ", frame %lx)\n", - type, mfn_x(old_mfn), frame); + "old mapping invalid (type %d, mfn %" PRI_mfn ", frame %"PRI_mfn")\n", + type, mfn_x(old_mfn), mfn_x(frame)); return GNTST_general_error; } - if ( guest_physmap_remove_page(d, _gfn(gfn), _mfn(frame), PAGE_ORDER_4K) ) + if ( guest_physmap_remove_page(d, _gfn(gfn), frame, PAGE_ORDER_4K) ) { put_gfn(d, gfn); return GNTST_general_error; diff --git a/xen/arch/x86/pv/grant_table.c b/xen/arch/x86/pv/grant_table.c index 4dbc550366..458085e1b6 100644 --- a/xen/arch/x86/pv/grant_table.c +++ b/xen/arch/x86/pv/grant_table.c @@ -50,7 +50,7 @@ static unsigned int grant_to_pte_flags(unsigned int grant_flags, return pte_flags; } -int create_grant_pv_mapping(uint64_t addr, unsigned long frame, +int create_grant_pv_mapping(uint64_t addr, mfn_t frame, unsigned int flags, unsigned int cache_flags) { struct vcpu *curr = current; @@ -60,7 +60,7 @@ int create_grant_pv_mapping(uint64_t addr, unsigned long frame, mfn_t gl1mfn; int rc = GNTST_general_error; - nl1e = l1e_from_pfn(frame, grant_to_pte_flags(flags, cache_flags)); + nl1e = l1e_from_mfn(frame, grant_to_pte_flags(flags, cache_flags)); nl1e = adjust_guest_l1e(nl1e, currd); /* @@ -192,7 +192,7 @@ static bool steal_linear_address(unsigned long linear, l1_pgentry_t *out) * new_addr has only ever been available via GNTABOP_unmap_and_replace, and * only when !(flags & GNTMAP_contains_pte). */ -int replace_grant_pv_mapping(uint64_t addr, unsigned long frame, +int replace_grant_pv_mapping(uint64_t addr, mfn_t frame, uint64_t new_addr, unsigned int flags) { struct vcpu *curr = current; @@ -282,14 +282,14 @@ int replace_grant_pv_mapping(uint64_t addr, unsigned long frame, * Check that the address supplied is actually mapped to frame (with * appropriate permissions). 
*/ - if ( unlikely(l1e_get_pfn(ol1e) != frame) || + if ( unlikely(!mfn_eq(l1e_get_mfn(ol1e), frame)) || unlikely((l1e_get_flags(ol1e) ^ grant_pte_flags) & (_PAGE_PRESENT | _PAGE_RW)) ) { gdprintk(XENLOG_ERR, "PTE %"PRIpte" for %"PRIx64" doesn't match grant (%"PRIpte")\n", l1e_get_intpte(ol1e), addr, - l1e_get_intpte(l1e_from_pfn(frame, grant_pte_flags))); + l1e_get_intpte(l1e_from_mfn(frame, grant_pte_flags))); goto out_unlock; } diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c index 18201912e4..f9e3d1bb95 100644 --- a/xen/common/grant_table.c +++ b/xen/common/grant_table.c @@ -1071,7 +1071,7 @@ map_grant_ref( if ( op->flags & GNTMAP_host_map ) { - rc = create_grant_host_mapping(op->host_addr, frame, op->flags, + rc = create_grant_host_mapping(op->host_addr, _mfn(frame), op->flags, cache_flags); if ( rc != GNTST_okay ) goto undo_out; @@ -1111,7 +1111,7 @@ map_grant_ref( typecnt++; } - rc = create_grant_host_mapping(op->host_addr, frame, op->flags, 0); + rc = create_grant_host_mapping(op->host_addr, _mfn(frame), op->flags, 0); if ( rc != GNTST_okay ) goto undo_out; @@ -1188,7 +1188,7 @@ map_grant_ref( undo_out: if ( host_map_created ) { - replace_grant_host_mapping(op->host_addr, frame, 0, op->flags); + replace_grant_host_mapping(op->host_addr, _mfn(frame), 0, op->flags); gnttab_flush_tlb(ld); } @@ -1374,7 +1374,7 @@ unmap_common( if ( op->host_addr && (flags & GNTMAP_host_map) ) { if ( (rc = replace_grant_host_mapping(op->host_addr, - op->frame, op->new_addr, + _mfn(op->frame), op->new_addr, flags)) < 0 ) goto act_release_out; diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h index d2027d26b2..24644084a1 100644 --- a/xen/include/asm-arm/grant_table.h +++ b/xen/include/asm-arm/grant_table.h @@ -14,12 +14,11 @@ struct grant_table_arch { }; void gnttab_clear_flag(unsigned long nr, uint16_t *addr); -int create_grant_host_mapping(unsigned long gpaddr, - unsigned long mfn, unsigned int flags, unsigned int - cache_flags); +int create_grant_host_mapping(unsigned long gpaddr, mfn_t mfn, + unsigned int flags, unsigned int cache_flags); #define gnttab_host_mapping_get_page_type(ro, ld, rd) (0) -int replace_grant_host_mapping(unsigned long gpaddr, unsigned long mfn, - unsigned long new_gpaddr, unsigned int flags); +int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn, + unsigned long new_gpaddr, unsigned int flags); void gnttab_mark_dirty(struct domain *d, unsigned long l); #define gnttab_create_status_page(d, t, i) do {} while (0) #define gnttab_release_host_mappings(domain) 1 diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h index 4ac0b9b4c7..fc07291ff2 100644 --- a/xen/include/asm-x86/grant_table.h +++ b/xen/include/asm-x86/grant_table.h @@ -21,7 +21,7 @@ struct grant_table_arch { * Caller must own caller's BIGLOCK, is responsible for flushing the TLB, and * must hold a reference to the page. 
*/ -static inline int create_grant_host_mapping(uint64_t addr, unsigned long frame, +static inline int create_grant_host_mapping(uint64_t addr, mfn_t frame, unsigned int flags, unsigned int cache_flags) { @@ -30,7 +30,7 @@ static inline int create_grant_host_mapping(uint64_t addr, unsigned long frame, return create_grant_pv_mapping(addr, frame, flags, cache_flags); } -static inline int replace_grant_host_mapping(uint64_t addr, unsigned long frame, +static inline int replace_grant_host_mapping(uint64_t addr, mfn_t frame, uint64_t new_addr, unsigned int flags) { diff --git a/xen/include/asm-x86/hvm/grant_table.h b/xen/include/asm-x86/hvm/grant_table.h index 711ce9b560..a5612585b3 100644 --- a/xen/include/asm-x86/hvm/grant_table.h +++ b/xen/include/asm-x86/hvm/grant_table.h @@ -23,24 +23,24 @@ #ifdef CONFIG_HVM -int create_grant_p2m_mapping(uint64_t addr, unsigned long frame, +int create_grant_p2m_mapping(uint64_t addr, mfn_t frame, unsigned int flags, unsigned int cache_flags); -int replace_grant_p2m_mapping(uint64_t addr, unsigned long frame, +int replace_grant_p2m_mapping(uint64_t addr, mfn_t frame, uint64_t new_addr, unsigned int flags); #else #include -static inline int create_grant_p2m_mapping(uint64_t addr, unsigned long frame, +static inline int create_grant_p2m_mapping(uint64_t addr, mfn_t frame, unsigned int flags, unsigned int cache_flags) { return GNTST_general_error; } -static inline int replace_grant_p2m_mapping(uint64_t addr, unsigned long frame, +static inline int replace_grant_p2m_mapping(uint64_t addr, mfn_t frame, uint64_t new_addr, unsigned int flags) { return GNTST_general_error; diff --git a/xen/include/asm-x86/pv/grant_table.h b/xen/include/asm-x86/pv/grant_table.h index 556e68f0eb..85442b6074 100644 --- a/xen/include/asm-x86/pv/grant_table.h +++ b/xen/include/asm-x86/pv/grant_table.h @@ -23,23 +23,23 @@ #ifdef CONFIG_PV -int create_grant_pv_mapping(uint64_t addr, unsigned long frame, +int create_grant_pv_mapping(uint64_t addr, mfn_t frame, unsigned int flags, unsigned int cache_flags); -int replace_grant_pv_mapping(uint64_t addr, unsigned long frame, +int replace_grant_pv_mapping(uint64_t addr, mfn_t frame, uint64_t new_addr, unsigned int flags); #else #include -static inline int create_grant_pv_mapping(uint64_t addr, unsigned long frame, +static inline int create_grant_pv_mapping(uint64_t addr, mfn_t frame, unsigned int flags, unsigned int cache_flags) { return GNTST_general_error; } -static inline int replace_grant_pv_mapping(uint64_t addr, unsigned long frame, +static inline int replace_grant_pv_mapping(uint64_t addr, mfn_t frame, uint64_t new_addr, unsigned int flags) { return GNTST_general_error; From patchwork Tue Apr 3 15:32:49 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 132773 Delivered-To: patch@linaro.org Received: by 10.46.84.29 with SMTP id i29csp3954832ljb; Tue, 3 Apr 2018 08:35:55 -0700 (PDT) X-Google-Smtp-Source: AIpwx49uQbI+tpy3N2jcrXkhTFrRISzY5S41NviKuGpXJKKKR0/Kr7galzXjMLYRzroQd8i8XEOl X-Received: by 2002:a24:4391:: with SMTP id s139-v6mr5926281itb.18.1522769755734; Tue, 03 Apr 2018 08:35:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1522769755; cv=none; d=google.com; s=arc-20160816; b=cWmjqKL+JRkhmflRDPcT/+NKwADD46HjXbuKEA9DXMG2df1N2SQl9z0FRj5rbJd8lz cIGRXveyif9VUSu7QpBlZTcolnrNUanrdmUGDL8VrY70lXd4iHaN1vawj7C0waxOkVOO mzzj0c3LMnSQ0WFj46NAfNmOVAGlwTYfE1JWOPxsF2tZVW+XB+g337m7YlUaylv8fPhd 
l4y/qwz56rGptNEbGYSXlXr5JH/EGdA+NSCpfv4xkFqfceUsTlI7pm8xaRHP0f8hoWDK dS/aZVOvTxf1e4vIX/TbPrybjKD4NwqUgPUU9jaB8GcF5VvH+gZ0VGploPCAOqCcjtpF 9JEw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:mime-version:cc :list-subscribe:list-help:list-post:list-unsubscribe:list-id :precedence:subject:references:in-reply-to:message-id:date:to:from :arc-authentication-results; bh=ZiCVojS1nP9TE75DuBjvQ42tWB/vnOS5HXtew/vgfkk=; b=WLmP0EP4PQ0o23EjkKRBFI9nGv8L+xLE0PBcG7lA3vtibq/4CKYnEbjqKJsUcstkuK gm31elSrNfODrCFMwUxg+b4lkweR8xpqDaOsEF/UiFMESpSs2maYOD5IDctdCDoDcivI WrDN+j0sqU+NvKxvhYAaDhxxacOIS65uBboIDKqkxSQaz6O94TvO0g8HAzggmNTCAfXy zmJ6PEzHiPf4jbqAUuZc3jDgfDtAunVbDk1xtb3JV8GNfbCiZqaKC2AUwElID14L2j4j D0U34D5oVO530UFxI6XvV6a47YFYhvZnwSg9WAVh3mknWrTcctWJrxesxI/f+13W9XYD ydpg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org. [192.237.175.120]) by mx.google.com with ESMTPS id f14si1930804ioh.169.2018.04.03.08.35.55 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 03 Apr 2018 08:35:55 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw9-0008AQ-2b; Tue, 03 Apr 2018 15:33:29 +0000 Received: from us1-rack-dfw2.inumbo.com ([104.130.134.6]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw7-00088U-Dy for xen-devel@lists.xen.org; Tue, 03 Apr 2018 15:33:27 +0000 X-Inumbo-ID: 44d0c8e9-3754-11e8-9728-bc764e045a96 Received: from foss.arm.com (unknown [217.140.101.70]) by us1-rack-dfw2.inumbo.com (Halon) with ESMTP id 44d0c8e9-3754-11e8-9728-bc764e045a96; Tue, 03 Apr 2018 17:32:50 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C5DED1435; Tue, 3 Apr 2018 08:33:25 -0700 (PDT) Received: from e108454-lin.cambridge.arm.com (e108454-lin.cambridge.arm.com [10.1.206.53]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D86193F24A; Tue, 3 Apr 2018 08:33:23 -0700 (PDT) From: Julien Grall To: xen-devel@lists.xen.org Date: Tue, 3 Apr 2018 16:32:49 +0100 Message-Id: <20180403153251.19595-15-julien.grall@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180403153251.19595-1-julien.grall@arm.com> References: <20180403153251.19595-1-julien.grall@arm.com> Subject: [Xen-devel] [for-4.11][PATCH v7 14/16] xen/grant: Switch common/grant_table.c to use typesafe MFN X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Stefano Stabellini , Wei Liu , George Dunlap , Andrew Cooper , Ian Jackson , Tim Deegan , Julien Grall , Jan Beulich MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: 
"Xen-devel" At the same time replace MFN 0 by INVALID_MFN or drop the initializer when it is not necessary. This will make clearer that the MFN initialized is not valid. Other than MFN 0 -> INVALID_MFN, no functional change intended. Signed-off-by: Julien Grall Reviewed-by: Jan Beulich Reviewed-by: Wei Liu Acked-by: Stefano Stabellini --- Cc: Stefano Stabellini Cc: Julien Grall Cc: Andrew Cooper Cc: George Dunlap Cc: Ian Jackson Cc: Jan Beulich Cc: Konrad Rzeszutek Wilk Cc: Tim Deegan Cc: Wei Liu Changes in v6: - s/_mfn(0)/MFN 0/ - Add Jan's and Wei's reviewed-by Changes in v5: - Remove _mfn(0) when not needed or replace by INVALID_MFN. Changes in v4: - Patch added --- xen/arch/arm/mm.c | 2 +- xen/common/grant_table.c | 147 ++++++++++++++++++++------------------ xen/include/asm-arm/grant_table.h | 2 +- xen/include/asm-x86/grant_table.h | 2 +- 4 files changed, 82 insertions(+), 71 deletions(-) diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index 49080ca0ac..eb3659f913 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -1408,7 +1408,7 @@ void gnttab_clear_flag(unsigned long nr, uint16_t *addr) } while (cmpxchg(addr, old, old & mask) != old); } -void gnttab_mark_dirty(struct domain *d, unsigned long l) +void gnttab_mark_dirty(struct domain *d, mfn_t mfn) { /* XXX: mark dirty */ static int warning; diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c index f9e3d1bb95..4bedf5984a 100644 --- a/xen/common/grant_table.c +++ b/xen/common/grant_table.c @@ -40,6 +40,12 @@ #include #include +/* Override macros from asm/page.h to make them work with mfn_t */ +#undef page_to_mfn +#define page_to_mfn(pg) _mfn(__page_to_mfn(pg)) +#undef mfn_to_page +#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn)) + /* Per-domain grant information. */ struct grant_table { /* @@ -167,7 +173,7 @@ struct gnttab_unmap_common { /* Shared state beteen *_unmap and *_unmap_complete */ uint16_t done; - unsigned long frame; + mfn_t frame; struct domain *rd; grant_ref_t ref; }; @@ -266,7 +272,7 @@ struct active_grant_entry { grant. */ grant_ref_t trans_gref; struct domain *trans_domain; - unsigned long frame; /* Frame being granted. */ + mfn_t frame; /* Frame being granted. */ #ifndef NDEBUG gfn_t gfn; /* Guest's idea of the frame being granted. */ #endif @@ -371,14 +377,14 @@ static inline unsigned int grant_to_status_frames(unsigned int grant_frames) If rc == GNTST_okay, *page contains the page struct with a ref taken. Caller must do put_page(*page). If any error, *page = NULL, *frame = INVALID_MFN, no ref taken. */ -static int get_paged_frame(unsigned long gfn, unsigned long *frame, +static int get_paged_frame(unsigned long gfn, mfn_t *frame, struct page_info **page, bool readonly, struct domain *rd) { int rc = GNTST_okay; p2m_type_t p2mt; - *frame = mfn_x(INVALID_MFN); + *frame = INVALID_MFN; *page = get_page_from_gfn(rd, gfn, &p2mt, readonly ? 
P2M_ALLOC : P2M_UNSHARE); if ( !*page ) @@ -823,7 +829,7 @@ static int _set_status(unsigned gt_version, static struct active_grant_entry *grant_map_exists(const struct domain *ld, struct grant_table *rgt, - unsigned long mfn, + mfn_t mfn, grant_ref_t *cur_ref) { grant_ref_t ref, max_iter; @@ -842,7 +848,8 @@ static struct active_grant_entry *grant_map_exists(const struct domain *ld, { struct active_grant_entry *act = active_entry_acquire(rgt, ref); - if ( act->pin && act->domid == ld->domain_id && act->frame == mfn ) + if ( act->pin && act->domid == ld->domain_id && + mfn_eq(act->frame, mfn) ) return act; active_entry_release(act); } @@ -859,7 +866,7 @@ static struct active_grant_entry *grant_map_exists(const struct domain *ld, #define MAPKIND_READ 1 #define MAPKIND_WRITE 2 static unsigned int mapkind( - struct grant_table *lgt, const struct domain *rd, unsigned long mfn) + struct grant_table *lgt, const struct domain *rd, mfn_t mfn) { struct grant_mapping *map; grant_handle_t handle, limit = lgt->maptrack_limit; @@ -884,7 +891,7 @@ static unsigned int mapkind( if ( !(map->flags & (GNTMAP_device_map|GNTMAP_host_map)) || map->domid != rd->domain_id ) continue; - if ( _active_entry(rd->grant_table, map->ref).frame == mfn ) + if ( mfn_eq(_active_entry(rd->grant_table, map->ref).frame, mfn) ) kind |= map->flags & GNTMAP_readonly ? MAPKIND_READ : MAPKIND_WRITE; } @@ -907,7 +914,7 @@ map_grant_ref( struct grant_table *lgt, *rgt; struct vcpu *led; grant_handle_t handle; - unsigned long frame = 0; + mfn_t frame; struct page_info *pg = NULL; int rc = GNTST_okay; u32 old_pin; @@ -1034,7 +1041,7 @@ map_grant_ref( /* pg may be set, with a refcount included, from get_paged_frame(). */ if ( !pg ) { - pg = mfn_valid(_mfn(frame)) ? mfn_to_page(frame) : NULL; + pg = mfn_valid(frame) ? 
mfn_to_page(frame) : NULL; if ( pg ) owner = page_get_owner_and_reference(pg); } @@ -1060,18 +1067,18 @@ map_grant_ref( goto undo_out; } - if ( !iomem_access_permitted(rd, frame, frame) ) + if ( !iomem_access_permitted(rd, mfn_x(frame), mfn_x(frame)) ) { gdprintk(XENLOG_WARNING, - "Iomem mapping not permitted %lx (domain %d)\n", - frame, rd->domain_id); + "Iomem mapping not permitted %#"PRI_mfn" (domain %d)\n", + mfn_x(frame), rd->domain_id); rc = GNTST_general_error; goto undo_out; } if ( op->flags & GNTMAP_host_map ) { - rc = create_grant_host_mapping(op->host_addr, _mfn(frame), op->flags, + rc = create_grant_host_mapping(op->host_addr, frame, op->flags, cache_flags); if ( rc != GNTST_okay ) goto undo_out; @@ -1111,7 +1118,7 @@ map_grant_ref( typecnt++; } - rc = create_grant_host_mapping(op->host_addr, _mfn(frame), op->flags, 0); + rc = create_grant_host_mapping(op->host_addr, frame, op->flags, 0); if ( rc != GNTST_okay ) goto undo_out; @@ -1122,8 +1129,8 @@ map_grant_ref( { could_not_pin: if ( !rd->is_dying ) - gdprintk(XENLOG_WARNING, "Could not pin grant frame %lx\n", - frame); + gdprintk(XENLOG_WARNING, "Could not pin grant frame %#"PRI_mfn"\n", + mfn_x(frame)); rc = GNTST_general_error; goto undo_out; } @@ -1143,13 +1150,14 @@ map_grant_ref( !(old_pin & (GNTPIN_hstw_mask|GNTPIN_devw_mask)) ) { if ( !(kind & MAPKIND_WRITE) ) - err = iommu_map_page(ld, frame, frame, + err = iommu_map_page(ld, mfn_x(frame), mfn_x(frame), IOMMUF_readable|IOMMUF_writable); } else if ( act_pin && !old_pin ) { if ( !kind ) - err = iommu_map_page(ld, frame, frame, IOMMUF_readable); + err = iommu_map_page(ld, mfn_x(frame), mfn_x(frame), + IOMMUF_readable); } if ( err ) { @@ -1178,7 +1186,7 @@ map_grant_ref( if ( need_iommu ) double_gt_unlock(lgt, rgt); - op->dev_bus_addr = (u64)frame << PAGE_SHIFT; + op->dev_bus_addr = mfn_to_maddr(frame); op->handle = handle; op->status = GNTST_okay; @@ -1188,7 +1196,7 @@ map_grant_ref( undo_out: if ( host_map_created ) { - replace_grant_host_mapping(op->host_addr, _mfn(frame), 0, op->flags); + replace_grant_host_mapping(op->host_addr, frame, 0, op->flags); gnttab_flush_tlb(ld); } @@ -1366,15 +1374,15 @@ unmap_common( op->frame = act->frame; if ( op->dev_bus_addr && - unlikely(op->dev_bus_addr != pfn_to_paddr(act->frame)) ) + unlikely(op->dev_bus_addr != mfn_to_maddr(act->frame)) ) PIN_FAIL(act_release_out, GNTST_general_error, "Bus address doesn't match gntref (%"PRIx64" != %"PRIpaddr")\n", - op->dev_bus_addr, pfn_to_paddr(act->frame)); + op->dev_bus_addr, mfn_to_maddr(act->frame)); if ( op->host_addr && (flags & GNTMAP_host_map) ) { if ( (rc = replace_grant_host_mapping(op->host_addr, - _mfn(op->frame), op->new_addr, + op->frame, op->new_addr, flags)) < 0 ) goto act_release_out; @@ -1411,9 +1419,10 @@ unmap_common( kind = mapkind(lgt, rd, op->frame); if ( !kind ) - err = iommu_unmap_page(ld, op->frame); + err = iommu_unmap_page(ld, mfn_x(op->frame)); else if ( !(kind & MAPKIND_WRITE) ) - err = iommu_map_page(ld, op->frame, op->frame, IOMMUF_readable); + err = iommu_map_page(ld, mfn_x(op->frame), + mfn_x(op->frame), IOMMUF_readable); double_gt_unlock(lgt, rgt); @@ -1464,7 +1473,7 @@ unmap_common_complete(struct gnttab_unmap_common *op) if ( op->done & GNTMAP_device_map ) { - if ( !is_iomem_page(_mfn(act->frame)) ) + if ( !is_iomem_page(act->frame) ) { if ( op->done & GNTMAP_readonly ) put_page(pg); @@ -1481,7 +1490,7 @@ unmap_common_complete(struct gnttab_unmap_common *op) if ( op->done & GNTMAP_host_map ) { - if ( !is_iomem_page(_mfn(op->frame)) ) + if ( 
!is_iomem_page(op->frame) ) { if ( gnttab_host_mapping_get_page_type(op->done & GNTMAP_readonly, ld, rd) ) @@ -1522,7 +1531,7 @@ unmap_grant_ref( common->done = 0; common->new_addr = 0; common->rd = NULL; - common->frame = 0; + common->frame = INVALID_MFN; unmap_common(common); op->status = common->status; @@ -1588,7 +1597,7 @@ unmap_and_replace( common->done = 0; common->dev_bus_addr = 0; common->rd = NULL; - common->frame = 0; + common->frame = INVALID_MFN; unmap_common(common); op->status = common->status; @@ -1692,7 +1701,7 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt) int rc = gfn_eq(gfn, INVALID_GFN) ? 0 : guest_physmap_remove_page(d, gfn, - _mfn(page_to_mfn(pg)), 0); + page_to_mfn(pg), 0); if ( rc ) { @@ -2097,7 +2106,7 @@ gnttab_transfer( struct page_info *page; int i; struct gnttab_transfer gop; - unsigned long mfn; + mfn_t mfn; unsigned int max_bitsize; struct active_grant_entry *act; @@ -2121,16 +2130,16 @@ gnttab_transfer( { p2m_type_t p2mt; - mfn = mfn_x(get_gfn_unshare(d, gop.mfn, &p2mt)); + mfn = get_gfn_unshare(d, gop.mfn, &p2mt); if ( p2m_is_shared(p2mt) || !p2m_is_valid(p2mt) ) - mfn = mfn_x(INVALID_MFN); + mfn = INVALID_MFN; } #else - mfn = mfn_x(gfn_to_mfn(d, _gfn(gop.mfn))); + mfn = gfn_to_mfn(d, _gfn(gop.mfn)); #endif /* Check the passed page frame for basic validity. */ - if ( unlikely(!mfn_valid(_mfn(mfn))) ) + if ( unlikely(!mfn_valid(mfn)) ) { put_gfn(d, gop.mfn); gdprintk(XENLOG_INFO, "out-of-range %lx\n", (unsigned long)gop.mfn); @@ -2146,12 +2155,13 @@ gnttab_transfer( goto copyback; } - rc = guest_physmap_remove_page(d, _gfn(gop.mfn), _mfn(mfn), 0); + rc = guest_physmap_remove_page(d, _gfn(gop.mfn), mfn, 0); gnttab_flush_tlb(d); if ( rc ) { - gdprintk(XENLOG_INFO, "can't remove GFN %"PRI_xen_pfn" (MFN %lx)\n", - gop.mfn, mfn); + gdprintk(XENLOG_INFO, + "can't remove GFN %"PRI_xen_pfn" (MFN %#"PRI_mfn")\n", + gop.mfn, mfn_x(mfn)); gop.status = GNTST_general_error; goto put_gfn_and_copyback; } @@ -2180,7 +2190,7 @@ gnttab_transfer( e, e->grant_table->gt_version > 1 || paging_mode_translate(e) ? 
BITS_PER_LONG + PAGE_SHIFT : 32 + PAGE_SHIFT); if ( max_bitsize < BITS_PER_LONG + PAGE_SHIFT && - (mfn >> (max_bitsize - PAGE_SHIFT)) ) + (mfn_x(mfn) >> (max_bitsize - PAGE_SHIFT)) ) { struct page_info *new_page; @@ -2192,7 +2202,7 @@ gnttab_transfer( goto unlock_and_copyback; } - copy_domain_page(_mfn(page_to_mfn(new_page)), _mfn(mfn)); + copy_domain_page(page_to_mfn(new_page), mfn); page->count_info &= ~(PGC_count_mask|PGC_allocated); free_domheap_page(page); @@ -2269,18 +2279,17 @@ gnttab_transfer( { grant_entry_v1_t *sha = &shared_entry_v1(e->grant_table, gop.ref); - guest_physmap_add_page(e, _gfn(sha->frame), _mfn(mfn), 0); + guest_physmap_add_page(e, _gfn(sha->frame), mfn, 0); if ( !paging_mode_translate(e) ) - sha->frame = mfn; + sha->frame = mfn_x(mfn); } else { grant_entry_v2_t *sha = &shared_entry_v2(e->grant_table, gop.ref); - guest_physmap_add_page(e, _gfn(sha->full_page.frame), - _mfn(mfn), 0); + guest_physmap_add_page(e, _gfn(sha->full_page.frame), mfn, 0); if ( !paging_mode_translate(e) ) - sha->full_page.frame = mfn; + sha->full_page.frame = mfn_x(mfn); } smp_wmb(); shared_entry_header(e->grant_table, gop.ref)->flags |= @@ -2316,7 +2325,7 @@ release_grant_for_copy( struct grant_table *rgt = rd->grant_table; grant_entry_header_t *sha; struct active_grant_entry *act; - unsigned long r_frame; + mfn_t r_frame; uint16_t *status; grant_ref_t trans_gref; struct domain *td; @@ -2393,7 +2402,7 @@ static void fixup_status_for_copy_pin(const struct active_grant_entry *act, static int acquire_grant_for_copy( struct domain *rd, grant_ref_t gref, domid_t ldom, bool readonly, - unsigned long *frame, struct page_info **page, + mfn_t *frame, struct page_info **page, uint16_t *page_off, uint16_t *length, bool allow_transitive) { struct grant_table *rgt = rd->grant_table; @@ -2405,7 +2414,7 @@ acquire_grant_for_copy( domid_t trans_domid; grant_ref_t trans_gref; struct domain *td; - unsigned long grant_frame; + mfn_t grant_frame; uint16_t trans_page_off; uint16_t trans_length; bool is_sub_page; @@ -2506,7 +2515,8 @@ acquire_grant_for_copy( */ if ( rgt->gt_version != 2 || act->pin != old_pin || - (old_pin && (act->domid != ldom || act->frame != grant_frame || + (old_pin && (act->domid != ldom || + !mfn_eq(act->frame, grant_frame) || act->start != trans_page_off || act->length != trans_length || act->trans_domain != td || @@ -2598,7 +2608,7 @@ acquire_grant_for_copy( } else { - ASSERT(mfn_valid(_mfn(act->frame))); + ASSERT(mfn_valid(act->frame)); *page = mfn_to_page(act->frame); td = page_get_owner_and_reference(*page); /* @@ -2653,7 +2663,7 @@ struct gnttab_copy_buf { /* Mapped etc. 
*/ struct domain *domain; - unsigned long frame; + mfn_t frame; struct page_info *page; void *virt; bool_t read_only; @@ -2785,15 +2795,16 @@ static int gnttab_copy_claim_buf(const struct gnttab_copy *op, if ( !get_page_type(buf->page, PGT_writable_page) ) { if ( !buf->domain->is_dying ) - gdprintk(XENLOG_WARNING, "Could not get writable frame %lx\n", - buf->frame); + gdprintk(XENLOG_WARNING, + "Could not get writable frame %#"PRI_mfn"\n", + mfn_x(buf->frame)); rc = GNTST_general_error; goto out; } buf->have_type = 1; } - buf->virt = map_domain_page(_mfn(buf->frame)); + buf->virt = map_domain_page(buf->frame); rc = GNTST_okay; out: @@ -3296,7 +3307,7 @@ static int cache_flush(const gnttab_cache_flush_t *cflush, grant_ref_t *cur_ref) { struct domain *d, *owner; struct page_info *page; - unsigned long mfn; + mfn_t mfn; struct active_grant_entry *act = NULL; void *v; int ret; @@ -3315,9 +3326,9 @@ static int cache_flush(const gnttab_cache_flush_t *cflush, grant_ref_t *cur_ref) return -EOPNOTSUPP; d = rcu_lock_current_domain(); - mfn = cflush->a.dev_bus_addr >> PAGE_SHIFT; + mfn = maddr_to_mfn(cflush->a.dev_bus_addr); - if ( !mfn_valid(_mfn(mfn)) ) + if ( !mfn_valid(mfn) ) { rcu_unlock_domain(d); return -EINVAL; @@ -3345,7 +3356,7 @@ static int cache_flush(const gnttab_cache_flush_t *cflush, grant_ref_t *cur_ref) } } - v = map_domain_page(_mfn(mfn)); + v = map_domain_page(mfn); v += cflush->offset; if ( (cflush->op & GNTTAB_CACHE_INVAL) && (cflush->op & GNTTAB_CACHE_CLEAN) ) @@ -3663,7 +3674,7 @@ gnttab_release_mappings( { BUG_ON(!(act->pin & GNTPIN_devr_mask)); act->pin -= GNTPIN_devr_inc; - if ( !is_iomem_page(_mfn(act->frame)) ) + if ( !is_iomem_page(act->frame) ) put_page(pg); } @@ -3672,7 +3683,7 @@ gnttab_release_mappings( BUG_ON(!(act->pin & GNTPIN_hstr_mask)); act->pin -= GNTPIN_hstr_inc; if ( gnttab_release_host_mappings(d) && - !is_iomem_page(_mfn(act->frame)) ) + !is_iomem_page(act->frame) ) put_page(pg); } } @@ -3682,7 +3693,7 @@ gnttab_release_mappings( { BUG_ON(!(act->pin & GNTPIN_devw_mask)); act->pin -= GNTPIN_devw_inc; - if ( !is_iomem_page(_mfn(act->frame)) ) + if ( !is_iomem_page(act->frame) ) put_page_and_type(pg); } @@ -3691,7 +3702,7 @@ gnttab_release_mappings( BUG_ON(!(act->pin & GNTPIN_hstw_mask)); act->pin -= GNTPIN_hstw_inc; if ( gnttab_release_host_mappings(d) && - !is_iomem_page(_mfn(act->frame)) ) + !is_iomem_page(act->frame) ) { if ( gnttab_host_mapping_get_page_type((map->flags & GNTMAP_readonly), @@ -3743,12 +3754,12 @@ void grant_table_warn_active_grants(struct domain *d) #ifndef NDEBUG "GFN %lx, " #endif - "MFN: %lx)\n", + "MFN: %#"PRI_mfn")\n", d->domain_id, ref, #ifndef NDEBUG gfn_x(act->gfn), #endif - act->frame); + mfn_x(act->frame)); active_entry_release(act); } @@ -3955,9 +3966,9 @@ static void gnttab_usage_print(struct domain *rd) first = 0; - /* [0xXXX] ddddd 0xXXXXXX 0xXXXXXXXX ddddd 0xXXXXXX 0xXX */ - printk("[0x%03x] %5d 0x%06lx 0x%08x %5d 0x%06"PRIx64" 0x%02x\n", - ref, act->domid, act->frame, act->pin, + /* [0xXXX] ddddd 0xXXXXX 0xXXXXXXXX ddddd 0xXXXXXX 0xXX */ + printk("[0x%03x] %5d 0x%"PRI_mfn" 0x%08x %5d 0x%06"PRIx64" 0x%02x\n", + ref, act->domid, mfn_x(act->frame), act->pin, sha->domid, frame, status); active_entry_release(act); } diff --git a/xen/include/asm-arm/grant_table.h b/xen/include/asm-arm/grant_table.h index 24644084a1..e52936c79f 100644 --- a/xen/include/asm-arm/grant_table.h +++ b/xen/include/asm-arm/grant_table.h @@ -19,7 +19,7 @@ int create_grant_host_mapping(unsigned long gpaddr, mfn_t mfn, #define 
gnttab_host_mapping_get_page_type(ro, ld, rd) (0) int replace_grant_host_mapping(unsigned long gpaddr, mfn_t mfn, unsigned long new_gpaddr, unsigned int flags); -void gnttab_mark_dirty(struct domain *d, unsigned long l); +void gnttab_mark_dirty(struct domain *d, mfn_t mfn); #define gnttab_create_status_page(d, t, i) do {} while (0) #define gnttab_release_host_mappings(domain) 1 static inline int replace_grant_supported(void) diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h index fc07291ff2..e42030936b 100644 --- a/xen/include/asm-x86/grant_table.h +++ b/xen/include/asm-x86/grant_table.h @@ -80,7 +80,7 @@ static inline unsigned int gnttab_dom0_max(void) #define gnttab_status_gmfn(d, t, i) \ (mfn_to_gmfn(d, gnttab_status_mfn(t, i))) -#define gnttab_mark_dirty(d, f) paging_mark_dirty((d), _mfn(f)) +#define gnttab_mark_dirty(d, f) paging_mark_dirty((d), f) static inline void gnttab_clear_flag(unsigned int nr, uint16_t *st) { From patchwork Tue Apr 3 15:32:50 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 132770 Delivered-To: patch@linaro.org Received: by 10.46.84.29 with SMTP id i29csp3954445ljb; Tue, 3 Apr 2018 08:35:34 -0700 (PDT) X-Google-Smtp-Source: AIpwx48S/oaYzoJElPaZvbr488Vw+RFUQv/VZv00VfHLfrOXDpLplj3KomfVJDJvCefcqJO89/LH X-Received: by 10.107.164.17 with SMTP id n17mr12640865ioe.173.1522769734719; Tue, 03 Apr 2018 08:35:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1522769734; cv=none; d=google.com; s=arc-20160816; b=ba16BUrKF/kKG6KbTIe9IBxcpa5BJsv2acL081PLW/c1e/S8FtJ97n8T+2hRz8SouL Bcmy73ZgNhmU2q28r2m6SuCqwo5lVkJaALvFAcFIqaiKJqeftXmZkWl3Z9tnSsVAdq9l TSWYy7tsmc3R03bpomYQiz1ua+Tbim9vhL7EBxfjs61WGo1ZQ/1ssqXhjC7j8gGavgiC +EofCcehII/ViWk1RXcl6TI/zzJNjX5qmQoxALKf1Zf8QnieN6qo0itPzgIsx9po0Uv+ rABApc1m4a08reeOz7+IcFl4R/qclwp/FiBDcauZrmyai7dN5aIBro5k42hsUhtd+8Iy fgNg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:mime-version:cc :list-subscribe:list-help:list-post:list-unsubscribe:list-id :precedence:subject:references:in-reply-to:message-id:date:to:from :arc-authentication-results; bh=KnHSVY9Itvyj86XXj4O49elFQeY0lWP9NWoBJ1L5D/0=; b=a8lrhPAY8ZWYLnFDZVIjqlYz6rzxLRuImeCIS2OInQKr3U9luOXL06711zsyDjWPIh tcPDGysg1ykJu/TboCytjB4lPkj7VRro52K9oeYZrkshK0I5ZSEORC4lVw4KO0zzbkPn lVHb+BdyeKsQi6Ump/3yOKwqFkVmpFx7UcRWgeKH62TVuJFRkBYnNkHh6odlegcdL7ns gOfNXU9JSbvCb5JUV9uyhnO/NsHnYXF3gnvT/Zvhixx7Y6hYtW1XaMeb83btO/DIofcy zm8HZP5xHDyH8oJwWT5ogcrZLgoR97U7whxz7vLgAZYk0xvsY7AkV+ury+CSG2EqcXev fA/g== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org. 
[192.237.175.120]) by mx.google.com with ESMTPS id s64-v6si573741its.163.2018.04.03.08.35.34 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 03 Apr 2018 08:35:34 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw9-0008B7-HX; Tue, 03 Apr 2018 15:33:29 +0000 Received: from us1-rack-dfw2.inumbo.com ([104.130.134.6]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3Nw8-0008A3-Ow for xen-devel@lists.xen.org; Tue, 03 Apr 2018 15:33:28 +0000 X-Inumbo-ID: 45827d6d-3754-11e8-9728-bc764e045a96 Received: from foss.arm.com (unknown [217.140.101.70]) by us1-rack-dfw2.inumbo.com (Halon) with ESMTP id 45827d6d-3754-11e8-9728-bc764e045a96; Tue, 03 Apr 2018 17:32:51 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EE4AE15AB; Tue, 3 Apr 2018 08:33:26 -0700 (PDT) Received: from e108454-lin.cambridge.arm.com (e108454-lin.cambridge.arm.com [10.1.206.53]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 100C23F24A; Tue, 3 Apr 2018 08:33:25 -0700 (PDT) From: Julien Grall To: xen-devel@lists.xen.org Date: Tue, 3 Apr 2018 16:32:50 +0100 Message-Id: <20180403153251.19595-16-julien.grall@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180403153251.19595-1-julien.grall@arm.com> References: <20180403153251.19595-1-julien.grall@arm.com> Subject: [Xen-devel] [for-4.11][PATCH v7 15/16] xen/x86: Switch mfn_to_page in x86_64/mm.c to use typesafe MFN X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Julien Grall , Jan Beulich MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Other than MFN 0 -> INVALID_MFN, no functional change intended. 
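As background for the typesafe-MFN conversions in this and the preceding patches, the following is a minimal, self-contained sketch of the idiom. It is an illustration only, not Xen's actual typesafe.h (which can compile the wrapper away in release builds), and the map_frame() helper is invented purely for the example. Wrapping the frame number in a distinct mfn_t type, converting only through _mfn()/mfn_x(), and using INVALID_MFN rather than MFN 0 as the "no frame" value is what turns swapped or raw parameters into compile-time errors and makes "not yet valid" explicit:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for Xen's typesafe MFN helpers (illustration only). */
typedef struct { unsigned long m; } mfn_t;

static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }    /* box a raw frame number */
static inline unsigned long mfn_x(mfn_t mfn) { return mfn.m; }        /* unbox, e.g. for printing */
static inline bool mfn_eq(mfn_t a, mfn_t b) { return a.m == b.m; }
static inline mfn_t mfn_add(mfn_t mfn, unsigned long i) { return _mfn(mfn.m + i); }

#define INVALID_MFN _mfn(~0UL)  /* unambiguous "no frame" sentinel, unlike MFN 0 */

/* Hypothetical consumer: a guest address plus a machine frame.  Because the
 * second parameter is mfn_t, passing the two the wrong way round no longer
 * compiles. */
static int map_frame(unsigned long gaddr, mfn_t frame)
{
    if ( mfn_eq(frame, INVALID_MFN) )          /* explicit "not mapped" check */
        return -1;
    printf("map gaddr %#lx -> mfn %#lx\n", gaddr, mfn_x(frame));
    return 0;
}

int main(void)
{
    mfn_t frame = INVALID_MFN;                 /* start out invalid, not MFN 0 */

    map_frame(0x8000UL, frame);                /* rejected: frame still invalid */
    frame = _mfn(0x1234UL);                    /* raw -> typed conversion is explicit */
    return map_frame(0x8000UL, mfn_add(frame, 1));
}

The same pattern underlies gfn_t and the _gfn()/gfn_x() accessors seen in the diffs above.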
Signed-off-by: Julien Grall Acked-by: Jan Beulich --- Cc: Jan Beulich Cc: Andrew Cooper Changes in v6: - s/_mfn(0)/MFN 0/ - Fix typo in the commit message Changes in v5: - Use INVALID_MFN instead of _mfn(0) Changes in v4: - Patch added --- xen/arch/x86/x86_64/mm.c | 46 ++++++++++++++++++++++++++-------------------- 1 file changed, 26 insertions(+), 20 deletions(-) diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c index f6dd95aa47..8d1f130abf 100644 --- a/xen/arch/x86/x86_64/mm.c +++ b/xen/arch/x86/x86_64/mm.c @@ -43,6 +43,8 @@ asm(".file \"" __FILE__ "\""); /* Override macros from asm/page.h to make them work with mfn_t */ #undef page_to_mfn #define page_to_mfn(pg) _mfn(__page_to_mfn(pg)) +#undef mfn_to_page +#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn)) unsigned int __read_mostly m2p_compat_vstart = __HYPERVISOR_COMPAT_VIRT_START; @@ -160,7 +162,8 @@ static int m2p_mapped(unsigned long spfn) static int share_hotadd_m2p_table(struct mem_hotadd_info *info) { - unsigned long i, n, v, m2p_start_mfn = 0; + unsigned long i, n, v; + mfn_t m2p_start_mfn = INVALID_MFN; l3_pgentry_t l3e; l2_pgentry_t l2e; @@ -180,15 +183,16 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info) l2e = l3e_to_l2e(l3e)[l2_table_offset(v)]; if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ) continue; - m2p_start_mfn = l2e_get_pfn(l2e); + m2p_start_mfn = l2e_get_mfn(l2e); } else continue; for ( i = 0; i < n; i++ ) { - struct page_info *page = mfn_to_page(m2p_start_mfn + i); - if (hotadd_mem_valid(m2p_start_mfn + i, info)) + struct page_info *page = mfn_to_page(mfn_add(m2p_start_mfn, i)); + + if ( hotadd_mem_valid(mfn_x(mfn_add(m2p_start_mfn, i)), info) ) share_xen_page_with_privileged_guests(page, SHARE_ro); } } @@ -204,12 +208,13 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info) l2e = l3e_to_l2e(l3e)[l2_table_offset(v)]; if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ) continue; - m2p_start_mfn = l2e_get_pfn(l2e); + m2p_start_mfn = l2e_get_mfn(l2e); for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) { - struct page_info *page = mfn_to_page(m2p_start_mfn + i); - if (hotadd_mem_valid(m2p_start_mfn + i, info)) + struct page_info *page = mfn_to_page(mfn_add(m2p_start_mfn, i)); + + if ( hotadd_mem_valid(mfn_x(mfn_add(m2p_start_mfn, i)), info) ) share_xen_page_with_privileged_guests(page, SHARE_ro); } } @@ -720,10 +725,10 @@ static void cleanup_frame_table(struct mem_hotadd_info *info) unsigned long sva, eva; l3_pgentry_t l3e; l2_pgentry_t l2e; - unsigned long spfn, epfn; + mfn_t spfn, epfn; - spfn = info->spfn; - epfn = info->epfn; + spfn = _mfn(info->spfn); + epfn = _mfn(info->epfn); sva = (unsigned long)mfn_to_page(spfn); eva = (unsigned long)mfn_to_page(epfn); @@ -795,16 +800,17 @@ static int setup_frametable_chunk(void *start, void *end, static int extend_frame_table(struct mem_hotadd_info *info) { - unsigned long cidx, nidx, eidx, spfn, epfn; + unsigned long cidx, nidx, eidx; + mfn_t spfn, epfn; - spfn = info->spfn; - epfn = info->epfn; + spfn = _mfn(info->spfn); + epfn = _mfn(info->epfn); - eidx = (pfn_to_pdx(epfn) + PDX_GROUP_COUNT - 1) / PDX_GROUP_COUNT; - nidx = cidx = pfn_to_pdx(spfn)/PDX_GROUP_COUNT; + eidx = (mfn_to_pdx(epfn) + PDX_GROUP_COUNT - 1) / PDX_GROUP_COUNT; + nidx = cidx = mfn_to_pdx(spfn)/PDX_GROUP_COUNT; - ASSERT( pfn_to_pdx(epfn) <= (DIRECTMAP_SIZE >> PAGE_SHIFT) && - pfn_to_pdx(epfn) <= FRAMETABLE_NR ); + ASSERT( mfn_to_pdx(epfn) <= (DIRECTMAP_SIZE >> PAGE_SHIFT) && + mfn_to_pdx(epfn) <= FRAMETABLE_NR ); if ( test_bit(cidx, pdx_group_valid) ) cidx = 
find_next_zero_bit(pdx_group_valid, eidx, cidx); @@ -866,7 +872,7 @@ void __init subarch_init_memory(void) for ( i = 0; i < n; i++ ) share_xen_page_with_privileged_guests( - mfn_to_page(m2p_start_mfn + i), SHARE_ro); + mfn_to_page(_mfn(m2p_start_mfn + i)), SHARE_ro); } for ( v = RDWR_COMPAT_MPT_VIRT_START; @@ -884,7 +890,7 @@ void __init subarch_init_memory(void) for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) share_xen_page_with_privileged_guests( - mfn_to_page(m2p_start_mfn + i), SHARE_ro); + mfn_to_page(_mfn(m2p_start_mfn + i)), SHARE_ro); } /* Mark all of direct map NX if hardware supports it. */ @@ -1270,7 +1276,7 @@ static int transfer_pages_to_heap(struct mem_hotadd_info *info) */ for (i = info->spfn; i < info->cur; i++) { - pg = mfn_to_page(i); + pg = mfn_to_page(_mfn(i)); pg->count_info = PGC_state_inuse; } From patchwork Tue Apr 3 15:32:51 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 132772 Delivered-To: patch@linaro.org Received: by 10.46.84.29 with SMTP id i29csp3954645ljb; Tue, 3 Apr 2018 08:35:46 -0700 (PDT) X-Google-Smtp-Source: AIpwx48LqYeYNcezPJJ5W8k6hfMsDEpK/R20RdXYaZQ917+v6LLTDlTMtn3GxEsZ+EiyVPHpFbOe X-Received: by 10.107.28.15 with SMTP id c15mr13261388ioc.247.1522769745944; Tue, 03 Apr 2018 08:35:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1522769745; cv=none; d=google.com; s=arc-20160816; b=CE7K31ghnLCtDJ5Aebgx15puUWU4wz1BJ24v4rAnYFvT2J5AIHGfqjnO7xlE0N5W/Q i/GItwHOGyHFJSr/pU0DOiwGUvGf823mzUG+XpAGa+Ds9nHbkF1aj7+a3JMzI+TnU8w5 9wz9uII1mKVIAkqXfm24IvKZoRbtlkZRuo2+fxPNbKwRq5cCBzuI0+GvvupPWPtjogPc wugLjTn24hJyFiriAPqovofFoTiwuL73rNfAe/Uva8pl0VOVI4HSuORrz2zxniAyQlND MWsInqNIWSeXUtqB4hI1zTwmc17Y3UOzgjFTc7W+7bJDQ81sF91amF/6BepRjzL3y/EJ v5aQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:mime-version:cc :list-subscribe:list-help:list-post:list-unsubscribe:list-id :precedence:subject:references:in-reply-to:message-id:date:to:from :arc-authentication-results; bh=vQMWyKkd/Oc+GKadaypp5KwSjlhwZo+qZGf/sy6MX6k=; b=lnXemv+jfyL8b9sIqpdYG+jkJGA9DrR/FSF1peYv8EqjUO5zas2dwBKXxyn6UcXxum 4sE+P4XPrpIHjGvaHaTZmRkJpPVKRMWICj+GchWumezqLuPxeCMJVIE7wQV7sRs40KWz n+7IWtfm8BBfRK113HpLiJXN97Sd+wX5GzwRXJj2lvfNkeu1KY9CDZSfXZmCclYPXa0W bD4rMO3LcKuOkZlYBk06vrlV98nsn7D3bRmrry7jdJt7EJcjDwcncmfDtYCibQSA5KeD Z1Y/rj+R73mDEh2eTyEw8sEZaLM0k3V4G8Vl8APex/bx7S7kK873Ge7uLLdOb9PcyjQ4 htkA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org. 
[192.237.175.120]) by mx.google.com with ESMTPS id r75-v6si570513itb.1.2018.04.03.08.35.45 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 03 Apr 2018 08:35:45 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of xen-devel-bounces@lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3NwG-0008It-7y; Tue, 03 Apr 2018 15:33:36 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1f3NwE-0008Hf-Sm for xen-devel@lists.xen.org; Tue, 03 Apr 2018 15:33:34 +0000 X-Inumbo-ID: 86f4da23-3754-11e8-8249-2fda3a446a53 Received: from foss.arm.com (unknown [217.140.101.70]) by us1-amaz-eas1.inumbo.com (Halon) with ESMTP id 86f4da23-3754-11e8-8249-2fda3a446a53; Tue, 03 Apr 2018 15:34:41 +0000 (UTC) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BAC9C1435; Tue, 3 Apr 2018 08:33:30 -0700 (PDT) Received: from e108454-lin.cambridge.arm.com (e108454-lin.cambridge.arm.com [10.1.206.53]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3CAA03F24A; Tue, 3 Apr 2018 08:33:27 -0700 (PDT) From: Julien Grall To: xen-devel@lists.xen.org Date: Tue, 3 Apr 2018 16:32:51 +0100 Message-Id: <20180403153251.19595-17-julien.grall@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180403153251.19595-1-julien.grall@arm.com> References: <20180403153251.19595-1-julien.grall@arm.com> Subject: [Xen-devel] [for-4.11][PATCH v7 16/16] xen: Convert page_to_mfn and mfn_to_page to use typesafe MFN X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Jun Nakajima , Kevin Tian , Stefano Stabellini , Wei Liu , Suravee Suthikulpanit , Razvan Cojocaru , George Dunlap , Andrew Cooper , Tim Deegan , George Dunlap , Gang Wei , Julien Grall , Paul Durrant , Tamas K Lengyel , Jan Beulich , Shane Wang , Boris Ostrovsky , Ian Jackson MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Most of the users of page_to_mfn and mfn_to_page are either overriding the macros to make them work with mfn_t or use mfn_x/_mfn because the rest of the function use mfn_t. So make page_to_mfn and mfn_to_page return mfn_t by default. The __* version are now dropped as this patch will convert all the remaining non-typesafe callers. Only reasonable clean-ups are done in this patch. The rest will use _mfn/mfn_x for the time being. Lastly, domain_page_to_mfn is also converted to use mfn_t given that most of the callers are now switched to _mfn(domain_page_to_mfn(...)). Signed-off-by: Julien Grall Acked-by: Razvan Cojocaru Reviewed-by: Paul Durrant Reviewed-by: Boris Ostrovsky Reviewed-by: Kevin Tian Reviewed-by: Wei Liu Acked-by: Jan Beulich Reviewed-by: George Dunlap Acked-by: Tim Deegan Acked-by: Stefano Stabellini --- Andrew suggested to drop IS_VALID_PAGE in xen/tmem_xen.h. His comment was: "/sigh This is tautological. 
The definition of a "valid mfn" in this case is one for which we have frametable entry, and by having a struct page_info in our hands, this is by definition true (unless you have a wild pointer, at which point your bug is elsewhere). IS_VALID_PAGE() is only ever used in assertions and never usefully, so instead I would remove it entirely rather than trying to fix it up." I can remove the function in a separate patch at the begining of the series if Konrad (TMEM maintainer) is happy with that. Cc: Stefano Stabellini Cc: Julien Grall Cc: Andrew Cooper Cc: George Dunlap Cc: Ian Jackson Cc: Jan Beulich Cc: Konrad Rzeszutek Wilk Cc: Tamas K Lengyel Cc: Suravee Suthikulpanit Cc: Jun Nakajima Cc: George Dunlap Cc: Gang Wei Cc: Shane Wang Changes in v7: - Add Tim's acked-by Changes in v6: - Add Jan's acked-by - Add George's reviewed-by for x86/mm bits Changes in v5: - Remove some spurious parentheses in the code changed - Remove spurious change in _set_gpfn_from_mfn - Add Razvan's acked-by - Add Paul's reviewed-by - Add Boris's reviewed-by - Add Kevin's reviewed-by - Add Wei's reviewed-by Changes in v4: - Drop __page_to_mfn and __mfn_to_page. Reword the commit title/message to reflect that. Changes in v3: - Rebase on the latest staging and fix some conflicts. Tags haven't be retained. - Switch the printf format to PRI_mfn Changes in v2: - Some part have been moved in separate patch - Remove one spurious comment - Convert domain_page_to_mfn to use mfn_t --- xen/arch/arm/domain_build.c | 2 -- xen/arch/arm/kernel.c | 2 +- xen/arch/arm/mem_access.c | 2 +- xen/arch/arm/mm.c | 8 ++++---- xen/arch/arm/p2m.c | 10 ++-------- xen/arch/x86/cpu/vpmu.c | 4 ++-- xen/arch/x86/domain.c | 21 +++++++++++---------- xen/arch/x86/domain_page.c | 6 +++--- xen/arch/x86/hvm/dm.c | 2 +- xen/arch/x86/hvm/dom0_build.c | 6 +++--- xen/arch/x86/hvm/emulate.c | 6 +++--- xen/arch/x86/hvm/hvm.c | 12 ++++++------ xen/arch/x86/hvm/ioreq.c | 4 ++-- xen/arch/x86/hvm/stdvga.c | 2 +- xen/arch/x86/hvm/svm/svm.c | 4 ++-- xen/arch/x86/hvm/viridian.c | 6 +++--- xen/arch/x86/hvm/vmx/vmcs.c | 2 +- xen/arch/x86/hvm/vmx/vmx.c | 10 +++++----- xen/arch/x86/hvm/vmx/vvmx.c | 6 +++--- xen/arch/x86/mm.c | 4 ---- xen/arch/x86/mm/guest_walk.c | 6 +++--- xen/arch/x86/mm/hap/guest_walk.c | 2 +- xen/arch/x86/mm/hap/hap.c | 6 ------ xen/arch/x86/mm/hap/nested_ept.c | 2 +- xen/arch/x86/mm/mem_sharing.c | 5 ----- xen/arch/x86/mm/p2m-ept.c | 8 ++++---- xen/arch/x86/mm/p2m-pod.c | 6 ------ xen/arch/x86/mm/p2m.c | 6 ------ xen/arch/x86/mm/paging.c | 6 ------ xen/arch/x86/mm/shadow/private.h | 16 ++-------------- xen/arch/x86/numa.c | 2 +- xen/arch/x86/physdev.c | 2 +- xen/arch/x86/pv/callback.c | 6 ------ xen/arch/x86/pv/descriptor-tables.c | 6 ------ xen/arch/x86/pv/dom0_build.c | 14 +++++++------- xen/arch/x86/pv/domain.c | 6 ------ xen/arch/x86/pv/emul-gate-op.c | 6 ------ xen/arch/x86/pv/emul-priv-op.c | 10 ---------- xen/arch/x86/pv/grant_table.c | 6 ------ xen/arch/x86/pv/ro-page-fault.c | 6 ------ xen/arch/x86/pv/shim.c | 4 +--- xen/arch/x86/smpboot.c | 6 ------ xen/arch/x86/tboot.c | 4 ++-- xen/arch/x86/traps.c | 4 ++-- xen/arch/x86/x86_64/mm.c | 6 ------ xen/common/domain.c | 4 ++-- xen/common/grant_table.c | 6 ------ xen/common/kimage.c | 6 ------ xen/common/memory.c | 6 ------ xen/common/page_alloc.c | 6 ------ xen/common/tmem.c | 2 +- xen/common/tmem_xen.c | 4 ---- xen/common/trace.c | 4 ++-- xen/common/vmap.c | 6 +----- xen/common/xenoprof.c | 2 -- xen/drivers/passthrough/amd/iommu_map.c | 12 ++++++------ xen/drivers/passthrough/iommu.c | 2 +- 
xen/drivers/passthrough/x86/iommu.c | 2 +- xen/include/asm-arm/mm.h | 20 ++++++++++---------- xen/include/asm-arm/p2m.h | 4 ++-- xen/include/asm-x86/mm.h | 6 +++--- xen/include/asm-x86/p2m.h | 2 +- xen/include/asm-x86/page.h | 32 +++++++++++++++----------------- xen/include/xen/domain_page.h | 8 ++++---- xen/include/xen/mm.h | 5 ----- xen/include/xen/tmem_xen.h | 2 +- 66 files changed, 129 insertions(+), 282 deletions(-) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 9ef9030253..11cdf05091 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -49,8 +49,6 @@ struct map_range_data /* Override macros from asm/page.h to make them work with mfn_t */ #undef virt_to_mfn #define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) -#undef page_to_mfn -#define page_to_mfn(pg) _mfn(__page_to_mfn(pg)) //#define DEBUG_11_ALLOCATION #ifdef DEBUG_11_ALLOCATION diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c index 2fb0b9684d..8fdfd91543 100644 --- a/xen/arch/arm/kernel.c +++ b/xen/arch/arm/kernel.c @@ -286,7 +286,7 @@ static __init int kernel_decompress(struct bootmodule *mod) iounmap(input); return -ENOMEM; } - mfn = _mfn(page_to_mfn(pages)); + mfn = page_to_mfn(pages); output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT); rc = perform_gunzip(output, input, size); diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c index 11c2b03b7b..ae2686ffa2 100644 --- a/xen/arch/arm/mem_access.c +++ b/xen/arch/arm/mem_access.c @@ -212,7 +212,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag, if ( t != p2m_ram_rw ) goto err; - page = mfn_to_page(mfn_x(mfn)); + page = mfn_to_page(mfn); if ( unlikely(!get_page(page, v->domain)) ) page = NULL; diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index eb3659f913..a6de77c28c 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -477,7 +477,7 @@ void unmap_domain_page(const void *va) local_irq_restore(flags); } -unsigned long domain_page_map_to_mfn(const void *ptr) +mfn_t domain_page_map_to_mfn(const void *ptr) { unsigned long va = (unsigned long)ptr; lpae_t *map = this_cpu(xen_dommap); @@ -485,12 +485,12 @@ unsigned long domain_page_map_to_mfn(const void *ptr) unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK; if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END ) - return __virt_to_mfn(va); + return virt_to_mfn(va); ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES); ASSERT(map[slot].pt.avail != 0); - return map[slot].pt.base + offset; + return _mfn(map[slot].pt.base + offset); } #endif @@ -1282,7 +1282,7 @@ int xenmem_add_to_physmap_one( return -EINVAL; } - mfn = _mfn(page_to_mfn(page)); + mfn = page_to_mfn(page); t = p2m_map_foreign; rcu_unlock_domain(od); diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index 5de82aafe1..d43c3aa896 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -37,12 +37,6 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT; #define P2M_ROOT_PAGES (1<domain_id, addr); - printk("P2M @ %p mfn:0x%lx\n", - p2m->root, __page_to_mfn(p2m->root)); + printk("P2M @ %p mfn:%#"PRI_mfn"\n", + p2m->root, mfn_x(page_to_mfn(p2m->root))); dump_pt_walk(page_to_maddr(p2m->root), addr, P2M_ROOT_LEVEL, P2M_ROOT_PAGES); diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c index 7baf4614be..b978e05613 100644 --- a/xen/arch/x86/cpu/vpmu.c +++ b/xen/arch/x86/cpu/vpmu.c @@ -653,7 +653,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params) { struct vcpu *v; struct vpmu_struct *vpmu; - uint64_t mfn; + mfn_t 
mfn; void *xenpmu_data; if ( (params->vcpu >= d->max_vcpus) || (d->vcpu[params->vcpu] == NULL) ) @@ -675,7 +675,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params) if ( xenpmu_data ) { mfn = domain_page_map_to_mfn(xenpmu_data); - ASSERT(mfn_valid(_mfn(mfn))); + ASSERT(mfn_valid(mfn)); unmap_domain_page_global(xenpmu_data); put_page_and_type(mfn_to_page(mfn)); } diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c index fbb320da9c..e10a1a2e66 100644 --- a/xen/arch/x86/domain.c +++ b/xen/arch/x86/domain.c @@ -195,7 +195,7 @@ void dump_pageframe_info(struct domain *d) } } printk(" DomPage %p: caf=%08lx, taf=%" PRtype_info "\n", - _p(page_to_mfn(page)), + _p(mfn_x(page_to_mfn(page))), page->count_info, page->u.inuse.type_info); } spin_unlock(&d->page_alloc_lock); @@ -208,7 +208,7 @@ void dump_pageframe_info(struct domain *d) page_list_for_each ( page, &d->xenpage_list ) { printk(" XenPage %p: caf=%08lx, taf=%" PRtype_info "\n", - _p(page_to_mfn(page)), + _p(mfn_x(page_to_mfn(page))), page->count_info, page->u.inuse.type_info); } spin_unlock(&d->page_alloc_lock); @@ -637,7 +637,8 @@ int arch_domain_soft_reset(struct domain *d) struct page_info *page = virt_to_page(d->shared_info), *new_page; int ret = 0; struct domain *owner; - unsigned long mfn, gfn; + mfn_t mfn; + unsigned long gfn; p2m_type_t p2mt; unsigned int i; @@ -671,7 +672,7 @@ int arch_domain_soft_reset(struct domain *d) ASSERT( owner == d ); mfn = page_to_mfn(page); - gfn = mfn_to_gmfn(d, mfn); + gfn = mfn_to_gmfn(d, mfn_x(mfn)); /* * gfn == INVALID_GFN indicates that the shared_info page was never mapped @@ -680,7 +681,7 @@ int arch_domain_soft_reset(struct domain *d) if ( gfn == gfn_x(INVALID_GFN) ) goto exit_put_page; - if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn ) + if ( !mfn_eq(get_gfn_query(d, gfn, &p2mt), mfn) ) { printk(XENLOG_G_ERR "Failed to get Dom%d's shared_info GFN (%lx)\n", d->domain_id, gfn); @@ -697,7 +698,7 @@ int arch_domain_soft_reset(struct domain *d) goto exit_put_gfn; } - ret = guest_physmap_remove_page(d, _gfn(gfn), _mfn(mfn), PAGE_ORDER_4K); + ret = guest_physmap_remove_page(d, _gfn(gfn), mfn, PAGE_ORDER_4K); if ( ret ) { printk(XENLOG_G_ERR "Failed to remove Dom%d's shared_info frame %lx\n", @@ -706,7 +707,7 @@ int arch_domain_soft_reset(struct domain *d) goto exit_put_gfn; } - ret = guest_physmap_add_page(d, _gfn(gfn), _mfn(page_to_mfn(new_page)), + ret = guest_physmap_add_page(d, _gfn(gfn), page_to_mfn(new_page), PAGE_ORDER_4K); if ( ret ) { @@ -1002,7 +1003,7 @@ int arch_set_info_guest( { if ( (page->u.inuse.type_info & PGT_type_mask) == PGT_l4_page_table ) - done = !fill_ro_mpt(_mfn(page_to_mfn(page))); + done = !fill_ro_mpt(page_to_mfn(page)); page_unlock(page); } @@ -1131,7 +1132,7 @@ int arch_set_info_guest( l4_pgentry_t *l4tab; l4tab = map_domain_page(pagetable_get_mfn(v->arch.guest_table)); - *l4tab = l4e_from_pfn(page_to_mfn(cr3_page), + *l4tab = l4e_from_mfn(page_to_mfn(cr3_page), _PAGE_PRESENT|_PAGE_RW|_PAGE_USER|_PAGE_ACCESSED); unmap_domain_page(l4tab); } @@ -2000,7 +2001,7 @@ int domain_relinquish_resources(struct domain *d) if ( d->arch.pirq_eoi_map != NULL ) { unmap_domain_page_global(d->arch.pirq_eoi_map); - put_page_and_type(mfn_to_page(d->arch.pirq_eoi_map_mfn)); + put_page_and_type(mfn_to_page(_mfn(d->arch.pirq_eoi_map_mfn))); d->arch.pirq_eoi_map = NULL; d->arch.auto_unmask = 0; } diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c index b5780f201f..11b6a5421a 100644 --- a/xen/arch/x86/domain_page.c +++ b/xen/arch/x86/domain_page.c @@ 
-330,13 +330,13 @@ void unmap_domain_page_global(const void *ptr) } /* Translate a map-domain-page'd address to the underlying MFN */ -unsigned long domain_page_map_to_mfn(const void *ptr) +mfn_t domain_page_map_to_mfn(const void *ptr) { unsigned long va = (unsigned long)ptr; const l1_pgentry_t *pl1e; if ( va >= DIRECTMAP_VIRT_START ) - return virt_to_mfn(ptr); + return _mfn(virt_to_mfn(ptr)); if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END ) { @@ -349,5 +349,5 @@ unsigned long domain_page_map_to_mfn(const void *ptr) pl1e = &__linear_l1_table[l1_linear_offset(va)]; } - return l1e_get_pfn(*pl1e); + return l1e_get_mfn(*pl1e); } diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c index 96b0d13f2f..5575e6da29 100644 --- a/xen/arch/x86/hvm/dm.c +++ b/xen/arch/x86/hvm/dm.c @@ -193,7 +193,7 @@ static int modified_memory(struct domain *d, * These are most probably not page tables any more * don't take a long time and don't die either. */ - sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0); + sh_remove_shadows(d, page_to_mfn(page), 1, 0); put_page(page); } } diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c index d3f65eadbe..b237508072 100644 --- a/xen/arch/x86/hvm/dom0_build.c +++ b/xen/arch/x86/hvm/dom0_build.c @@ -120,7 +120,7 @@ static int __init pvh_populate_memory_range(struct domain *d, continue; } - rc = guest_physmap_add_page(d, _gfn(start), _mfn(page_to_mfn(page)), + rc = guest_physmap_add_page(d, _gfn(start), page_to_mfn(page), order); if ( rc != 0 ) { @@ -270,7 +270,7 @@ static int __init pvh_setup_vmx_realmode_helpers(struct domain *d) } write_32bit_pse_identmap(ident_pt); unmap_domain_page(ident_pt); - put_page(mfn_to_page(mfn_x(mfn))); + put_page(mfn_to_page(mfn)); d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] = gaddr; if ( pvh_add_mem_range(d, gaddr, gaddr + PAGE_SIZE, E820_RESERVED) ) printk("Unable to set identity page tables as reserved in the memory map\n"); @@ -288,7 +288,7 @@ static void __init pvh_steal_low_ram(struct domain *d, unsigned long start, for ( mfn = start; mfn < start + nr_pages; mfn++ ) { - struct page_info *pg = mfn_to_page(mfn); + struct page_info *pg = mfn_to_page(_mfn(mfn)); int rc; rc = unshare_xen_page_with_guest(pg, dom_io); diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 4d13b87bae..213426db59 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -591,7 +591,7 @@ static void *hvmemul_map_linear_addr( goto unhandleable; } - *mfn++ = _mfn(page_to_mfn(page)); + *mfn++ = page_to_mfn(page); if ( p2m_is_discard_write(p2mt) ) { @@ -623,7 +623,7 @@ static void *hvmemul_map_linear_addr( out: /* Drop all held references. */ while ( mfn-- > hvmemul_ctxt->mfn ) - put_page(mfn_to_page(mfn_x(*mfn))); + put_page(mfn_to_page(*mfn)); return err; } @@ -649,7 +649,7 @@ static void hvmemul_unmap_linear_addr( { ASSERT(mfn_valid(*mfn)); paging_mark_dirty(currd, *mfn); - put_page(mfn_to_page(mfn_x(*mfn))); + put_page(mfn_to_page(*mfn)); *mfn++ = _mfn(0); /* Clean slot for map()'s error checking. 
*/ } diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index 569b124603..0316c65b10 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -2254,7 +2254,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer) v->arch.guest_table = pagetable_from_page(page); HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR3 value = %lx, mfn = %lx", - v->arch.hvm_vcpu.guest_cr[3], page_to_mfn(page)); + v->arch.hvm_vcpu.guest_cr[3], mfn_x(page_to_mfn(page))); } } else if ( !(value & X86_CR0_PG) && (old_value & X86_CR0_PG) ) @@ -2638,7 +2638,7 @@ void *hvm_map_guest_frame_ro(unsigned long gfn, bool_t permanent) void hvm_unmap_guest_frame(void *p, bool_t permanent) { - unsigned long mfn; + mfn_t mfn; struct page_info *page; if ( !p ) @@ -2659,7 +2659,7 @@ void hvm_unmap_guest_frame(void *p, bool_t permanent) list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list) if ( track->page == page ) { - paging_mark_dirty(d, _mfn(mfn)); + paging_mark_dirty(d, mfn); list_del(&track->list); xfree(track); break; @@ -2676,7 +2676,7 @@ void hvm_mapped_guest_frames_mark_dirty(struct domain *d) spin_lock(&d->arch.hvm_domain.write_map.lock); list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list) - paging_mark_dirty(d, _mfn(page_to_mfn(track->page))); + paging_mark_dirty(d, page_to_mfn(track->page)); spin_unlock(&d->arch.hvm_domain.write_map.lock); } @@ -3250,8 +3250,8 @@ static enum hvm_translation_result __hvm_copy( if ( xchg(&lastpage, gfn_x(gfn)) != gfn_x(gfn) ) dprintk(XENLOG_G_DEBUG, - "%pv attempted write to read-only gfn %#lx (mfn=%#lx)\n", - v, gfn_x(gfn), page_to_mfn(page)); + "%pv attempted write to read-only gfn %#lx (mfn=%#"PRI_mfn")\n", + v, gfn_x(gfn), mfn_x(page_to_mfn(page))); } else { diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 5b469f4b12..860dd51795 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -272,7 +272,7 @@ static void hvm_remove_ioreq_gfn( struct domain *d, struct hvm_ioreq_page *iorp) { if ( guest_physmap_remove_page(d, _gfn(iorp->gfn), - _mfn(page_to_mfn(iorp->page)), 0) ) + page_to_mfn(iorp->page), 0) ) domain_crash(d); clear_page(iorp->va); } @@ -285,7 +285,7 @@ static int hvm_add_ioreq_gfn( clear_page(iorp->va); rc = guest_physmap_add_page(d, _gfn(iorp->gfn), - _mfn(page_to_mfn(iorp->page)), 0); + page_to_mfn(iorp->page), 0); if ( rc == 0 ) paging_mark_pfn_dirty(d, _pfn(iorp->gfn)); diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c index 088fbdf8ce..925bab2438 100644 --- a/xen/arch/x86/hvm/stdvga.c +++ b/xen/arch/x86/hvm/stdvga.c @@ -590,7 +590,7 @@ void stdvga_init(struct domain *d) if ( pg == NULL ) break; s->vram_page[i] = pg; - clear_domain_page(_mfn(page_to_mfn(pg))); + clear_domain_page(page_to_mfn(pg)); } if ( i == ARRAY_SIZE(s->vram_page) ) diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c index 1e98adb7c7..c7616456dd 100644 --- a/xen/arch/x86/hvm/svm/svm.c +++ b/xen/arch/x86/hvm/svm/svm.c @@ -1571,7 +1571,7 @@ static int svm_cpu_up_prepare(unsigned int cpu) if ( !pg ) goto err; - clear_domain_page(_mfn(page_to_mfn(pg))); + clear_domain_page(page_to_mfn(pg)); *this_hsa = page_to_maddr(pg); } @@ -1581,7 +1581,7 @@ static int svm_cpu_up_prepare(unsigned int cpu) if ( !pg ) goto err; - clear_domain_page(_mfn(page_to_mfn(pg))); + clear_domain_page(page_to_mfn(pg)); *this_vmcb = page_to_maddr(pg); } diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c index 70aab520bc..d6aa89d0b7 100644 --- a/xen/arch/x86/hvm/viridian.c +++ 
@@ -354,7 +354,7 @@ static void enable_hypercall_page(struct domain *d)
         if ( page )
             put_page(page);
         gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
+                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
         return;
     }
 
@@ -414,7 +414,7 @@ static void initialize_vp_assist(struct vcpu *v)
 
  fail:
     gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n", gmfn,
-             page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
+             mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
 }
 
 static void teardown_vp_assist(struct vcpu *v)
@@ -492,7 +492,7 @@ static void update_reference_tsc(struct domain *d, bool_t initialize)
         if ( page )
             put_page(page);
         gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, page ? page_to_mfn(page) : mfn_x(INVALID_MFN));
+                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
         return;
     }
 
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 326dd024de..2c5ef36e5e 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1434,7 +1434,7 @@ int vmx_vcpu_enable_pml(struct vcpu *v)
 
     vmx_vmcs_enter(v);
 
-    __vmwrite(PML_ADDRESS, page_to_mfn(v->arch.hvm_vmx.pml_pg) << PAGE_SHIFT);
+    __vmwrite(PML_ADDRESS, page_to_maddr(v->arch.hvm_vmx.pml_pg));
     __vmwrite(GUEST_PML_INDEX, NR_PML_ENTRIES - 1);
 
     v->arch.hvm_vmx.secondary_exec_control |= SECONDARY_EXEC_ENABLE_PML;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b2fdbf0ef0..691c88e9b3 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2925,7 +2925,7 @@ gp_fault:
 static int vmx_alloc_vlapic_mapping(struct domain *d)
 {
     struct page_info *pg;
-    unsigned long mfn;
+    mfn_t mfn;
 
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
@@ -2934,10 +2934,10 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
     if ( !pg )
         return -ENOMEM;
     mfn = page_to_mfn(pg);
-    clear_domain_page(_mfn(mfn));
+    clear_domain_page(mfn);
     share_xen_page_with_guest(pg, d, SHARE_rw);
-    d->arch.hvm_domain.vmx.apic_access_mfn = mfn;
-    set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), _mfn(mfn),
+    d->arch.hvm_domain.vmx.apic_access_mfn = mfn_x(mfn);
+    set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
                        PAGE_ORDER_4K, p2m_get_hostp2m(d)->default_access);
 
     return 0;
@@ -2948,7 +2948,7 @@ static void vmx_free_vlapic_mapping(struct domain *d)
     unsigned long mfn = d->arch.hvm_domain.vmx.apic_access_mfn;
 
     if ( mfn != 0 )
-        free_shared_domheap_page(mfn_to_page(mfn));
+        free_shared_domheap_page(mfn_to_page(_mfn(mfn)));
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dcd3b28f86..f66d62d716 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -84,7 +84,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
     }
     v->arch.hvm_vmx.vmread_bitmap = vmread_bitmap;
 
-    clear_domain_page(_mfn(page_to_mfn(vmread_bitmap)));
+    clear_domain_page(page_to_mfn(vmread_bitmap));
 
     vmwrite_bitmap = alloc_domheap_page(NULL, 0);
     if ( !vmwrite_bitmap )
@@ -1733,7 +1733,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
             nvcpu->nv_vvmcx = vvmcx;
             nvcpu->nv_vvmcxaddr = gpa;
             v->arch.hvm_vmx.vmcs_shadow_maddr =
-                pfn_to_paddr(domain_page_map_to_mfn(vvmcx));
+                mfn_to_maddr(domain_page_map_to_mfn(vvmcx));
         }
         else
         {
@@ -1819,7 +1819,7 @@ int nvmx_handle_vmclear(struct cpu_user_regs *regs)
         {
             if ( writable )
                 clear_vvmcs_launched(&nvmx->launched_list,
-                                     domain_page_map_to_mfn(vvmcs));
+                                     mfn_x(domain_page_map_to_mfn(vvmcs)));
             else
                 rc = VMFAIL_VALID;
             hvm_unmap_guest_frame(vvmcs, 0);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ec61887d76..e4ac5cce9f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -131,10 +131,6 @@
 #include "pv/mm.h"
 
 /* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
 #undef virt_to_mfn
 #define virt_to_mfn(v) _mfn(__virt_to_mfn(v))
 
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 6055fec1ad..f67aeda3d0 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -469,20 +469,20 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     if ( l3p )
     {
         unmap_domain_page(l3p);
-        put_page(mfn_to_page(mfn_x(gw->l3mfn)));
+        put_page(mfn_to_page(gw->l3mfn));
     }
 #endif
 #if GUEST_PAGING_LEVELS >= 3
     if ( l2p )
     {
         unmap_domain_page(l2p);
-        put_page(mfn_to_page(mfn_x(gw->l2mfn)));
+        put_page(mfn_to_page(gw->l2mfn));
     }
 #endif
     if ( l1p )
     {
         unmap_domain_page(l1p);
-        put_page(mfn_to_page(mfn_x(gw->l1mfn)));
+        put_page(mfn_to_page(gw->l1mfn));
     }
 
     return walk_ok;
diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
index c550017ba4..cb3f9cebe7 100644
--- a/xen/arch/x86/mm/hap/guest_walk.c
+++ b/xen/arch/x86/mm/hap/guest_walk.c
@@ -83,7 +83,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
         *pfec &= ~PFEC_page_present;
         goto out_tweak_pfec;
     }
-    top_mfn = _mfn(page_to_mfn(top_page));
+    top_mfn = page_to_mfn(top_page);
 
     /* Map the top-level table and call the tree-walker */
     ASSERT(mfn_valid(top_mfn));
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index b76e6b8c6b..812a8405df 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -42,12 +42,6 @@
 
 #include "private.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 /************************************************/
 /*          HAP VRAM TRACKING SUPPORT           */
 /************************************************/
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 14b1bb01e9..1738df69f6 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -173,7 +173,7 @@ nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
             goto map_err;
         gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
         unmap_domain_page(lxp);
-        put_page(mfn_to_page(mfn_x(lxmfn)));
+        put_page(mfn_to_page(lxmfn));
 
         if ( nept_non_present_check(gw->lxe[lvl]) )
             goto non_present;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 57f54c55c8..fad8a9df13 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -152,11 +152,6 @@ static inline shr_handle_t get_next_handle(void)
 #define mem_sharing_enabled(d) \
     (is_hvm_domain(d) && (d)->arch.hvm_domain.mem_sharing_enabled)
 
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 static atomic_t nr_saved_mfns   = ATOMIC_INIT(0);
 static atomic_t nr_shared_mfns  = ATOMIC_INIT(0);
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 66dbb3e83a..14b593923b 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -74,13 +74,13 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
                 goto out;
 
             rc = -ESRCH;
-            fdom = page_get_owner(mfn_to_page(new.mfn));
+            fdom = page_get_owner(mfn_to_page(_mfn(new.mfn)));
             if ( fdom == NULL )
                 goto out;
 
             /* get refcount on the page */
             rc = -EBUSY;
-            if ( !get_page(mfn_to_page(new.mfn), fdom) )
+            if ( !get_page(mfn_to_page(_mfn(new.mfn)), fdom) )
                 goto out;
         }
     }
@@ -91,7 +91,7 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
     write_atomic(&entryptr->epte, new.epte);
 
     if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
-        put_page(mfn_to_page(oldmfn));
+        put_page(mfn_to_page(_mfn(oldmfn)));
 
     rc = 0;
 
@@ -270,7 +270,7 @@ static void ept_free_entry(struct p2m_domain *p2m, ept_entry_t *ept_entry, int l
     }
 
     p2m_tlb_flush_sync(p2m);
-    p2m_free_ptp(p2m, mfn_to_page(ept_entry->mfn));
+    p2m_free_ptp(p2m, mfn_to_page(_mfn(ept_entry->mfn)));
 }
 
 static bool_t ept_split_super_page(struct p2m_domain *p2m,
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index fa13e07f7c..631e9aec33 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -29,12 +29,6 @@
 
 #include "mm-locks.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 #define superpage_aligned(_x)  (((_x)&(SUPERPAGE_PAGES-1))==0)
 
 /* Enforce lock ordering when grabbing the "external" page_alloc lock */
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 48e50fb5d8..9ce0a5c9e1 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -47,12 +47,6 @@ bool_t __initdata opt_hap_1gb = 1, __initdata opt_hap_2mb = 1;
 boolean_param("hap_1gb", opt_hap_1gb);
 boolean_param("hap_2mb", opt_hap_2mb);
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
 
 /* Init the datastructures for later use by the p2m code */
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 8a658b9118..2b0445ffe9 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -47,12 +47,6 @@
 /* Per-CPU variable for enforcing the lock ordering */
 DEFINE_PER_CPU(int, mm_lock_level);
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
 /************************************************/
 /*              LOG DIRTY SUPPORT               */
 /************************************************/
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 2dee084642..691bcf6db0 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -315,7 +315,7 @@ static inline int page_is_out_of_sync(struct page_info *p)
 
 static inline int mfn_is_out_of_sync(mfn_t gmfn)
 {
-    return page_is_out_of_sync(mfn_to_page(mfn_x(gmfn)));
+    return page_is_out_of_sync(mfn_to_page(gmfn));
 }
 
 static inline int page_oos_may_write(struct page_info *p)
@@ -326,7 +326,7 @@ static inline int page_oos_may_write(struct page_info *p)
 
 static inline int mfn_oos_may_write(mfn_t gmfn)
 {
-    return page_oos_may_write(mfn_to_page(mfn_x(gmfn)));
+    return page_oos_may_write(mfn_to_page(gmfn));
 }
 
 #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
@@ -455,18 +455,6 @@ void sh_reset_l3_up_pointers(struct vcpu *v);
  * MFN/page-info handling
  */
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(_m) __mfn_to_page(mfn_x(_m))
-#undef page_to_mfn
-#define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
-
-/* Override pagetable_t <-> struct page_info conversions to work with mfn_t */
-#undef pagetable_get_page
-#define pagetable_get_page(x)   mfn_to_page(pagetable_get_mfn(x))
-#undef pagetable_from_page
-#define pagetable_from_page(pg) pagetable_from_mfn(page_to_mfn(pg))
-
 #define backpointer(sp) _mfn(pdx_to_pfn((unsigned long)(sp)->v.sh.back))
 static inline unsigned long __backpointer(const struct page_info *sp)
 {
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 4fc967f893..a87987da6f 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -430,7 +430,7 @@ static void dump_numa(unsigned char key)
         spin_lock(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
-            i = phys_to_nid((paddr_t)page_to_mfn(page) << PAGE_SHIFT);
+            i = phys_to_nid(page_to_maddr(page));
             page_num_node[i]++;
         }
         spin_unlock(&d->page_alloc_lock);
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 984491c3dc..b87ec9034c 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -239,7 +239,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         }
 
         if ( cmpxchg(&currd->arch.pirq_eoi_map_mfn,
-                     0, page_to_mfn(page)) != 0 )
+                     0, mfn_x(page_to_mfn(page))) != 0 )
         {
             put_page_and_type(page);
             ret = -EBUSY;
diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
index 29ae692855..2550a726d2 100644
--- a/xen/arch/x86/pv/callback.c
+++ b/xen/arch/x86/pv/callback.c
@@ -31,12 +31,6 @@
 
 #include
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 static int register_guest_nmi_callback(unsigned long address)
 {
     struct vcpu *curr = current;
diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
index b418bbb581..71bf92713e 100644
--- a/xen/arch/x86/pv/descriptor-tables.c
+++ b/xen/arch/x86/pv/descriptor-tables.c
@@ -25,12 +25,6 @@
 #include
 #include
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 /*
  * Flush the LDT, dropping any typerefs.  Returns a boolean indicating whether
  * mappings have been removed (i.e. a TLB flush is needed).
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 0bd2f1bf90..5b4325b87f 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -64,7 +64,7 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
     for ( count = 0; count < nr_pt_pages; count++ )
     {
         l1e_remove_flags(*pl1e, _PAGE_RW);
-        page = mfn_to_page(l1e_get_pfn(*pl1e));
+        page = mfn_to_page(l1e_get_mfn(*pl1e));
 
         /* Read-only mapping + PGC_allocated + page-table page. */
         page->count_info         = PGC_allocated | 3;
@@ -496,7 +496,7 @@ int __init dom0_construct_pv(struct domain *d,
         page = alloc_domheap_pages(d, order, 0);
         if ( page == NULL )
             panic("Not enough RAM for domain 0 allocation");
-        alloc_spfn = page_to_mfn(page);
+        alloc_spfn = mfn_x(page_to_mfn(page));
         alloc_epfn = alloc_spfn + d->tot_pages;
 
         if ( initrd_len )
@@ -524,12 +524,12 @@ int __init dom0_construct_pv(struct domain *d,
                 mpt_alloc = (paddr_t)initrd->mod_start << PAGE_SHIFT;
                 init_domheap_pages(mpt_alloc,
                                    mpt_alloc + PAGE_ALIGN(initrd_len));
-                initrd->mod_start = initrd_mfn = page_to_mfn(page);
+                initrd->mod_start = initrd_mfn = mfn_x(page_to_mfn(page));
             }
             else
             {
                 while ( count-- )
-                    if ( assign_pages(d, mfn_to_page(mfn++), 0, 0) )
+                    if ( assign_pages(d, mfn_to_page(_mfn(mfn++)), 0, 0) )
                         BUG();
             }
             initrd->mod_end = 0;
@@ -661,7 +661,7 @@ int __init dom0_construct_pv(struct domain *d,
                                     L1_PROT : COMPAT_L1_PROT));
         l1tab++;
 
-        page = mfn_to_page(mfn);
+        page = mfn_to_page(_mfn(mfn));
         if ( !page->u.inuse.type_info &&
              !get_page_and_type(page, d, PGT_writable_page) )
             BUG();
@@ -801,7 +801,7 @@ int __init dom0_construct_pv(struct domain *d,
         si->nr_p2m_frames = d->tot_pages - count;
         page_list_for_each ( page, &d->page_list )
         {
-            mfn = page_to_mfn(page);
+            mfn = mfn_x(page_to_mfn(page));
             BUG_ON(SHARED_M2P(get_gpfn_from_mfn(mfn)));
             if ( get_gpfn_from_mfn(mfn) >= count )
             {
@@ -826,7 +826,7 @@ int __init dom0_construct_pv(struct domain *d,
             panic("Not enough RAM for DOM0 reservation");
         while ( pfn < d->tot_pages )
         {
-            mfn = page_to_mfn(page);
+            mfn = mfn_x(page_to_mfn(page));
 #ifndef NDEBUG
 #define pfn (nr_pages - 1 - (pfn - (alloc_epfn - alloc_spfn)))
 #endif
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 01c62e2d45..ac65ba4609 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -11,12 +11,6 @@
 
 #include
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 static void noreturn continue_nonidle_domain(struct vcpu *v)
 {
     check_wakeup_from_wait();
diff --git a/xen/arch/x86/pv/emul-gate-op.c b/xen/arch/x86/pv/emul-gate-op.c
index 14ce95e26e..810c4f7d8c 100644
--- a/xen/arch/x86/pv/emul-gate-op.c
+++ b/xen/arch/x86/pv/emul-gate-op.c
@@ -41,12 +41,6 @@
 
 #include "emulate.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 static int read_gate_descriptor(unsigned int gate_sel,
                                 const struct vcpu *v,
                                 unsigned int *sel,
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index b5c60c1094..b4564080ad 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -43,16 +43,6 @@
 #include "emulate.h"
 #include "mm.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
-/***********************
- * I/O emulation support
- */
-
 struct priv_op_ctxt {
     struct x86_emulate_ctxt ctxt;
     struct {
diff --git a/xen/arch/x86/pv/grant_table.c b/xen/arch/x86/pv/grant_table.c
index 458085e1b6..6b7d855c8a 100644
--- a/xen/arch/x86/pv/grant_table.c
+++ b/xen/arch/x86/pv/grant_table.c
@@ -27,12 +27,6 @@
 
 #include "mm.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 static unsigned int grant_to_pte_flags(unsigned int grant_flags,
                                        unsigned int cache_flags)
 {
diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index 280544a283..aa8d5a7556 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -33,12 +33,6 @@
 #include "emulate.h"
 #include "mm.h"
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 /*********************
  * Writable Pagetables
 */
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index dd76264b21..1299112ce0 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -37,8 +37,6 @@
 
 #include
 
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
 
@@ -846,7 +844,7 @@ static unsigned long batch_memory_op(unsigned int cmd, unsigned int order,
     set_xen_guest_handle(xmr.extent_start, pfns);
     page_list_for_each ( pg, list )
     {
-        pfns[xmr.nr_extents++] = page_to_mfn(pg);
+        pfns[xmr.nr_extents++] = mfn_x(page_to_mfn(pg));
         if ( xmr.nr_extents == ARRAY_SIZE(pfns) || !page_list_next(pg, list) )
         {
             long nr = xen_hypercall_memory_op(cmd, &xmr);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 80549ad925..3d9df498b1 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -48,12 +48,6 @@
 #include
 #include
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 #define setup_trampoline()    (bootsym_phys(trampoline_realmode_entry))
 
 unsigned long __read_mostly trampoline_phys;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 71e757c553..fb4616ae83 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -184,7 +184,7 @@ static void update_pagetable_mac(vmac_ctx_t *ctx)
 
     for ( mfn = 0; mfn < max_page; mfn++ )
     {
-        struct page_info *page = mfn_to_page(mfn);
+        struct page_info *page = mfn_to_page(_mfn(mfn));
 
         if ( !mfn_valid(_mfn(mfn)) )
             continue;
@@ -276,7 +276,7 @@ static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE],
     vmac_set_key((uint8_t *)key, &ctx);
     for ( mfn = 0; mfn < max_page; mfn++ )
     {
-        struct page_info *page = __mfn_to_page(mfn);
+        struct page_info *page = mfn_to_page(_mfn(mfn));
 
         if ( !mfn_valid(_mfn(mfn)) )
             continue;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 4bed9de2c1..63c65699dc 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -825,8 +825,8 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
             }
 
             gdprintk(XENLOG_WARNING,
-                     "Bad GMFN %lx (MFN %lx) to MSR %08x\n",
-                     gmfn, page ? page_to_mfn(page) : -1UL, base);
+                     "Bad GMFN %lx (MFN %#"PRI_mfn") to MSR %08x\n",
+                     gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN), base);
             return 0;
         }
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 8d1f130abf..cca4ae926e 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -40,12 +40,6 @@ asm(".file \"" __FILE__ "\"");
 #include
 #include
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-
 unsigned int __read_mostly m2p_compat_vstart = __HYPERVISOR_COMPAT_VIRT_START;
 
 l2_pgentry_t *compat_idle_pg_table_l2;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index b00cc1f70b..6cbf135457 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1230,7 +1230,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     }
 
     v->vcpu_info = new_info;
-    v->vcpu_info_mfn = _mfn(page_to_mfn(page));
+    v->vcpu_info_mfn = page_to_mfn(page);
 
     /* Set new vcpu_info pointer /before/ setting pending flags. */
     smp_wmb();
@@ -1263,7 +1263,7 @@ void unmap_vcpu_info(struct vcpu *v)
 
     vcpu_info_reset(v); /* NB: Clobbers v->vcpu_info_mfn */
 
-    put_page_and_type(mfn_to_page(mfn_x(mfn)));
+    put_page_and_type(mfn_to_page(mfn));
 }
 
 int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 4bedf5984a..c757b7f6f5 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -40,12 +40,6 @@
 #include
 #include
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-
 /* Per-domain grant information. */
 struct grant_table {
     /*
diff --git a/xen/common/kimage.c b/xen/common/kimage.c
index afd8292cc1..210241dfb7 100644
--- a/xen/common/kimage.c
+++ b/xen/common/kimage.c
@@ -23,12 +23,6 @@
 
 #include
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 /*
  * When kexec transitions to the new kernel there is a one-to-one
  * mapping between physical and virtual addresses.  On processors
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 8c8e979bcf..ca1560d48b 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -33,12 +33,6 @@
 #include
 #endif
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-
 struct memop_args {
     /* INPUT */
     struct domain *domain;     /* Domain to be affected. */
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 6e50fb2621..186b39a6c8 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -151,12 +151,6 @@
 #define p2m_pod_offline_or_broken_replace(pg)   BUG_ON(pg != NULL)
 #endif
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-
 /*
  * Comma-separated list of hexadecimal page numbers containing bad bytes.
  * e.g. 'badpage=0x3f45,0x8a321'.
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 324f42a6f9..c077f87e77 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -243,7 +243,7 @@ static void tmem_persistent_pool_page_put(void *page_va)
     struct page_info *pi;
 
     ASSERT(IS_PAGE_ALIGNED(page_va));
-    pi = mfn_to_page(virt_to_mfn(page_va));
+    pi = mfn_to_page(_mfn(virt_to_mfn(page_va)));
     ASSERT(IS_VALID_PAGE(pi));
     __tmem_free_page_thispool(pi);
 }
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index bd52e44faf..bf7b14f79a 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -14,10 +14,6 @@
 #include
 #include
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 bool __read_mostly opt_tmem;
 boolean_param("tmem", opt_tmem);
 
diff --git a/xen/common/trace.c b/xen/common/trace.c
index 680f6ae21e..8cdc17b731 100644
--- a/xen/common/trace.c
+++ b/xen/common/trace.c
@@ -242,7 +242,7 @@ static int alloc_trace_bufs(unsigned int pages)
         /* Now share the trace pages */
         for ( i = 0; i < pages; i++ )
             share_xen_page_with_privileged_guests(
-                mfn_to_page(t_info_mfn_list[offset + i]), SHARE_rw);
+                mfn_to_page(_mfn(t_info_mfn_list[offset + i])), SHARE_rw);
     }
 
     /* Finally, share the t_info page */
@@ -271,7 +271,7 @@ out_dealloc:
             uint32_t mfn = t_info_mfn_list[offset + i];
             if ( !mfn )
                 break;
-            ASSERT(!(mfn_to_page(mfn)->count_info & PGC_allocated));
+            ASSERT(!(mfn_to_page(_mfn(mfn))->count_info & PGC_allocated));
             free_xenheap_pages(mfn_to_virt(mfn), 0);
         }
     }
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index 04f5db386d..faebc1ddf1 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -9,10 +9,6 @@
 #include
 #include
 
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
 static DEFINE_SPINLOCK(vm_lock);
 static void *__read_mostly vm_base[VMAP_REGION_NR];
 #define vm_bitmap(x) ((unsigned long *)vm_base[x])
@@ -274,7 +270,7 @@ static void *vmalloc_type(size_t size, enum vmap_region type)
 
  error:
     while ( i-- )
-        free_domheap_page(mfn_to_page(mfn_x(mfn[i])));
+        free_domheap_page(mfn_to_page(mfn[i]));
     xfree(mfn);
     return NULL;
 }
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index c1b43034e3..8a72e382e6 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -22,8 +22,6 @@
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
 
 /* Limit amount of pages used for shared buffer (per domain) */
 #define MAX_OPROF_SHARED_PAGES 32
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index fd2327d3e5..70b4345b37 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -451,7 +451,7 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
     BUG_ON( table == NULL || level < IOMMU_PAGING_MODE_LEVEL_1 ||
             level > IOMMU_PAGING_MODE_LEVEL_6 );
 
-    next_table_mfn = page_to_mfn(table);
+    next_table_mfn = mfn_x(page_to_mfn(table));
 
     if ( level == IOMMU_PAGING_MODE_LEVEL_1 )
     {
@@ -493,7 +493,7 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
                 return 1;
             }
 
-            next_table_mfn = page_to_mfn(table);
+            next_table_mfn = mfn_x(page_to_mfn(table));
             set_iommu_pde_present((u32*)pde, next_table_mfn, next_level,
                                   !!IOMMUF_writable, !!IOMMUF_readable);
 
@@ -520,7 +520,7 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
                 unmap_domain_page(next_table_vaddr);
                 return 1;
             }
-            next_table_mfn = page_to_mfn(table);
+            next_table_mfn = mfn_x(page_to_mfn(table));
             set_iommu_pde_present((u32*)pde, next_table_mfn, next_level,
                                   !!IOMMUF_writable, !!IOMMUF_readable);
         }
@@ -577,7 +577,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
         }
 
         new_root_vaddr = __map_domain_page(new_root);
-        old_root_mfn = page_to_mfn(old_root);
+        old_root_mfn = mfn_x(page_to_mfn(old_root));
         set_iommu_pde_present(new_root_vaddr, old_root_mfn, level,
                               !!IOMMUF_writable, !!IOMMUF_readable);
         level++;
@@ -712,7 +712,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         }
 
         /* Deallocate lower level page table */
-        free_amd_iommu_pgtable(mfn_to_page(pt_mfn[merge_level - 1]));
+        free_amd_iommu_pgtable(mfn_to_page(_mfn(pt_mfn[merge_level - 1])));
     }
 
 out:
@@ -802,7 +802,7 @@ void amd_iommu_share_p2m(struct domain *d)
     mfn_t pgd_mfn;
 
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    p2m_table = mfn_to_page(mfn_x(pgd_mfn));
+    p2m_table = mfn_to_page(pgd_mfn);
 
     if ( hd->arch.root_table != p2m_table )
     {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 1aecf7cf34..2c44fabf99 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -184,7 +184,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
 
         page_list_for_each ( page, &d->page_list )
         {
-            unsigned long mfn = page_to_mfn(page);
+            unsigned long mfn = mfn_x(page_to_mfn(page));
             unsigned long gfn = mfn_to_gmfn(d, mfn);
             unsigned int mapping = IOMMUF_readable;
             int ret;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 0253823173..68182afd91 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -58,7 +58,7 @@ int arch_iommu_populate_page_table(struct domain *d)
         if ( is_hvm_domain(d) ||
             (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
         {
-            unsigned long mfn = page_to_mfn(page);
+            unsigned long mfn = mfn_x(page_to_mfn(page));
             unsigned long gfn = mfn_to_gmfn(d, mfn);
 
             if ( gfn != gfn_x(INVALID_GFN) )
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 5a9ca6a55b..835b2634cd 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -138,7 +138,7 @@ extern vaddr_t xenheap_virt_start;
 #endif
 
 #ifdef CONFIG_ARM_32
-#define is_xen_heap_page(page) is_xen_heap_mfn(__page_to_mfn(page))
+#define is_xen_heap_page(page) is_xen_heap_mfn(mfn_x(page_to_mfn(page)))
 #define is_xen_heap_mfn(mfn) ({                                 \
     unsigned long mfn_ = (mfn);                                 \
     (mfn_ >= mfn_x(xenheap_mfn_start) &&                        \
@@ -147,7 +147,7 @@ extern vaddr_t xenheap_virt_start;
 #else
 #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
 #define is_xen_heap_mfn(mfn) \
-    (mfn_valid(_mfn(mfn)) && is_xen_heap_page(__mfn_to_page(mfn)))
+    (mfn_valid(_mfn(mfn)) && is_xen_heap_page(mfn_to_page(_mfn(mfn))))
 #endif
 
 #define is_xen_fixed_mfn(mfn)                     \
@@ -213,12 +213,14 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 })
 
 /* Convert between machine frame numbers and page-info structures. */
-#define __mfn_to_page(mfn) (frame_table + (pfn_to_pdx(mfn) - frametable_base_pdx))
-#define __page_to_mfn(pg)  pdx_to_pfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
+#define mfn_to_page(mfn)                                            \
+    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
+#define page_to_mfn(pg)                                             \
+    pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
 
 /* Convert between machine addresses and page-info structures. */
-#define maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)
-#define page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT)
+#define maddr_to_page(ma) mfn_to_page(maddr_to_mfn(ma))
+#define page_to_maddr(pg) (mfn_to_maddr(page_to_mfn(pg)))
 
 /* Convert between frame number and address formats.  */
 #define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
@@ -228,7 +230,7 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 #define gaddr_to_gfn(ga)    _gfn(paddr_to_pfn(ga))
 #define mfn_to_maddr(mfn)   pfn_to_paddr(mfn_x(mfn))
 #define maddr_to_mfn(ma)    _mfn(paddr_to_pfn(ma))
-#define vmap_to_mfn(va)     paddr_to_pfn(virt_to_maddr((vaddr_t)va))
+#define vmap_to_mfn(va)     maddr_to_mfn(virt_to_maddr((vaddr_t)va))
 #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
 /* Page-align address and convert to frame number format */
@@ -286,8 +288,6 @@ static inline uint64_t gvirt_to_maddr(vaddr_t va, paddr_t *pa,
  * These are overriden in various source files while underscored version
  * remain intact.
  */
-#define mfn_to_page(mfn)    __mfn_to_page(mfn)
-#define page_to_mfn(pg)     __page_to_mfn(pg)
 #define virt_to_mfn(va)     __virt_to_mfn(va)
 #define mfn_to_virt(mfn)    __mfn_to_virt(mfn)
 
@@ -307,7 +307,7 @@ static inline struct page_info *virt_to_page(const void *v)
 
 static inline void *page_to_virt(const struct page_info *pg)
 {
-    return mfn_to_virt(page_to_mfn(pg));
+    return mfn_to_virt(mfn_x(page_to_mfn(pg)));
 }
 
 struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 45ef2cd58b..ff729895ef 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -274,7 +274,7 @@ static inline struct page_info *get_page_from_gfn(
 {
     struct page_info *page;
     p2m_type_t p2mt;
-    unsigned long mfn = mfn_x(p2m_lookup(d, _gfn(gfn), &p2mt));
+    mfn_t mfn = p2m_lookup(d, _gfn(gfn), &p2mt);
 
     if (t)
         *t = p2mt;
@@ -282,7 +282,7 @@ static inline struct page_info *get_page_from_gfn(
     if ( !p2m_is_any_ram(p2mt) )
         return NULL;
 
-    if ( !mfn_valid(_mfn(mfn)) )
+    if ( !mfn_valid(mfn) )
         return NULL;
     page = mfn_to_page(mfn);
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index c115661837..5d78a84148 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -271,7 +271,7 @@ struct page_info
 
 #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
 #define is_xen_heap_mfn(mfn) \
-    (__mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))
+    (__mfn_valid(mfn) && is_xen_heap_page(mfn_to_page(_mfn(mfn))))
 #define is_xen_fixed_mfn(mfn)                     \
     ((((mfn) << PAGE_SHIFT) >= __pa(&_stext)) &&  \
     (((mfn) << PAGE_SHIFT) <= __pa(&__2M_rwdata_end)))
@@ -376,7 +376,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner);
 
 static inline struct page_info *get_page_from_mfn(mfn_t mfn, struct domain *d)
 {
-    struct page_info *page = __mfn_to_page(mfn_x(mfn));
+    struct page_info *page = mfn_to_page(mfn);
 
     if ( unlikely(!mfn_valid(mfn)) || unlikely(!get_page(page, d)) )
     {
@@ -471,7 +471,7 @@ extern paddr_t mem_hotplug;
 #define compat_machine_to_phys_mapping ((unsigned int *)RDWR_COMPAT_MPT_VIRT_START)
 
 #define _set_gpfn_from_mfn(mfn, pfn) ({                        \
-    struct domain *d = page_get_owner(__mfn_to_page(mfn));     \
+    struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn))); \
     unsigned long entry = (d && (d == dom_cow)) ?              \
         SHARED_M2P_ENTRY : (pfn);                              \
     ((void)((mfn) >= (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) / 4 || \
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 2e7aa8fc79..c486b6f8f0 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -488,7 +488,7 @@ static inline struct page_info *get_page_from_gfn(
     /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
     if ( t )
         *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct;
-    page = __mfn_to_page(gfn);
+    page = mfn_to_page(_mfn(gfn));
     return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
 }
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 45ca742678..c1e92937c0 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -88,10 +88,10 @@
     ((paddr_t)(((x).l4 & (PADDR_MASK&PAGE_MASK))))
 
 /* Get pointer to info structure of page mapped by pte (struct page_info *). */
-#define l1e_get_page(x)           (__mfn_to_page(l1e_get_pfn(x)))
-#define l2e_get_page(x)           (__mfn_to_page(l2e_get_pfn(x)))
-#define l3e_get_page(x)           (__mfn_to_page(l3e_get_pfn(x)))
-#define l4e_get_page(x)           (__mfn_to_page(l4e_get_pfn(x)))
+#define l1e_get_page(x)           mfn_to_page(l1e_get_mfn(x))
+#define l2e_get_page(x)           mfn_to_page(l2e_get_mfn(x))
+#define l3e_get_page(x)           mfn_to_page(l3e_get_mfn(x))
+#define l4e_get_page(x)           mfn_to_page(l4e_get_mfn(x))
 
 /* Get pte access flags (unsigned int). */
 #define l1e_get_flags(x)   (get_pte_flags((x).l1))
@@ -157,10 +157,10 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 #define l4e_from_intpte(intpte)    ((l4_pgentry_t) { (intpte_t)(intpte) })
 
 /* Construct a pte from a page pointer and access flags. */
-#define l1e_from_page(page, flags) l1e_from_pfn(__page_to_mfn(page), (flags))
-#define l2e_from_page(page, flags) l2e_from_pfn(__page_to_mfn(page), (flags))
-#define l3e_from_page(page, flags) l3e_from_pfn(__page_to_mfn(page), (flags))
-#define l4e_from_page(page, flags) l4e_from_pfn(__page_to_mfn(page), (flags))
+#define l1e_from_page(page, flags) l1e_from_mfn(page_to_mfn(page), flags)
+#define l2e_from_page(page, flags) l2e_from_mfn(page_to_mfn(page), flags)
+#define l3e_from_page(page, flags) l3e_from_mfn(page_to_mfn(page), flags)
+#define l4e_from_page(page, flags) l4e_from_mfn(page_to_mfn(page), flags)
 
 /* Add extra flags to an existing pte. */
 #define l1e_add_flags(x, flags)    ((x).l1 |= put_pte_flags(flags))
@@ -215,13 +215,13 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 
 /* Page-table type. */
 typedef struct { u64 pfn; } pagetable_t;
 #define pagetable_get_paddr(x)  ((paddr_t)(x).pfn << PAGE_SHIFT)
-#define pagetable_get_page(x)   __mfn_to_page((x).pfn)
+#define pagetable_get_page(x)   mfn_to_page(pagetable_get_mfn(x))
 #define pagetable_get_pfn(x)    ((x).pfn)
 #define pagetable_get_mfn(x)    _mfn(((x).pfn))
 #define pagetable_is_null(x)    ((x).pfn == 0)
 #define pagetable_from_pfn(pfn) ((pagetable_t) { (pfn) })
 #define pagetable_from_mfn(mfn) ((pagetable_t) { mfn_x(mfn) })
-#define pagetable_from_page(pg) pagetable_from_pfn(__page_to_mfn(pg))
+#define pagetable_from_page(pg) pagetable_from_mfn(page_to_mfn(pg))
 #define pagetable_from_paddr(p) pagetable_from_pfn((p)>>PAGE_SHIFT)
 #define pagetable_null()        pagetable_from_pfn(0)
 
@@ -240,12 +240,12 @@ void copy_page_sse2(void *, const void *);
 #define __mfn_to_virt(mfn)  (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
 
 /* Convert between machine frame numbers and page-info structures. */
-#define __mfn_to_page(mfn)  (frame_table + pfn_to_pdx(mfn))
-#define __page_to_mfn(pg)   pdx_to_pfn((unsigned long)((pg) - frame_table))
+#define mfn_to_page(mfn)    (frame_table + mfn_to_pdx(mfn))
+#define page_to_mfn(pg)     pdx_to_mfn((unsigned long)((pg) - frame_table))
 
 /* Convert between machine addresses and page-info structures. */
-#define __maddr_to_page(ma) __mfn_to_page((ma) >> PAGE_SHIFT)
-#define __page_to_maddr(pg) ((paddr_t)__page_to_mfn(pg) << PAGE_SHIFT)
+#define __maddr_to_page(ma) mfn_to_page(maddr_to_mfn(ma))
+#define __page_to_maddr(pg) mfn_to_maddr(page_to_mfn(pg))
 
 /* Convert between frame number and address formats.  */
 #define __pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
@@ -264,8 +264,6 @@ void copy_page_sse2(void *, const void *);
 #define mfn_to_virt(mfn)    __mfn_to_virt(mfn)
 #define virt_to_maddr(va)   __virt_to_maddr((unsigned long)(va))
 #define maddr_to_virt(ma)   __maddr_to_virt((unsigned long)(ma))
-#define mfn_to_page(mfn)    __mfn_to_page(mfn)
-#define page_to_mfn(pg)     __page_to_mfn(pg)
 #define maddr_to_page(ma)   __maddr_to_page(ma)
 #define page_to_maddr(pg)   __page_to_maddr(pg)
 #define virt_to_page(va)    __virt_to_page(va)
@@ -273,7 +271,7 @@ void copy_page_sse2(void *, const void *);
 #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
 #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
 #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va)))
+#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
 #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
 #endif /* !defined(__ASSEMBLY__) */
diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
index 890bae5b9c..32669a3339 100644
--- a/xen/include/xen/domain_page.h
+++ b/xen/include/xen/domain_page.h
@@ -34,7 +34,7 @@ void unmap_domain_page(const void *va);
 /*
  * Given a VA from map_domain_page(), return its underlying MFN.
  */
-unsigned long domain_page_map_to_mfn(const void *va);
+mfn_t domain_page_map_to_mfn(const void *va);
 
 /*
  * Similar to the above calls, except the mapping is accessible in all
@@ -44,11 +44,11 @@ unsigned long domain_page_map_to_mfn(const void *va);
 void *map_domain_page_global(mfn_t mfn);
 void unmap_domain_page_global(const void *va);
 
-#define __map_domain_page(pg) map_domain_page(_mfn(__page_to_mfn(pg)))
+#define __map_domain_page(pg) map_domain_page(page_to_mfn(pg))
 
 static inline void *__map_domain_page_global(const struct page_info *pg)
 {
-    return map_domain_page_global(_mfn(__page_to_mfn(pg)));
+    return map_domain_page_global(page_to_mfn(pg));
 }
 
 #else  /* !CONFIG_DOMAIN_PAGE */
@@ -56,7 +56,7 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
 #define map_domain_page(mfn)                __mfn_to_virt(mfn_x(mfn))
 #define __map_domain_page(pg)               page_to_virt(pg)
 #define unmap_domain_page(va)               ((void)(va))
-#define domain_page_map_to_mfn(va)          virt_to_mfn((unsigned long)(va))
+#define domain_page_map_to_mfn(va)          _mfn(virt_to_mfn((unsigned long)(va)))
 
 static inline void *map_domain_page_global(mfn_t mfn)
 {
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 5a7d25e33f..e928551c91 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -277,13 +277,8 @@ struct page_list_head
 # define PAGE_LIST_NULL ((typeof(((struct page_info){}).list.next))~0)
 
 # if !defined(pdx_to_page) && !defined(page_to_pdx)
-#  if defined(__page_to_mfn) || defined(__mfn_to_page)
-#   define page_to_pdx __page_to_mfn
-#   define pdx_to_page __mfn_to_page
-#  else
 #  define page_to_pdx page_to_mfn
 #  define pdx_to_page mfn_to_page
-#  endif
 # endif
 
 # define PAGE_LIST_HEAD_INIT(name) { NULL, NULL }
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 542c0b3f20..8516a0b131 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -25,7 +25,7 @@ typedef uint32_t pagesize_t;  /* like size_t, must handle largest PAGE_SIZE */
 
 #define IS_PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
-#define IS_VALID_PAGE(_pi)    mfn_valid(_mfn(page_to_mfn(_pi)))
+#define IS_VALID_PAGE(_pi)    mfn_valid(page_to_mfn(_pi))
 
 extern struct page_list_head tmem_page_list;
 extern spinlock_t tmem_page_list_lock;
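As background for the conversion above, here is a simplified sketch of the idea behind the typesafe MFN wrappers that the new mfn_to_page()/page_to_mfn() rely on. This is illustrative only and not part of the patch; the real definitions are generated by Xen's TYPE_SAFE() machinery and differ in detail (in particular they can collapse to plain integers depending on build configuration):

/* Illustrative sketch only -- not the actual Xen definitions. */
typedef struct { unsigned long mfn; } mfn_t;

static inline mfn_t _mfn(unsigned long m)
{
    return (mfn_t){ .mfn = m };
}

static inline unsigned long mfn_x(mfn_t m)
{
    return m.mfn;
}

/*
 * Once page_to_mfn() returns mfn_t rather than unsigned long, mixing up
 * frame-number spaces no longer compiles: a bare gfn or pfn has to be
 * converted explicitly with _mfn(), so such confusion is caught at build
 * time instead of at run time.
 */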