From patchwork Wed Feb 21 14:02:54 2018
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 129064
From: Julien Grall <julien.grall@arm.com>
To: xen-devel@lists.xen.org
Date: Wed, 21 Feb 2018 14:02:54 +0000
Message-Id: <20180221140259.29360-12-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180221140259.29360-1-julien.grall@arm.com>
References: <20180221140259.29360-1-julien.grall@arm.com>
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
 Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich
Subject: [Xen-devel] [PATCH v4 11/16] xen/mm: Switch page_alloc.c to typesafe MFN

No functional change intended.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Wei Liu
---
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: Wei Liu
Cc: Julien Grall

Changes in v4:
    - Patch added
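For reference (not part of the commit message): the typesafe MFN idiom
this patch converts page_alloc.c to is roughly the sketch below. It is
simplified from the TYPE_SAFE() machinery in xen/include/xen/typesafe.h
and the helpers in xen/include/xen/mm.h; in release builds mfn_t falls
back to a plain integer typedef, so the wrapper has no runtime cost.

    /*
     * Debug-build form (sketch only): mfn_t is a distinct struct type,
     * so passing a raw unsigned long -- or a PFN/GFN -- where an MFN is
     * expected becomes a compile error. _mfn()/mfn_x() are the only way
     * in and out of the type.
     */
    typedef struct { unsigned long mfn; } mfn_t;

    static inline mfn_t _mfn(unsigned long m)      /* box a raw MFN */
    {
        return (mfn_t){ m };
    }

    static inline unsigned long mfn_x(mfn_t m)     /* unbox it again */
    {
        return m.mfn;
    }

    /* Helpers such as these keep arithmetic inside the type: */
    static inline mfn_t mfn_add(mfn_t m, unsigned long i)
    {
        return _mfn(mfn_x(m) + i);
    }

    static inline mfn_t mfn_min(mfn_t x, mfn_t y)
    {
        return _mfn(mfn_x(x) < mfn_x(y) ? mfn_x(x) : mfn_x(y));
    }

    /*
     * A braced initializer is valid C for both the struct and the
     * integer form, which is why the first hunk below initialises
     * first_valid_mfn with INVALID_MFN_INITIALIZER rather than
     * INVALID_MFN (i.e. _mfn(~0UL)): a call to an inline function is
     * not a constant expression, so it cannot appear in a static
     * initializer.
     */
    #define INVALID_MFN_INITIALIZER { ~0UL }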
---
 xen/common/page_alloc.c    | 64 ++++++++++++++++++++++++++--------------------
 xen/include/asm-arm/numa.h |  8 +++---
 2 files changed, 41 insertions(+), 31 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 4de8988bea..b0db41feea 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -151,6 +151,12 @@
 #define p2m_pod_offline_or_broken_replace(pg)   BUG_ON(pg != NULL)
 #endif
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef page_to_mfn
+#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
+#undef mfn_to_page
+#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
+
 /*
  * Comma-separated list of hexadecimal page numbers containing bad bytes.
  * e.g. 'badpage=0x3f45,0x8a321'.
@@ -197,7 +203,7 @@ PAGE_LIST_HEAD(page_broken_list);
  * first_valid_mfn is exported because it is use in ARM specific NUMA
  * helpers. See comment in asm-arm/numa.h.
  */
-unsigned long first_valid_mfn = ~0UL;
+mfn_t first_valid_mfn = INVALID_MFN_INITIALIZER;
 
 static struct bootmem_region {
     unsigned long s, e; /* MFNs @s through @e-1 inclusive are free */
@@ -283,7 +289,7 @@ void __init init_boot_pages(paddr_t ps, paddr_t pe)
     if ( pe <= ps )
         return;
 
-    first_valid_mfn = min_t(unsigned long, ps >> PAGE_SHIFT, first_valid_mfn);
+    first_valid_mfn = mfn_min(maddr_to_mfn(ps), first_valid_mfn);
 
     bootmem_region_add(ps >> PAGE_SHIFT, pe >> PAGE_SHIFT);
 
@@ -397,7 +403,7 @@ mfn_t __init alloc_boot_pages(unsigned long nr_pfns, unsigned long pfn_align)
 
 #define bits_to_zone(b) (((b) < (PAGE_SHIFT + 1)) ? 1 : ((b) - PAGE_SHIFT))
 #define page_to_zone(pg) (is_xen_heap_page(pg) ? MEMZONE_XEN :  \
-                          (flsl(page_to_mfn(pg)) ? : 1))
+                          (flsl(mfn_x(page_to_mfn(pg))) ? : 1))
 
 typedef struct page_list_head heap_by_zone_and_order_t[NR_ZONES][MAX_ORDER+1];
 static heap_by_zone_and_order_t *_heap[MAX_NUMNODES];
@@ -729,7 +735,7 @@ static void page_list_add_scrub(struct page_info *pg, unsigned int node,
 static void poison_one_page(struct page_info *pg)
 {
 #ifdef CONFIG_SCRUB_DEBUG
-    mfn_t mfn = _mfn(page_to_mfn(pg));
+    mfn_t mfn = page_to_mfn(pg);
     uint64_t *ptr;
 
     if ( !scrub_debug )
@@ -744,7 +750,7 @@ static void poison_one_page(struct page_info *pg)
 static void check_one_page(struct page_info *pg)
 {
 #ifdef CONFIG_SCRUB_DEBUG
-    mfn_t mfn = _mfn(page_to_mfn(pg));
+    mfn_t mfn = page_to_mfn(pg);
     const uint64_t *ptr;
     unsigned int i;
 
@@ -992,7 +998,8 @@ static struct page_info *alloc_heap_pages(
         /* Ensure cache and RAM are consistent for platforms where the
          * guest can control its own visibility of/through the cache. */
-        flush_page_to_ram(page_to_mfn(&pg[i]), !(memflags & MEMF_no_icache_flush));
+        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
+                          !(memflags & MEMF_no_icache_flush));
     }
 
     spin_unlock(&heap_lock);
@@ -1344,7 +1351,8 @@ bool scrub_free_pages(void)
 static void free_heap_pages(
     struct page_info *pg, unsigned int order, bool need_scrub)
 {
-    unsigned long mask, mfn = page_to_mfn(pg);
+    unsigned long mask;
+    mfn_t mfn = page_to_mfn(pg);
     unsigned int i, node = phys_to_nid(page_to_maddr(pg)), tainted = 0;
     unsigned int zone = page_to_zone(pg);
 
@@ -1381,7 +1389,7 @@ static void free_heap_pages(
 
         /* This page is not a guest frame any more. */
         page_set_owner(&pg[i], NULL); /* set_gpfn_from_mfn snoops pg owner */
-        set_gpfn_from_mfn(mfn + i, INVALID_M2P_ENTRY);
+        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
 
         if ( need_scrub )
         {
@@ -1409,12 +1417,12 @@ static void free_heap_pages(
     {
         mask = 1UL << order;
 
-        if ( (page_to_mfn(pg) & mask) )
+        if ( (mfn_x(page_to_mfn(pg)) & mask) )
         {
             struct page_info *predecessor = pg - mask;
 
             /* Merge with predecessor block? */
-            if ( !mfn_valid(_mfn(page_to_mfn(predecessor))) ||
+            if ( !mfn_valid(page_to_mfn(predecessor)) ||
                  !page_state_is(predecessor, free) ||
                  (PFN_ORDER(predecessor) != order) ||
                  (phys_to_nid(page_to_maddr(predecessor)) != node) )
@@ -1437,7 +1445,7 @@ static void free_heap_pages(
             struct page_info *successor = pg + mask;
 
             /* Merge with successor block? */
-            if ( !mfn_valid(_mfn(page_to_mfn(successor))) ||
+            if ( !mfn_valid(page_to_mfn(successor)) ||
                  !page_state_is(successor, free) ||
                  (PFN_ORDER(successor) != order) ||
                  (phys_to_nid(page_to_maddr(successor)) != node) )
@@ -1470,7 +1478,7 @@ static unsigned long mark_page_offline(struct page_info *pg, int broken)
 {
     unsigned long nx, x, y = pg->count_info;
 
-    ASSERT(page_is_ram_type(page_to_mfn(pg), RAM_TYPE_CONVENTIONAL));
+    ASSERT(page_is_ram_type(mfn_x(page_to_mfn(pg)), RAM_TYPE_CONVENTIONAL));
     ASSERT(spin_is_locked(&heap_lock));
 
     do {
@@ -1533,7 +1541,7 @@ int offline_page(unsigned long mfn, int broken, uint32_t *status)
     }
 
     *status = 0;
-    pg = mfn_to_page(mfn);
+    pg = mfn_to_page(_mfn(mfn));
 
     if ( is_xen_fixed_mfn(mfn) )
     {
@@ -1640,7 +1648,7 @@ unsigned int online_page(unsigned long mfn, uint32_t *status)
         return -EINVAL;
     }
 
-    pg = mfn_to_page(mfn);
+    pg = mfn_to_page(_mfn(mfn));
 
     spin_lock(&heap_lock);
 
@@ -1694,7 +1702,7 @@ int query_page_offline(unsigned long mfn, uint32_t *status)
     *status = 0;
     spin_lock(&heap_lock);
 
-    pg = mfn_to_page(mfn);
+    pg = mfn_to_page(_mfn(mfn));
 
     if ( page_state_is(pg, offlining) )
         *status |= PG_OFFLINE_STATUS_OFFLINE_PENDING;
@@ -1726,7 +1734,7 @@ static void init_heap_pages(
      * Update first_valid_mfn to ensure those regions are covered.
      */
     spin_lock(&heap_lock);
-    first_valid_mfn = min_t(unsigned long, page_to_mfn(pg), first_valid_mfn);
+    first_valid_mfn = mfn_min(page_to_mfn(pg), first_valid_mfn);
     spin_unlock(&heap_lock);
 
     for ( i = 0; i < nr_pages; i++ )
@@ -1735,14 +1743,14 @@ static void init_heap_pages(
 
         if ( unlikely(!avail[nid]) )
         {
-            unsigned long s = page_to_mfn(pg + i);
-            unsigned long e = page_to_mfn(pg + nr_pages - 1) + 1;
+            unsigned long s = mfn_x(page_to_mfn(pg + i));
+            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
             bool_t use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
                               !(s & ((1UL << MAX_ORDER) - 1)) &&
                               (find_first_set_bit(e) <= find_first_set_bit(s));
             unsigned long n;
 
-            n = init_node_heap(nid, page_to_mfn(pg+i), nr_pages - i,
+            n = init_node_heap(nid, mfn_x(page_to_mfn(pg+i)), nr_pages - i,
                                &use_tail);
             BUG_ON(i + n > nr_pages);
             if ( n && !use_tail )
@@ -1796,7 +1804,7 @@ void __init end_boot_allocator(void)
         if ( (r->s < r->e) &&
              (phys_to_nid(pfn_to_paddr(r->s)) == cpu_to_node(0)) )
         {
-            init_heap_pages(mfn_to_page(r->s), r->e - r->s);
+            init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
             r->e = r->s;
             break;
         }
@@ -1805,7 +1813,7 @@ void __init end_boot_allocator(void)
     {
         struct bootmem_region *r = &bootmem_region_list[i];
         if ( r->s < r->e )
-            init_heap_pages(mfn_to_page(r->s), r->e - r->s);
+            init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
     }
     nr_bootmem_regions = 0;
     init_heap_pages(virt_to_page(bootmem_region_list), 1);
@@ -1862,7 +1870,7 @@ static void __init smp_scrub_heap_pages(void *data)
 
     for ( mfn = start; mfn < end; mfn++ )
     {
-        pg = mfn_to_page(mfn);
+        pg = mfn_to_page(_mfn(mfn));
 
         /* Check the mfn is valid and page is free. */
         if ( !mfn_valid(_mfn(mfn)) || !page_state_is(pg, free) )
@@ -1915,7 +1923,7 @@ static void __init scrub_heap_pages(void)
         if ( !node_spanned_pages(i) )
             continue;
         /* Calculate Node memory start and end address. */
-        start = max(node_start_pfn(i), first_valid_mfn);
+        start = max(node_start_pfn(i), mfn_x(first_valid_mfn));
         end = min(node_start_pfn(i) + node_spanned_pages(i), max_page);
         /* Just in case NODE has 1 page and starts below first_valid_mfn. */
         end = max(end, start);
@@ -2159,17 +2167,17 @@ void free_xenheap_pages(void *v, unsigned int order)
 
 void init_domheap_pages(paddr_t ps, paddr_t pe)
 {
-    unsigned long smfn, emfn;
+    mfn_t smfn, emfn;
 
     ASSERT(!in_irq());
 
-    smfn = round_pgup(ps) >> PAGE_SHIFT;
-    emfn = round_pgdown(pe) >> PAGE_SHIFT;
+    smfn = maddr_to_mfn(round_pgup(ps));
+    emfn = maddr_to_mfn(round_pgdown(pe));
 
-    if ( emfn <= smfn )
+    if ( mfn_x(emfn) <= mfn_x(smfn) )
         return;
 
-    init_heap_pages(mfn_to_page(smfn), emfn - smfn);
+    init_heap_pages(mfn_to_page(smfn), mfn_x(emfn) - mfn_x(smfn));
 }
 
diff --git a/xen/include/asm-arm/numa.h b/xen/include/asm-arm/numa.h
index 7e0b69413d..490d1f31aa 100644
--- a/xen/include/asm-arm/numa.h
+++ b/xen/include/asm-arm/numa.h
@@ -1,6 +1,8 @@
 #ifndef __ARCH_ARM_NUMA_H
 #define __ARCH_ARM_NUMA_H
 
+#include <xen/mm.h>
+
 typedef u8 nodeid_t;
 
 /* Fake one node for now. See also node_online_map. */
@@ -16,11 +18,11 @@ static inline __attribute__((pure)) nodeid_t phys_to_nid(paddr_t addr)
  * TODO: make first_valid_mfn static when NUMA is supported on Arm, this
  * is required because the dummy helpers are using it.
  */
-extern unsigned long first_valid_mfn;
+extern mfn_t first_valid_mfn;
 
 /* XXX: implement NUMA support */
-#define node_spanned_pages(nid) (max_page - first_valid_mfn)
-#define node_start_pfn(nid) (first_valid_mfn)
+#define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn))
+#define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
 
 static inline unsigned int arch_get_dma_bitsize(void)