From patchwork Thu Sep 21 12:40:22 2017
X-Patchwork-Submitter: Julien Grall <julien.grall@arm.com>
X-Patchwork-Id: 113253
From: Julien Grall <julien.grall@arm.com>
To: xen-devel@lists.xen.org
Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich
Date: Thu, 21 Sep 2017 13:40:22 +0100
Message-Id: <20170921124035.2410-4-julien.grall@arm.com>
In-Reply-To: <20170921124035.2410-1-julien.grall@arm.com>
References: <20170921124035.2410-1-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
Subject: [Xen-devel] [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments
List-Id: Xen developer discussion

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper
Reviewed-by: Wei Liu
Reviewed-by: George Dunlap
---
Cc: George Dunlap
Cc: Jan Beulich
Cc: Andrew Cooper

Changes in v2:
    - Add Andrew's acked-by
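For reference, every hunk below applies the same mechanical transformation: block comments move from the one-line-opening form to the multi-line form used across the Xen tree, with /* and */ on their own lines, an aligned * on each continuation line, and a trailing full stop. A minimal before/after sketch of the shape (an illustration, not a hunk from this patch):

    /* Old style: text starts on the opening line and the
     * closing marker shares the last line of text. */

    /*
     * New style: the delimiters sit on their own lines,
     * with aligned stars in between, and the last sentence
     * ends with a full stop.
     */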
---
 xen/arch/x86/mm/p2m-pod.c | 154 ++++++++++++++++++++++++++++++----------------
 1 file changed, 102 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 1f07441259..6f045081b5 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -155,8 +155,10 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
 
         BUG_ON( page_list_empty(&p2m->pod.super) );
 
-        /* Break up a superpage to make single pages. NB count doesn't
-         * need to be adjusted. */
+        /*
+         * Break up a superpage to make single pages. NB count doesn't
+         * need to be adjusted.
+         */
         p = page_list_remove_head(&p2m->pod.super);
         mfn = mfn_x(page_to_mfn(p));
 
@@ -242,8 +244,10 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
     }
 
     /* Decreasing the target */
-    /* We hold the pod lock here, so we don't need to worry about
-     * cache disappearing under our feet. */
+    /*
+     * We hold the pod lock here, so we don't need to worry about
+     * cache disappearing under our feet.
+     */
     while ( pod_target < p2m->pod.count )
     {
         struct page_info * page;
@@ -345,15 +349,19 @@ p2m_pod_set_mem_target(struct domain *d, unsigned long target)
     if ( d->is_dying )
         goto out;
 
-    /* T' < B: Don't reduce the cache size; let the balloon driver
-     * take care of it. */
+    /*
+     * T' < B: Don't reduce the cache size; let the balloon driver
+     * take care of it.
+     */
     if ( target < d->tot_pages )
         goto out;
 
     pod_target = target - populated;
 
-    /* B < T': Set the cache size equal to # of outstanding entries,
-     * let the balloon driver fill in the rest. */
+    /*
+     * B < T': Set the cache size equal to # of outstanding entries,
+     * let the balloon driver fill in the rest.
+     */
     if ( populated > 0 && pod_target > p2m->pod.entry_count )
         pod_target = p2m->pod.entry_count;
 
@@ -491,7 +499,8 @@ static int
 p2m_pod_zero_check_superpage(struct p2m_domain *p2m,
                              unsigned long gfn);
 
-/* This function is needed for two reasons:
+/*
+ * This function is needed for two reasons:
  * + To properly handle clearing of PoD entries
  * + To "steal back" memory being freed for the PoD cache, rather than
  *   releasing it.
@@ -513,8 +522,10 @@ p2m_pod_decrease_reservation(struct domain *d,
     gfn_lock(p2m, gpfn, order);
     pod_lock(p2m);
 
-    /* If we don't have any outstanding PoD entries, let things take their
-     * course */
+    /*
+     * If we don't have any outstanding PoD entries, let things take their
+     * course.
+     */
     if ( p2m->pod.entry_count == 0 )
         goto out_unlock;
 
@@ -550,8 +561,10 @@ p2m_pod_decrease_reservation(struct domain *d,
 
     if ( !nonpod )
     {
-        /* All PoD: Mark the whole region invalid and tell caller
-         * we're done. */
+        /*
+         * All PoD: Mark the whole region invalid and tell caller
+         * we're done.
+         */
         p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
                       p2m->default_access);
         p2m->pod.entry_count-=(1<<order);
 [...]
      * - order >= SUPERPAGE_ORDER (the loop below will take care of this)
      * - not all of the pages were RAM (now knowing order < SUPERPAGE_ORDER)
      */
-    if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1 << order) &&
+    if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1UL << order) &&
          p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES - 1)) )
     {
-        pod = 1 << order;
+        pod = 1UL << order;
         ram = nonpod = 0;
         ASSERT(steal_for_cache == (p2m->pod.entry_count > p2m->pod.count));
     }
 
-    /* Process as long as:
+    /*
+     * Process as long as:
      * + There are PoD entries to handle, or
      * + There is ram left, and we want to steal it
      */
@@ -631,8 +645,10 @@ p2m_pod_decrease_reservation(struct domain *d,
         }
     }
 
-    /* If there are no more non-PoD entries, tell decrease_reservation() that
-     * there's nothing left to do. */
+    /*
+     * If there are no more non-PoD entries, tell decrease_reservation() that
+     * there's nothing left to do.
+     */
     if ( nonpod == 0 )
         ret = 1;
 
@@ -658,9 +674,11 @@ void p2m_pod_dump_data(struct domain *d)
 }
 
 
-/* Search for all-zero superpages to be reclaimed as superpages for the
+/*
+ * Search for all-zero superpages to be reclaimed as superpages for the
  * PoD cache. Must be called w/ pod lock held, must lock the superpage
- * in the p2m */
+ * in the p2m.
+ */
 static int
 p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
 {
@@ -682,12 +700,16 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     if ( paging_mode_shadow(d) )
         max_ref++;
 
-    /* NOTE: this is why we don't enforce deadlock constraints between p2m
-     * and pod locks */
+    /*
+     * NOTE: this is why we don't enforce deadlock constraints between p2m
+     * and pod locks.
+     */
     gfn_lock(p2m, gfn, SUPERPAGE_ORDER);
 
-    /* Look up the mfns, checking to make sure they're the same mfn
-     * and aligned, and mapping them. */
+    /*
+     * Look up the mfns, checking to make sure they're the same mfn
+     * and aligned, and mapping them.
+     */
     for ( i = 0; i < SUPERPAGE_PAGES; i += n )
     {
         p2m_access_t a;
@@ -697,7 +719,8 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
 
         mfn = p2m->get_entry(p2m, gfn + i, &type, &a, 0, &cur_order, NULL);
 
-        /* Conditions that must be met for superpage-superpage:
+        /*
+         * Conditions that must be met for superpage-superpage:
          * + All gfns are ram types
          * + All gfns have the same type
         * + All of the mfns are allocated to a domain
@@ -751,9 +774,11 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
                   p2m_populate_on_demand, p2m->default_access);
     p2m_tlb_flush_sync(p2m);
 
-    /* Make none of the MFNs are used elsewhere... for example, mapped
+    /*
+     * Make none of the MFNs are used elsewhere... for example, mapped
      * via the grant table interface, or by qemu. Allow one refcount for
-     * being allocated to the domain. */
+     * being allocated to the domain.
+     */
     for ( i=0; i < SUPERPAGE_PAGES; i++ )
     {
         mfn = _mfn(mfn_x(mfn0) + i);
@@ -797,8 +822,10 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
         __trace_var(TRC_MEM_POD_ZERO_RECLAIM, 0, sizeof(t), &t);
     }
 
-    /* Finally! We've passed all the checks, and can add the mfn superpage
-     * back on the PoD cache, and account for the new p2m PoD entries */
+    /*
+     * Finally! We've passed all the checks, and can add the mfn superpage
+     * back on the PoD cache, and account for the new p2m PoD entries.
+     */
     p2m_pod_cache_add(p2m, mfn_to_page(mfn0), PAGE_ORDER_2M);
     p2m->pod.entry_count += SUPERPAGE_PAGES;
 
@@ -833,8 +860,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
     {
         p2m_access_t a;
         mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a, 0, NULL, NULL);
-        /* If this is ram, and not a pagetable or from the xen heap, and probably not mapped
-           elsewhere, map it; otherwise, skip. */
+        /*
+         * If this is ram, and not a pagetable or from the xen heap, and
+         * probably not mapped elsewhere, map it; otherwise, skip.
+         */
         if ( p2m_is_ram(types[i])
              && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 )
             && ( (mfn_to_page(mfns[i])->count_info & (PGC_page_table|PGC_xen_heap)) == 0 )
@@ -844,8 +873,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
             map[i] = NULL;
     }
 
-    /* Then, go through and check for zeroed pages, removing write permission
-     * for those with zeroes. */
+    /*
+     * Then, go through and check for zeroed pages, removing write permission
+     * for those with zeroes.
+     */
     for ( i=0; i<count; i++ )
     {
 [...]
                       p2m->default_access);
 
-        /* See if the page was successfully unmapped. (Allow one refcount
-         * for being allocated to a domain.) */
+        /*
+         * See if the page was successfully unmapped. (Allow one refcount
+         * for being allocated to a domain.)
+         */
         if ( (mfn_to_page(mfns[i])->count_info & PGC_count_mask) > 1 )
         {
             unmap_domain_page(map[i]);
@@ -895,8 +928,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
 
         unmap_domain_page(map[i]);
 
-        /* See comment in p2m_pod_zero_check_superpage() re gnttab
-         * check timing. */
+        /*
+         * See comment in p2m_pod_zero_check_superpage() re gnttab
+         * check timing.
+         */
         if ( j < PAGE_SIZE/sizeof(*map[i]) )
         {
             p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
@@ -944,9 +979,11 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
     limit = (start > POD_SWEEP_LIMIT) ? (start - POD_SWEEP_LIMIT) : 0;
 
     /* FIXME: Figure out how to avoid superpages */
-    /* NOTE: Promote to globally locking the p2m. This will get complicated
+    /*
+     * NOTE: Promote to globally locking the p2m. This will get complicated
      * in a fine-grained scenario. If we lock each gfn individually we must be
-     * careful about spinlock recursion limits and POD_SWEEP_STRIDE. */
+     * careful about spinlock recursion limits and POD_SWEEP_STRIDE.
+     */
     p2m_lock(p2m);
     for ( i=p2m->pod.reclaim_single; i > 0 ; i-- )
     {
@@ -963,11 +1000,13 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
                 j = 0;
             }
         }
 
-        /* Stop if we're past our limit and we have found *something*.
+        /*
+         * Stop if we're past our limit and we have found *something*.
          *
          * NB that this is a zero-sum game; we're increasing our cache size
         * by re-increasing our 'debt'. Since we hold the pod lock,
-         * (entry_count - count) must remain the same. */
+         * (entry_count - count) must remain the same.
+         */
         if ( i < limit && (p2m->pod.count > 0 || hypercall_preempt_check()) )
             break;
     }
@@ -1045,20 +1084,25 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
     pod_lock(p2m);
 
-    /* This check is done with the pod lock held. This will make sure that
+    /*
+     * This check is done with the pod lock held. This will make sure that
      * even if d->is_dying changes under our feet, p2m_pod_empty_cache()
-     * won't start until we're done. */
+     * won't start until we're done.
+     */
     if ( unlikely(d->is_dying) )
         goto out_fail;
 
-    /* Because PoD does not have cache list for 1GB pages, it has to remap
-     * 1GB region to 2MB chunks for a retry. */
+    /*
+     * Because PoD does not have cache list for 1GB pages, it has to remap
+     * 1GB region to 2MB chunks for a retry.
+     */
     if ( order == PAGE_ORDER_1G )
     {
         pod_unlock(p2m);
         gfn_aligned = (gfn >> order) << order;
-        /* Note that we are supposed to call p2m_set_entry() 512 times to
+        /*
+         * Note that we are supposed to call p2m_set_entry() 512 times to
          * split 1GB into 512 2MB pages here. But We only do once here because
          * p2m_set_entry() should automatically shatter the 1GB page into
         * 512 2MB pages. The rest of 511 calls are unnecessary.
@@ -1075,8 +1119,10 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
 
     if ( p2m->pod.entry_count > p2m->pod.count )
         pod_eager_reclaim(p2m);
 
-    /* Only sweep if we're actually out of memory. Doing anything else
-     * causes unnecessary time and fragmentation of superpages in the p2m. */
+    /*
+     * Only sweep if we're actually out of memory. Doing anything else
+     * causes unnecessary time and fragmentation of superpages in the p2m.
+     */
     if ( p2m->pod.count == 0 )
         p2m_pod_emergency_sweep(p2m);
 
@@ -1088,8 +1134,10 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
     if ( gfn > p2m->pod.max_guest )
         p2m->pod.max_guest = gfn;
 
-    /* Get a page f/ the cache. A NULL return value indicates that the
-     * 2-meg range should be marked singleton PoD, and retried */
+    /*
+     * Get a page f/ the cache. A NULL return value indicates that the
+     * 2-meg range should be marked singleton PoD, and retried.
+     */
     if ( (p = p2m_pod_cache_get(p2m, order)) == NULL )
         goto remap_and_retry;
 
@@ -1146,8 +1194,10 @@ remap_and_retry:
     pod_unlock(p2m);
 
     /* Remap this 2-meg region in singleton chunks */
-    /* NOTE: In a p2m fine-grained lock scenario this might
-     * need promoting the gfn lock from gfn->2M superpage */
+    /*
+     * NOTE: In a p2m fine-grained lock scenario this might
+     * need promoting the gfn lock from gfn->2M superpage.
+     */
     gfn_aligned = (gfn>>order)<<order;
 [...]
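A note for readers of the archive: besides the comment restyling, the hunks above in p2m_pod_decrease_reservation() also appear to switch (1 << order) to (1UL << order). A standalone sketch of why the UL suffix matters when the result feeds unsigned long arithmetic (my illustration, not code from the patch):

    #include <stdio.h>

    int main(void)
    {
        unsigned int order = 40;  /* hypothetical order, larger than PoD uses */

        /* The literal 1 has type int, so (1 << order) is computed as an int
         * shift: on LP64 targets this is undefined behaviour once order
         * reaches 31, and the value is truncated before it is widened to
         * unsigned long. */

        /* (1UL << order) performs the shift in unsigned long (64 bits on
         * x86_64), so the result stays well-defined for larger orders. */
        unsigned long pages = 1UL << order;

        printf("order %u -> %lu pages\n", order, pages);
        return 0;
    }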