From patchwork Thu Sep 15 11:28:18 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 76280
From: Julien Grall <julien.grall@arm.com>
To: xen-devel@lists.xen.org
Date: Thu, 15 Sep 2016 12:28:18 +0100
Message-Id: <1473938919-31976-3-git-send-email-julien.grall@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1473938919-31976-1-git-send-email-julien.grall@arm.com>
References: <1473938919-31976-1-git-send-email-julien.grall@arm.com>
Cc: proskurin@sec.in.tum.de, Julien Grall <julien.grall@arm.com>, sstabellini@kernel.org, steve.capper@arm.com, wei.chen@linaro.org
Subject: [Xen-devel] [for-4.8][PATCH v2 02/23] xen/arm: p2m: Store in p2m_domain whether we need to clean the entry
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
Each entry in the page table has to be cleaned when the IOMMU does not
support coherent walk. Rather than querying every time the page table is
updated, it is possible to do it only once when the p2m is initialized.
This is because this value can never change; Xen would be in big trouble
otherwise.

With this change, the initialization of the IOMMU for a given domain has
to be done earlier, in order to know whether the page table entries need
to be cleaned. It is fine to move the call earlier because it has no
dependency.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
Changes in v2:
    - Fix typos in the commit message
    - Add Stefano's reviewed-by
    - Use bool instead of bool_t
---
 xen/arch/arm/domain.c     |  8 +++++---
 xen/arch/arm/p2m.c        | 47 ++++++++++++++++++++++-------------------------
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 30 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 20bb2ba..48f04c8 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -555,6 +555,11 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
         return 0;
 
     ASSERT(config != NULL);
+
+    /* p2m_init relies on some value initialized by the IOMMU subsystem */
+    if ( (rc = iommu_domain_init(d)) != 0 )
+        goto fail;
+
     if ( (rc = p2m_init(d)) != 0 )
         goto fail;
 
@@ -637,9 +642,6 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     if ( is_hardware_domain(d) && (rc = domain_vuart_init(d)) )
         goto fail;
 
-    if ( (rc = iommu_domain_init(d)) != 0 )
-        goto fail;
-
     return 0;
 
 fail:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b648a9d..f482cfd 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -416,7 +416,7 @@ static inline void p2m_remove_pte(lpae_t *p, bool_t flush_cache)
  * level_shift is the number of bits at the level we want to create.
  */
 static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
-                            int level_shift, bool_t flush_cache)
+                            int level_shift)
 {
     struct page_info *page;
     lpae_t *p;
@@ -466,7 +466,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
     else
         clear_page(p);
 
-    if ( flush_cache )
+    if ( p2m->clean_pte )
         clean_dcache_va_range(p, PAGE_SIZE);
 
     unmap_domain_page(p);
@@ -478,7 +478,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
     pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
                            p2m->default_access);
 
-    p2m_write_pte(entry, pte, flush_cache);
+    p2m_write_pte(entry, pte, p2m->clean_pte);
 
     return 0;
 }
@@ -661,12 +661,10 @@ static const paddr_t level_shifts[] =
 
 static int p2m_shatter_page(struct p2m_domain *p2m,
                             lpae_t *entry,
-                            unsigned int level,
-                            bool_t flush_cache)
+                            unsigned int level)
 {
     const paddr_t level_shift = level_shifts[level];
-    int rc = p2m_create_table(p2m, entry,
-                              level_shift - PAGE_SHIFT, flush_cache);
+    int rc = p2m_create_table(p2m, entry, level_shift - PAGE_SHIFT);
 
     if ( !rc )
     {
@@ -688,7 +686,6 @@ static int p2m_shatter_page(struct p2m_domain *p2m,
 static int apply_one_level(struct domain *d,
                            lpae_t *entry,
                            unsigned int level,
-                           bool_t flush_cache,
                            enum p2m_operation op,
                            paddr_t start_gpaddr,
                            paddr_t end_gpaddr,
@@ -727,7 +724,7 @@ static int apply_one_level(struct domain *d,
             if ( level < 3 )
                 pte.p2m.table = 0; /* Superpage entry */
 
-            p2m_write_pte(entry, pte, flush_cache);
+            p2m_write_pte(entry, pte, p2m->clean_pte);
 
             *flush |= p2m_valid(orig_pte);
 
@@ -762,7 +759,7 @@ static int apply_one_level(struct domain *d,
         /* Not present -> create table entry and descend */
         if ( !p2m_valid(orig_pte) )
         {
-            rc = p2m_create_table(p2m, entry, 0, flush_cache);
+            rc = p2m_create_table(p2m, entry, 0);
             if ( rc < 0 )
                 return rc;
             return P2M_ONE_DESCEND;
         }
@@ -772,7 +769,7 @@ static int apply_one_level(struct domain *d,
         if ( p2m_mapping(orig_pte) )
         {
             *flush = true;
-            rc = p2m_shatter_page(p2m, entry, level, flush_cache);
+            rc = p2m_shatter_page(p2m, entry, level);
             if ( rc < 0 )
                 return rc;
         } /* else: an existing table mapping -> descend */
@@ -809,7 +806,7 @@ static int apply_one_level(struct domain *d,
              * and descend.
              */
             *flush = true;
-            rc = p2m_shatter_page(p2m, entry, level, flush_cache);
+            rc = p2m_shatter_page(p2m, entry, level);
             if ( rc < 0 )
                 return rc;
@@ -835,7 +832,7 @@ static int apply_one_level(struct domain *d,
 
         *flush = true;
 
-        p2m_remove_pte(entry, flush_cache);
+        p2m_remove_pte(entry, p2m->clean_pte);
         p2m_mem_access_radix_set(p2m, paddr_to_pfn(*addr), p2m_access_rwx);
 
         *addr += level_size;
@@ -894,7 +891,7 @@ static int apply_one_level(struct domain *d,
             /* Shatter large pages as we descend */
             if ( p2m_mapping(orig_pte) )
             {
-                rc = p2m_shatter_page(p2m, entry, level, flush_cache);
+                rc = p2m_shatter_page(p2m, entry, level);
                 if ( rc < 0 )
                     return rc;
             } /* else: an existing table mapping -> descend */
@@ -912,7 +909,7 @@ static int apply_one_level(struct domain *d,
                 return rc;
 
             p2m_set_permission(&pte, pte.p2m.type, a);
-            p2m_write_pte(entry, pte, flush_cache);
+            p2m_write_pte(entry, pte, p2m->clean_pte);
         }
 
         *addr += level_size;
@@ -962,17 +959,9 @@ static int apply_p2m_changes(struct domain *d,
     const unsigned int preempt_count_limit = (op == MEMACCESS) ? 1 : 0x2000;
     const bool_t preempt = !is_idle_vcpu(current);
     bool_t flush = false;
-    bool_t flush_pt;
     PAGE_LIST_HEAD(free_pages);
     struct page_info *pg;
 
-    /*
-     * Some IOMMU don't support coherent PT walk. When the p2m is
-     * shared with the CPU, Xen has to make sure that the PT changes have
-     * reached the memory
-     */
-    flush_pt = iommu_enabled && !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
-
     p2m_write_lock(p2m);
 
     /* Static mapping. P2M_ROOT_PAGES > 1 are handled below */
@@ -1078,7 +1067,7 @@ static int apply_p2m_changes(struct domain *d,
             lpae_t old_entry = *entry;
 
             ret = apply_one_level(d, entry,
-                                  level, flush_pt, op,
+                                  level, op,
                                   start_gpaddr, end_gpaddr,
                                   &addr, &maddr, &flush,
                                   t, a);
@@ -1135,7 +1124,7 @@ static int apply_p2m_changes(struct domain *d,
 
                 page_list_del(pg, &p2m->pages);
 
-                p2m_remove_pte(entry, flush_pt);
+                p2m_remove_pte(entry, p2m->clean_pte);
 
                 p2m->stats.mappings[level - 1]--;
                 update_reference_mapping(pages[level - 1], old_entry, *entry);
@@ -1407,6 +1396,14 @@ int p2m_init(struct domain *d)
     p2m->mem_access_enabled = false;
     radix_tree_init(&p2m->mem_access_settings);
 
+    /*
+     * Some IOMMUs don't support coherent PT walk. When the p2m is
+     * shared with the CPU, Xen has to make sure that the PT changes have
+     * reached the memory
+     */
+    p2m->clean_pte = iommu_enabled &&
+        !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
+
     rc = p2m_alloc_table(d);
 
     return rc;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53c4d78..b9269e4 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -48,6 +48,9 @@ struct p2m_domain {
      * decrease. */
     gfn_t lowest_mapped_gfn;
 
+    /* Indicate if it is required to clean the cache when writing an entry */
+    bool clean_pte;
+
    /* Gather some statistics for information purposes only */
    struct {
        /* Number of mappings at each p2m tree level */