From patchwork Wed Mar 8 18:06:02 2017
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 95057
From: Julien Grall <julien.grall@arm.com>
To: xen-devel@lists.xen.org
Cc: andre.przywara@arm.com, Julien Grall <julien.grall@arm.com>, sstabellini@kernel.org, punit.agrawal@arm.com
Date: Wed, 8 Mar 2017 18:06:02 +0000
Message-Id: <20170308180602.24430-4-julien.grall@arm.com>
In-Reply-To: <20170308180602.24430-1-julien.grall@arm.com>
References: <20170308180602.24430-1-julien.grall@arm.com>
Subject: [Xen-devel] [PATCH 3/3] xen/arm: p2m: Perform local TLB invalidation on vCPU migration
List-Id: Xen developer discussion
The ARM architecture allows an OS to have per-CPU page tables, as it
guarantees that TLB entries never migrate from one CPU to another.

This works fine until it is done in a guest. Consider the following
scenario:
    - vcpu-0 maps P to V
    - vcpu-1 maps P' to V

If both run on the same physical CPU, vcpu-1 can hit TLB entries created
by vcpu-0's accesses and access the wrong physical page.

The solution to this is to keep a per-p2m map of which vCPU last ran on
each pCPU, and to invalidate the local TLBs when two different vCPUs of
the same VM run on the same pCPU.

Unfortunately it is not possible to allocate a per-cpu variable on the
fly, so for now the array is sized to NR_CPUS. This is fine because
there is still spare space in struct domain. We may want to add a helper
to allocate per-cpu variables in the future.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
This patch is a candidate for backport to Xen 4.8, 4.7 and 4.6.
---
 xen/arch/arm/p2m.c        | 24 ++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h |  3 +++
 2 files changed, 27 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1fc6ca3bb2..626376090d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -130,6 +130,7 @@ void p2m_restore_state(struct vcpu *n)
 {
     register_t hcr;
     struct p2m_domain *p2m = &n->domain->arch.p2m;
+    uint8_t *last_vcpu_ran;
 
     if ( is_idle_vcpu(n) )
         return;
@@ -149,6 +150,17 @@ void p2m_restore_state(struct vcpu *n)
 
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
+
+    last_vcpu_ran = &p2m->last_vcpu_ran[smp_processor_id()];
+
+    /*
+     * Flush local TLB for the domain to prevent wrong TLB translation
+     * when running multiple vCPUs of the same domain on a single pCPU.
+     */
+    if ( *last_vcpu_ran != INVALID_VCPU_ID && *last_vcpu_ran != n->vcpu_id )
+        flush_tlb_local();
+
+    *last_vcpu_ran = n->vcpu_id;
 }
 
 static void p2m_flush_tlb(struct p2m_domain *p2m)
@@ -1247,6 +1259,7 @@ int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     int rc = 0;
+    unsigned int cpu;
 
     rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
@@ -1275,6 +1288,17 @@ int p2m_init(struct domain *d)
 
     rc = p2m_alloc_table(d);
 
+    /*
+     * Make sure that the type chosen is able to store any vCPU ID
+     * between 0 and the maximum number of virtual CPUs supported, as
+     * well as INVALID_VCPU_ID.
+     */
+    BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < MAX_VIRT_CPUS);
+    BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < INVALID_VCPU_ID);
+
+    for_each_possible_cpu(cpu)
+        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
+
     return rc;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 0899523084..18c57f936e 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -96,6 +96,9 @@ struct p2m_domain {
 
     /* back pointer to domain */
     struct domain *domain;
+
+    /* Keep track of which vCPU of this p2m's domain last ran on each pCPU. */
+    uint8_t last_vcpu_ran[NR_CPUS];
 };
 
 /*