
xen/arm: correct flush_tlb_mask behaviour

Message ID 1389286683-11656-1-git-send-email-julien.grall@linaro.org
State Accepted
Commit 8a9ab685bef64a50f58d99a4e8728b770289e9ef

Commit Message

Julien Grall Jan. 9, 2014, 4:58 p.m. UTC
On ARM, flush_tlb_mask is used in the common code:
    - alloc_heap_pages: the flush is only called if the newly allocated
    page was used by a domain before, so we only need to flush non-secure
    non-hyp TLBs inner-shareable.
    - common/grant-table.c: every call to flush_tlb_mask is for the
    current domain, so a TLB flush by current VMID inner-shareable is
    enough.

The current code only flushes the hypervisor TLB on the current PCPU. For
now, flush non-secure non-hyp TLBs on every PCPU.
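
To make the alloc_heap_pages point concrete, here is a rough sketch of the
common-code call site (simplified, with a hypothetical take_pages_from_heap()
helper; not the verbatim xen/common/page_alloc.c code): a page freed by one
domain may still have stale guest TLB entries on any PCPU, so the flush must
cover guest (non-hyp) translations everywhere, while hypervisor translations
need not be touched.

    /*
     * Sketch only: take_pages_from_heap() is a made-up helper standing in
     * for the real allocation logic.  The page's tlbflush timestamp tells
     * us whether a domain used it recently enough that stale guest TLB
     * entries may survive on some PCPUs.
     */
    static struct page_info *alloc_heap_pages_sketch(unsigned int order)
    {
        struct page_info *pg = take_pages_from_heap(order);
        cpumask_t mask;

        cpumask_copy(&mask, &cpu_online_map);
        tlbflush_filter(mask, pg->tlbflush_timestamp);

        if ( !cpumask_empty(&mask) )
            flush_tlb_mask(&mask); /* must hit guest TLBs on all these PCPUs */

        return pg;
    }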

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    This patch is a bug fix for Xen 4.4. We have been safe so far only
because create_p2m_entries already performs a flush when the previous
mapping was valid.

For Xen 4.5, we should optimize the function to avoid flushing every VMID
each time a new page is allocated.
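
As a sketch of that direction (arm32 flavour, hypothetical function name):
when the caller only needs the current domain's mappings invalidated, as in
the grant-table case, a flush by current VMID (TLBIALLIS) should suffice
instead of invalidating every VMID with TLBIALLNSNHIS:

    /*
     * Hypothetical per-VMID variant.  TLBIALLIS executed from Hyp mode
     * applies to the guest translation regime for the current VMID,
     * inner-shareable, so other domains' TLB entries survive.
     */
    static inline void flush_guest_tlb_current_vmid(void)
    {
        dsb();

        WRITE_CP32((uint32_t) 0, TLBIALLIS);

        dsb();
        isb();
    }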
---
 xen/arch/arm/smp.c                   |    3 ++-
 xen/include/asm-arm/arm32/flushtlb.h |   11 +++++++++++
 xen/include/asm-arm/arm64/flushtlb.h |   11 +++++++++++
 3 files changed, 24 insertions(+), 1 deletion(-)

Comments

Ian Campbell Jan. 13, 2014, 5:16 p.m. UTC | #1
On Thu, 2014-01-09 at 16:58 +0000, Julien Grall wrote:
> On ARM, flush_tlb_mask is used in the common code:
>     - alloc_heap_pages: the flush is only called if the newly allocated
>     page was used by a domain before, so we only need to flush non-secure
>     non-hyp TLBs inner-shareable.
>     - common/grant-table.c: every call to flush_tlb_mask is for the
>     current domain, so a TLB flush by current VMID inner-shareable is
>     enough.
> 
> The current code only flushes the hypervisor TLB on the current PCPU. For
> now, flush non-secure non-hyp TLBs on every PCPU.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>     This patch is a bug fix for Xen 4.4. We have been safe so far only
> because create_p2m_entries already performs a flush when the previous
> mapping was valid.
> 
> For Xen 4.5, we should optimize the function to avoid flushing every VMID
> each time a new page is allocated.

Right, this whole area is ripe for rationalisation and optimisation.

Ian.
Ian Campbell Jan. 15, 2014, 2:13 p.m. UTC | #2
On Mon, 2014-01-13 at 17:16 +0000, Ian Campbell wrote:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Applied.

Patch

diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c
index 4042db5..30203b8 100644
--- a/xen/arch/arm/smp.c
+++ b/xen/arch/arm/smp.c
@@ -4,11 +4,12 @@ 
 #include <asm/cpregs.h>
 #include <asm/page.h>
 #include <asm/gic.h>
+#include <asm/flushtlb.h>
 
 void flush_tlb_mask(const cpumask_t *mask)
 {
     /* No need to IPI other processors on ARM, the processor takes care of it. */
-    flush_xen_data_tlb();
+    flush_tlb_all();
 }
 
 void smp_send_event_check_mask(const cpumask_t *mask)
diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
index ab166f3..7183a07 100644
--- a/xen/include/asm-arm/arm32/flushtlb.h
+++ b/xen/include/asm-arm/arm32/flushtlb.h
@@ -34,6 +34,17 @@  static inline void flush_tlb_all_local(void)
     isb();
 }
 
+/* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
+static inline void flush_tlb_all(void)
+{
+    dsb();
+
+    WRITE_CP32((uint32_t) 0, TLBIALLNSNHIS);
+
+    dsb();
+    isb();
+}
+
 #endif /* __ASM_ARM_ARM32_FLUSHTLB_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
index 9ce79a8..a73df92 100644
--- a/xen/include/asm-arm/arm64/flushtlb.h
+++ b/xen/include/asm-arm/arm64/flushtlb.h
@@ -34,6 +34,17 @@  static inline void flush_tlb_all_local(void)
         : : : "memory");
 }
 
+/* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
+static inline void flush_tlb_all(void)
+{
+    asm volatile(
+        "dsb sy;"
+        "tlbi alle1is;"
+        "dsb sy;"
+        "isb;"
+        : : : "memory");
+}
+
 #endif /* __ASM_ARM_ARM64_FLUSHTLB_H__ */
 /*
  * Local variables:
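
For reference, an annotated (editorial) copy of the arm64 flush_tlb_all()
added above, spelling out what each step is for; the comments are not part
of the patch:

    static inline void flush_tlb_all(void)
    {
        asm volatile(
            "dsb sy;"        /* make prior page-table updates visible first */
            "tlbi alle1is;"  /* invalidate all stage 1 and stage 2 EL1
                                entries, for every VMID, across the
                                inner-shareable domain */
            "dsb sy;"        /* wait for the invalidation to complete */
            "isb;"           /* resynchronise the local pipeline */
            : : : "memory");
    }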