
[Xen-devel,2/3] xen: arm: flush TLB on all CPUs when setting or clearing fixmaps

Message ID 1396447971-27846-2-git-send-email-ian.campbell@citrix.com
State New

Commit Message

Ian Campbell April 2, 2014, 2:12 p.m. UTC
These mappings are global and therefore need flushing on all processors. Add
flush_xen_data_tlb_range_va which accomplishes this.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3: use dsb(sy) not dsb()
---
 xen/arch/arm/mm.c                |    4 ++--
 xen/include/asm-arm/arm32/page.h |   19 +++++++++++++++++++
 xen/include/asm-arm/arm64/page.h |   19 +++++++++++++++++++
 3 files changed, 40 insertions(+), 2 deletions(-)

Comments

Julien Grall April 2, 2014, 2:46 p.m. UTC | #1
On 04/02/2014 03:12 PM, Ian Campbell wrote:
> These mappings are global and therefore need flushing on all processors. Add
> flush_xen_data_tlb_range_va which accomplishes this.

I think remove_early_mappings should also use flush_xen_data_tlb_range_va.

I'm wondering why BOOT_FDT_VIRT_START is removed so late (i.e. in
discard_initial_modules). It can be done once the DTB is copied in setup_mm.

[..]

>  
> +/*
> + * Flush a range of VA's hypervisor mappings from the data TLB on all
> + * processors in the inner-shareable domain. This is not sufficient
> + * when changing code mappings or for self modifying code.
> + */
> +static inline void flush_xen_data_tlb_range_va(unsigned long va,
> +                                               unsigned long size)
> +{
> +    unsigned long end = va + size;
> +    dsb(sy); /* Ensure preceding are visible */
> +    while ( va < end ) {
> +        asm volatile(STORE_CP32(0, TLBIMVAHIS)
> +                     : : "r" (va) : "memory");
> +        va += PAGE_SIZE;
> +    }
> +    dsb(sy); /* Ensure completion of the TLB flush */
> +    isb();
> +}
> +

This loop is exactly the same on arm64 (except for the TLBIMVAHIS). Is it
possible to have common code like clean_xen_dcache_va_range?

Regards,
Ian Campbell April 2, 2014, 3:22 p.m. UTC | #2
On Wed, 2014-04-02 at 15:46 +0100, Julien Grall wrote:
> On 04/02/2014 03:12 PM, Ian Campbell wrote:
> > These mappings are global and therefore need flushing on all processors. Add
> > flush_xen_data_tlb_range_va which accomplishes this.
> 
> I think remove_early_mappings should also use flush_xen_data_tlb_range_va.

Probably. I'll make that change too.

> I'm wondering why BOOT_FDT_VIRT_START is removed so late (i.e. in
> discard_initial_modules). It can be done once the DTB is copied in setup_mm.

I'm not sure, I think it just seemed like a convenient place to do it.
Feel free to move it earlier.

> > +/*
> > + * Flush a range of VA's hypervisor mappings from the data TLB on all
> > + * processors in the inner-shareable domain. This is not sufficient
> > + * when changing code mappings or for self modifying code.
> > + */
> > +static inline void flush_xen_data_tlb_range_va(unsigned long va,
> > +                                               unsigned long size)
> > +{
> > +    unsigned long end = va + size;
> > +    dsb(sy); /* Ensure preceding are visible */
> > +    while ( va < end ) {
> > +        asm volatile(STORE_CP32(0, TLBIMVAHIS)
> > +                     : : "r" (va) : "memory");
> > +        va += PAGE_SIZE;
> > +    }
> > +    dsb(sy); /* Ensure completion of the TLB flush */
> > +    isb();
> > +}
> > +
> 
> This loop is exactly the same on arm64 (except for the TLBIMVAHIS). Is it
> possible to have common code like clean_xen_dcache_va_range?

flush_xen_data_tlb_range_va_local has the same issue. I'll fix them
both.

Ian.
Ian Campbell April 2, 2014, 4:54 p.m. UTC | #3
On Wed, 2014-04-02 at 16:22 +0100, Ian Campbell wrote:
> > This loop is exactly the same on arm64 (except for the TLBIMVAHIS). Is it
> > possible to have common code like clean_xen_dcache_va_range?
> 
> flush_xen_data_tlb_range_va_local has the same issue. I'll fix them
> both.

Actually there is a second difference which I didn't remember:
        : : "r" (va) : "memory");
vs:
     : : "r" (va>>PAGE_SHIFT) : "memory");

That's a lot harder to abstract away conveniently. I could have a
#define for the shift amount, I guess. I'll have a play and see if I can
come up with something satisfactory. If not, I'll fall back to keeping
the loops per subarch.

Ian.
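
For concreteness, the shift-amount #define Ian mentions might look something
like the sketch below. The name XEN_TLBI_VA_SHIFT is invented here purely for
illustration; it is not taken from the thread or from a committed patch.

/* arm32 (asm-arm/arm32/page.h): TLBIMVAHIS takes the full VA. */
#define XEN_TLBI_VA_SHIFT 0

/* arm64 (asm-arm/arm64/page.h): tlbi vae2is takes the page number,
 * i.e. VA[55:12], so the operand must be shifted down. */
#define XEN_TLBI_VA_SHIFT PAGE_SHIFT

A common loop could then pass (va >> XEN_TLBI_VA_SHIFT) to the per-subarch
flush instruction, leaving only the instruction itself to abstract.
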
Julien Grall April 2, 2014, 4:59 p.m. UTC | #4
On 04/02/2014 05:54 PM, Ian Campbell wrote:
> That's a lot harder to abstract away conveniently. I could have a
> #define for the shift amount, I guess. I'll have a play and see if I can
> come up with something satisfactory. If not I'll fall back to keeping
> the loops per subarch.

Why can't you put the whole inline assembly in a macro?
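
As a sketch of that suggestion (the macro name below is invented, and this is
not necessarily the code that was eventually committed), each subarch could
wrap its entire flush instruction, operand encoding included, in one macro,
leaving a single copy of the loop in common code:

/* arm32 (e.g. asm-arm/arm32/page.h); hypothetical macro name: */
#define __flush_xen_data_tlb_one_is(va) \
    asm volatile(STORE_CP32(0, TLBIMVAHIS) : : "r" (va) : "memory")

/* arm64 (e.g. asm-arm/arm64/page.h); the shift moves inside the macro: */
#define __flush_xen_data_tlb_one_is(va) \
    asm volatile("tlbi vae2is, %0;" : : "r" ((va) >> PAGE_SHIFT) : "memory")

/* Common code (e.g. asm-arm/page.h) then needs only one copy of the loop: */
static inline void flush_xen_data_tlb_range_va(unsigned long va,
                                               unsigned long size)
{
    unsigned long end = va + size;
    dsb(sy); /* Ensure preceding writes are visible */
    while ( va < end )
    {
        __flush_xen_data_tlb_one_is(va);
        va += PAGE_SIZE;
    }
    dsb(sy); /* Ensure completion of the TLB flush */
    isb();
}

The _local variants differ only in using the non-inner-shareable instructions
(TLBIMVAH on arm32, tlbi vae2 on arm64), so they could share the same loop via
a second pair of macros.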

Patch

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index d523f77..b966a5c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -215,7 +215,7 @@  void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes)
     pte.pt.table = 1; /* 4k mappings always have this bit set */
     pte.pt.xn = 1;
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va_local(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 /* Remove a mapping from a fixmap entry */
@@ -223,7 +223,7 @@  void clear_fixmap(unsigned map)
 {
     lpae_t pte = {0};
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va_local(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 #ifdef CONFIG_DOMAIN_PAGE
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index b0a2025..2b2bbe6 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -82,6 +82,25 @@  static inline void flush_xen_data_tlb_range_va_local(unsigned long va,
     isb();
 }
 
+/*
+ * Flush a range of VA's hypervisor mappings from the data TLB on all
+ * processors in the inner-shareable domain. This is not sufficient
+ * when changing code mappings or for self modifying code.
+ */
+static inline void flush_xen_data_tlb_range_va(unsigned long va,
+                                               unsigned long size)
+{
+    unsigned long end = va + size;
+    dsb(sy); /* Ensure preceding are visible */
+    while ( va < end ) {
+        asm volatile(STORE_CP32(0, TLBIMVAHIS)
+                     : : "r" (va) : "memory");
+        va += PAGE_SIZE;
+    }
+    dsb(sy); /* Ensure completion of the TLB flush */
+    isb();
+}
+
 /* Ask the MMU to translate a VA for us */
 static inline uint64_t __va_to_par(vaddr_t va)
 {
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 65332a3..fdc652c 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -74,6 +74,25 @@  static inline void flush_xen_data_tlb_range_va_local(unsigned long va,
     isb();
 }
 
+/*
+ * Flush a range of VA's hypervisor mappings from the data TLB of all
+ * processors in the inner-shareable domain. This is not sufficient
+ * when changing code mappings or for self modifying code.
+ */
+static inline void flush_xen_data_tlb_range_va(unsigned long va,
+                                               unsigned long size)
+{
+    unsigned long end = va + size;
+    dsb(sy); /* Ensure preceding are visible */
+    while ( va < end ) {
+        asm volatile("tlbi vae2is, %0;"
+                     : : "r" (va>>PAGE_SHIFT) : "memory");
+        va += PAGE_SIZE;
+    }
+    dsb(sy); /* Ensure completion of the TLB flush */
+    isb();
+}
+
 /* Ask the MMU to translate a VA for us */
 static inline uint64_t __va_to_par(vaddr_t va)
 {