[Xen-devel] xen: arm: flush TLB on all CPUs when setting or clearing fixmaps

Message ID 1389718513-1638-1-git-send-email-ian.campbell@citrix.com
State New

Commit Message

Ian Campbell Jan. 14, 2014, 4:55 p.m. UTC
These mappings are global and therefore need flushing on all processors. Add
flush_all_xen_data_tlb_range_va which accomplishes this.

Also update the comments in the other flush_xen_*_tlb functions to mention
that they operate on the local processor only.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mm.c                |    4 ++--
 xen/include/asm-arm/arm32/page.h |   36 ++++++++++++++++++++++++++++++------
 xen/include/asm-arm/arm64/page.h |   35 +++++++++++++++++++++++++++++------
 3 files changed, 61 insertions(+), 14 deletions(-)
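
The problem, in outline: the fixmap is part of Xen's global address
space, so once write_pte() retargets a fixmap entry, another CPU can
keep serving the old translation from its own TLB. A minimal sketch of
the race this closes (FIXMAP_MISC, the BUFFERABLE attribute index and
the two helper functions are illustrative assumptions, not code from
the patch):

    /* Runs on CPU0: retargets the global fixmap slot.  Before this
     * patch, set_fixmap() ended with a local-only TLB flush. */
    void cpu0_retarget(unsigned long new_mfn)
    {
        set_fixmap(FIXMAP_MISC, new_mfn, BUFFERABLE);
    }

    /* Runs concurrently on CPU1: with only a local flush on CPU0,
     * this can hit a stale TLB entry and read the old frame. */
    void cpu1_read(void)
    {
        uint32_t v = *(volatile uint32_t *)FIXMAP_ADDR(FIXMAP_MISC);
        (void)v;
    }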

Comments

Julien Grall Jan. 14, 2014, 6:55 p.m. UTC | #1
On 01/14/2014 04:55 PM, Ian Campbell wrote:
> These mappings are global and therefore need flushing on all processors. Add
> flush_all_xen_data_tlb_range_va which accomplishes this.

Can we make the names consistent across every *tlb* function call? In
flushtlb.h we use *_local for maintenance on the current processor only.
If the suffix is not present, the maintenance is done on every
processor.
Ian Campbell Jan. 15, 2014, 9:37 a.m. UTC | #2
On Tue, 2014-01-14 at 18:55 +0000, Julien Grall wrote:
> On 01/14/2014 04:55 PM, Ian Campbell wrote:
> > These mappings are global and therefore need flushing on all processors. Add
> > flush_all_xen_data_tlb_range_va which accomplishes this.
> 
> Can we make the names consistent across every *tlb* function call? In
> flushtlb.h we use *_local for maintenance on the current processor only.
> If the suffix is not present, the maintenance is done on every
> processor.

I was trying to avoid a massive renaming of the existing flush_xen_*. I
suppose I should just go ahead and do it.

Ian.
Julien Grall Jan. 15, 2014, 1:50 p.m. UTC | #3
On 01/15/2014 09:37 AM, Ian Campbell wrote:
> On Tue, 2014-01-14 at 18:55 +0000, Julien Grall wrote:
>> On 01/14/2014 04:55 PM, Ian Campbell wrote:
>>> These mappings are global and therefore need flushing on all processors. Add
>>> flush_all_xen_data_tlb_range_va which accomplishes this.
>>
>> Can we make the names consistent across every *tlb* function call? In
>> flushtlb.h we use *_local for maintenance on the current processor only.
>> If the suffix is not present, the maintenance is done on every
>> processor.
> 
> I was trying to avoid a massive renaming of the existing flush_xen_*. I
> suppose I should just go ahead and do it.

If it's too big for 4.4, the modification could happen post-release. It
would be nice to have a common convention at some point.
Ian Campbell Jan. 15, 2014, 2:05 p.m. UTC | #4
On Wed, 2014-01-15 at 13:50 +0000, Julien Grall wrote:
> On 01/15/2014 09:37 AM, Ian Campbell wrote:
> > On Tue, 2014-01-14 at 18:55 +0000, Julien Grall wrote:
> >> On 01/14/2014 04:55 PM, Ian Campbell wrote:
> >>> These mappings are global and therefore need flushing on all processors. Add
> >>> flush_all_xen_data_tlb_range_va which accomplishes this.
> >>
> >> Can we make the names consistent across every *tlb* function call? In
> >> flushtlb.h we use *_local for maintenance on the current processor only.
> >> If the suffix is not present, the maintenance is done on every
> >> processor.
> > 
> > I was trying to avoid a massive renaming of the existing flush_xen_*. I
> > suppose I should just go ahead and do it.
> 
> If it's too big for 4.4,

With my temporary-RM hat on I've struggled with this a few times this
week -- that is, larger, mostly mechanical textual changes which come
about because they are the correct/cleanest thing to do as part of a
smaller change which on its own would be a pretty clear candidate for
an exception. Chen's change "xen/arm{32, 64}: fix section shift when
mapping 2MB block in boot page table" is in a similar boat.

I'm not sure where the balance should really lie.

>  the modification could happen post-release. It
> would be nice to have a common convention at some point.

Yes, this is certainly the case.

Ian.
Julien Grall Jan. 15, 2014, 2:58 p.m. UTC | #5
On 01/15/2014 02:05 PM, Ian Campbell wrote:
> On Wed, 2014-01-15 at 13:50 +0000, Julien Grall wrote:
>> On 01/15/2014 09:37 AM, Ian Campbell wrote:
>>> On Tue, 2014-01-14 at 18:55 +0000, Julien Grall wrote:
>>>> On 01/14/2014 04:55 PM, Ian Campbell wrote:
>>>>> These mappings are global and therefore need flushing on all processors. Add
>>>>> flush_all_xen_data_tlb_range_va which accomplishes this.
>>>>
>>>> Can we make the names consistent across every *tlb* function call? In
>>>> flushtlb.h we use *_local for maintenance on the current processor only.
>>>> If the suffix is not present, the maintenance is done on every
>>>> processor.
>>>
>>> I was trying to avoid a massive renaming of the existing flush_xen_*. I
>>> suppose I should just go ahead and do it.
>>
>> If it's too big for 4.4,
> 
> With my temporary-RM hat on I've struggled with this a few times this
> week -- that is, larger, mostly mechanical textual changes which come
> about because they are the correct/cleanest thing to do as part of a
> smaller change which on its own would be a pretty clear candidate for
> an exception. Chen's change "xen/arm{32, 64}: fix section shift when
> mapping 2MB block in boot page table" is in a similar boat.
> 
> I'm not sure where the balance should really lie.

The "issue" I see is backporting patch from Xen 4.5 to Xen 4.4 will be
less trivial. We will have to think about the function name.
Ian Campbell Jan. 15, 2014, 3:02 p.m. UTC | #6
On Wed, 2014-01-15 at 14:58 +0000, Julien Grall wrote:
> On 01/15/2014 02:05 PM, Ian Campbell wrote:
> > On Wed, 2014-01-15 at 13:50 +0000, Julien Grall wrote:
> >> On 01/15/2014 09:37 AM, Ian Campbell wrote:
> >>> On Tue, 2014-01-14 at 18:55 +0000, Julien Grall wrote:
> >>>> On 01/14/2014 04:55 PM, Ian Campbell wrote:
> >>>>> These mappings are global and therefore need flushing on all processors. Add
> >>>>> flush_all_xen_data_tlb_range_va which accomplishes this.
> >>>>
> >>>> Can we make the names consistent across every *tlb* function call? In
> >>>> flushtlb.h we use *_local for maintenance on the current processor only.
> >>>> If the suffix is not present, the maintenance is done on every
> >>>> processor.
> >>>
> >>> I was trying to avoid a massive renaming of the existing flush_xen_*. I
> >>> suppose I should just go ahead and do it.
> >>
> >> If it's too big for 4.4,
> > 
> > With my temporary-RM hat on I've struggled with this a few times this
> > week -- that is, larger, mostly mechanical textual changes which come
> > about because they are the correct/cleanest thing to do as part of a
> > smaller change which on its own would be a pretty clear candidate for
> > an exception. Chen's change "xen/arm{32, 64}: fix section shift when
> > mapping 2MB block in boot page table" is in a similar boat.
> > 
> > I'm not sure where the balance should really lie.
> 
> The "issue" I see is backporting patch from Xen 4.5 to Xen 4.4 will be
> less trivial. We will have to think about the function name.

Yes, especially where the old function name continues to exist but with
different semantics (at least in this case it would be a wider, and
therefore safe, flush, but still).

That does seem to be an argument for doing the rename sooner rather
than later.

Ian.
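
For reference, a sketch of the flushtlb.h-style convention discussed
above (the renamed identifiers are illustrative only; this patch does
not perform the rename, so in-tree the bare name below still has the
old, local-only meaning):

    /* Convention sketch: a _local suffix restricts the maintenance to
     * the current processor; the bare name reaches every processor. */
    void flush_xen_data_tlb_range_va_local(unsigned long va,
                                           unsigned long size);
    void flush_xen_data_tlb_range_va(unsigned long va,
                                     unsigned long size);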

Patch

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 35af1ad..cddb174 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -234,7 +234,7 @@  void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes)
     pte.pt.ai = attributes;
     pte.pt.xn = 1;
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_all_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 /* Remove a mapping from a fixmap entry */
@@ -242,7 +242,7 @@  void clear_fixmap(unsigned map)
 {
     lpae_t pte = {0};
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_all_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 #ifdef CONFIG_DOMAIN_PAGE
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cf12a89..533b253 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -23,7 +23,9 @@  static inline void write_pte(lpae_t *p, lpae_t pte)
 #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
 /*
- * Flush all hypervisor mappings from the TLB and branch predictor.
+ * Flush all hypervisor mappings from the TLB and branch predictor of
+ * the local processor.
+ *
  * This is needed after changing Xen code mappings.
  *
  * The caller needs to issue the necessary DSB and D-cache flushes
@@ -43,8 +45,9 @@  static inline void flush_xen_text_tlb(void)
 }
 
 /*
- * Flush all hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush all hypervisor mappings from the data TLB of the local
+ * processor. This is not sufficient when changing code mappings or
+ * for self modifying code.
  */
 static inline void flush_xen_data_tlb(void)
 {
@@ -57,10 +60,12 @@  static inline void flush_xen_data_tlb(void)
 }
 
 /*
- * Flush a range of VA's hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush a range of VA's hypervisor mappings from the data TLB of the
+ * local processor. This is not sufficient when changing code mappings
+ * or for self modifying code.
  */
-static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
+static inline void flush_xen_data_tlb_range_va(unsigned long va,
+                                               unsigned long size)
 {
     unsigned long end = va + size;
     dsb(); /* Ensure preceding are visible */
@@ -73,6 +78,25 @@  static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long s
     isb();
 }
 
+/*
+ * Flush a range of VA's hypervisor mappings from the data TLB on all
+ * processors in the inner-shareable domain. This is not sufficient
+ * when changing code mappings or for self modifying code.
+ */
+static inline void flush_all_xen_data_tlb_range_va(unsigned long va,
+                                                   unsigned long size)
+{
+    unsigned long end = va + size;
+    dsb(); /* Ensure preceding are visible */
+    while ( va < end ) {
+        asm volatile(STORE_CP32(0, TLBIMVAHIS)
+                     : : "r" (va) : "memory");
+        va += PAGE_SIZE;
+    }
+    dsb(); /* Ensure completion of the TLB flush */
+    isb();
+}
+
 /* Ask the MMU to translate a VA for us */
 static inline uint64_t __va_to_par(vaddr_t va)
 {
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 9551f90..42023cc 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -18,7 +18,8 @@  static inline void write_pte(lpae_t *p, lpae_t pte)
 #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
 
 /*
- * Flush all hypervisor mappings from the TLB
+ * Flush all hypervisor mappings from the TLB of the local processor.
+ *
  * This is needed after changing Xen code mappings.
  *
  * The caller needs to issue the necessary DSB and D-cache flushes
@@ -36,8 +37,9 @@  static inline void flush_xen_text_tlb(void)
 }
 
 /*
- * Flush all hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush all hypervisor mappings from the data TLB of the local
+ * processor. This is not sufficient when changing code mappings or
+ * for self modifying code.
  */
 static inline void flush_xen_data_tlb(void)
 {
@@ -50,10 +52,12 @@  static inline void flush_xen_data_tlb(void)
 }
 
 /*
- * Flush a range of VA's hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush a range of VA's hypervisor mappings from the data TLB of the
+ * local processor. This is not sufficient when changing code mappings
+ * or for self modifying code.
  */
-static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
+static inline void flush_xen_data_tlb_range_va(unsigned long va,
+                                               unsigned long size)
 {
     unsigned long end = va + size;
     dsb(); /* Ensure preceding are visible */
@@ -66,6 +70,25 @@  static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long s
     isb();
 }
 
+/*
+ * Flush a range of VA's hypervisor mappings from the data TLB of all
+ * processors in the inner-shareable domain. This is not sufficient
+ * when changing code mappings or for self modifying code.
+ */
+static inline void flush_all_xen_data_tlb_range_va(unsigned long va,
+                                                   unsigned long size)
+{
+    unsigned long end = va + size;
+    dsb(); /* Ensure preceding are visible */
+    while ( va < end ) {
+        asm volatile("tlbi vae2is, %0;"
+                     : : "r" (va>>PAGE_SHIFT) : "memory");
+        va += PAGE_SIZE;
+    }
+    dsb(); /* Ensure completion of the TLB flush */
+    isb();
+}
+
 /* Ask the MMU to translate a VA for us */
 static inline uint64_t __va_to_par(vaddr_t va)
 {
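
For reference, a usage sketch of the pattern the new flush protects,
modeled on copying data through a temporary fixmap mapping (the
FIXMAP_MISC slot, the BUFFERABLE attribute index and copy_one_page()
are illustrative assumptions, not part of this patch):

    /* Map a machine frame at the fixed virtual address, copy from it,
     * then unmap.  With this patch, the inner-shareable flush in
     * clear_fixmap() ensures no other CPU retains a stale translation
     * for FIXMAP_ADDR(FIXMAP_MISC) when the slot is reused. */
    static void copy_one_page(void *dst, unsigned long mfn)
    {
        set_fixmap(FIXMAP_MISC, mfn, BUFFERABLE);
        memcpy(dst, (void *)FIXMAP_ADDR(FIXMAP_MISC), PAGE_SIZE);
        clear_fixmap(FIXMAP_MISC);
    }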