
[Xen-devel,v7,1/4] xen/arm: p2m: Clean cache PT when the IOMMU doesn't support coherent walk

Message ID 1400163385-19863-2-git-send-email-julien.grall@linaro.org
State Superseded, archived

Commit Message

Julien Grall May 15, 2014, 2:16 p.m. UTC
Some IOMMUs don't support coherent PT walks. When the p2m is shared with
the CPU, Xen has to make sure the PT changes have reached memory.

Introduce a new IOMMU function that checks whether an IOMMU feature is
enabled for a specified domain.

On ARM, the platform can contain multiple IOMMUs, and they may not all
support the same set of features. The domain parameter is used to get the
set of features for the IOMMUs used by this domain.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>

---
    Changes in v7:
        - Add IOMMU_FEAT_count
        - Use DECLARE_BITMAP

    Changes in v6:
        - Rework the condition to flush cache for PT
        - Use {set,clear,test}_bit
        - Store features in hvm_iommu structure and add accessor
        - Don't specify values in the enum

    Changes in v5:
        - Flush on every write_pte instead of on unmap page. This
        avoids flushing a whole page when only a few bytes are modified
        - Only get the IOMMU features once.
        - Add bits to flush the cache when a new table is created
        - Fix typos in commit message and comment
        - Use an enum to describe the features. Each item is a bit
        position

    Changes in v4:
        - Patch added
---
 xen/arch/arm/p2m.c              |   38 ++++++++++++++++++++++++++++++--------
 xen/drivers/passthrough/iommu.c |   10 ++++++++++
 xen/include/xen/hvm/iommu.h     |    6 ++++++
 xen/include/xen/iommu.h         |    9 +++++++++
 4 files changed, 55 insertions(+), 8 deletions(-)

Comments

Ian Campbell May 15, 2014, 3:20 p.m. UTC | #1
On Thu, 2014-05-15 at 15:16 +0100, Julien Grall wrote:
> Some IOMMU don't suppport coherent PT walk. When the p2m is shared with
> the CPU, Xen has to make sure the PT changes have reached the memory.
> 
> Introduce new IOMMU function that will check if the IOMMU feature is enabled
> for a specified domain.
> 
> On ARM, the platform can contain multiple IOMMUs. Each of them may not
> have the same set of feature. The domain parameter will be used to get the
> set of features for IOMMUs used by this domain.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com> for the arm bit.

> @@ -548,10 +568,12 @@ int p2m_alloc_table(struct domain *d)
>      /* Clear both first level pages */
>      p = __map_domain_page(page);
>      clear_page(p);
> +    clean_xen_dcache_va_range(p, PAGE_SIZE);
>      unmap_domain_page(p);
>  
>      p = __map_domain_page(page + 1);
>      clear_page(p);
> +    clean_xen_dcache_va_range(p, PAGE_SIZE);


I wonder if we need clear_and_clean_page()?

>  
> +enum iommu_feature
> +{
> +    IOMMU_FEAT_COHERENT_WALK,
> +    IOMMU_FEAT_count,

Lowercase?

Ian.
Julien Grall May 15, 2014, 3:24 p.m. UTC | #2
On 05/15/2014 04:20 PM, Ian Campbell wrote:
> On Thu, 2014-05-15 at 15:16 +0100, Julien Grall wrote:
>> Some IOMMU don't suppport coherent PT walk. When the p2m is shared with
>> the CPU, Xen has to make sure the PT changes have reached the memory.
>>
>> Introduce new IOMMU function that will check if the IOMMU feature is enabled
>> for a specified domain.
>>
>> On ARM, the platform can contain multiple IOMMUs. Each of them may not
>> have the same set of feature. The domain parameter will be used to get the
>> set of features for IOMMUs used by this domain.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com> for the arm bit.

Thanks!

>> @@ -548,10 +568,12 @@ int p2m_alloc_table(struct domain *d)
>>      /* Clear both first level pages */
>>      p = __map_domain_page(page);
>>      clear_page(p);
>> +    clean_xen_dcache_va_range(p, PAGE_SIZE);
>>      unmap_domain_page(p);
>>  
>>      p = __map_domain_page(page + 1);
>>      clear_page(p);
>> +    clean_xen_dcache_va_range(p, PAGE_SIZE);
> 
> 
> I wonder if we need clear_and_clean_page()?

I thought about it, but I was too lazy to create one. I will add a helper
if I need to resend this series.
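The helper discussed above could take a shape like the following. This is a hypothetical sketch, not part of the posted patch: the name clear_and_clean_page() is only Ian's suggestion, and fake_page, map_page(), unmap_page() and clean_dcache_va_range() are host-side stand-ins for Xen's struct page_info, __map_domain_page(), unmap_domain_page() and clean_xen_dcache_va_range(), so the pattern can be shown self-contained.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Stand-in for a domain page; in Xen this would come from the
 * domheap via __map_domain_page(page). */
static unsigned char fake_page[PAGE_SIZE];

static void *map_page(void) { return fake_page; }
static void unmap_page(void *p) { (void)p; }

/* On real ARM hardware this would clean the d-cache by VA over
 * [p, p+len) (clean_xen_dcache_va_range() in the patch); it is a
 * no-op in this host-side sketch. */
static void clean_dcache_va_range(const void *p, size_t len)
{
    (void)p;
    (void)len;
}

/* Hypothetical clear_and_clean_page(): zero a page and push the
 * zeros out to memory in one place, so callers like
 * p2m_alloc_table() don't repeat the clear+clean pair. */
static void clear_and_clean_page(void)
{
    void *p = map_page();

    memset(p, 0, PAGE_SIZE);
    clean_dcache_va_range(p, PAGE_SIZE);
    unmap_page(p);
}
```

With such a helper, the two clear_page()/clean_xen_dcache_va_range() pairs in p2m_alloc_table() would each collapse to a single call.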

>>  
>> +enum iommu_feature
>> +{
>> +    IOMMU_FEAT_COHERENT_WALK,
>> +    IOMMU_FEAT_count,
> 
> Lowercase?

What do you mean?

Regards,
Ian Campbell May 15, 2014, 3:51 p.m. UTC | #3
On Thu, 2014-05-15 at 16:24 +0100, Julien Grall wrote:
> On 05/15/2014 04:20 PM, Ian Campbell wrote:
> > On Thu, 2014-05-15 at 15:16 +0100, Julien Grall wrote:
> >> Some IOMMU don't suppport coherent PT walk. When the p2m is shared with
> >> the CPU, Xen has to make sure the PT changes have reached the memory.
> >>
> >> Introduce new IOMMU function that will check if the IOMMU feature is enabled
> >> for a specified domain.
> >>
> >> On ARM, the platform can contain multiple IOMMUs. Each of them may not
> >> have the same set of feature. The domain parameter will be used to get the
> >> set of features for IOMMUs used by this domain.
> >>
> >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com> for the arm bit.
> 
> Thanks!
> 
> >> @@ -548,10 +568,12 @@ int p2m_alloc_table(struct domain *d)
> >>      /* Clear both first level pages */
> >>      p = __map_domain_page(page);
> >>      clear_page(p);
> >> +    clean_xen_dcache_va_range(p, PAGE_SIZE);
> >>      unmap_domain_page(p);
> >>  
> >>      p = __map_domain_page(page + 1);
> >>      clear_page(p);
> >> +    clean_xen_dcache_va_range(p, PAGE_SIZE);
> > 
> > 
> > I wonder if we need clear_and_clean_page()?
> 
> I though about it but I was too lazy to create one. I will add a helper
> if I need to resend this series.
> 
> >>  
> >> +enum iommu_feature
> >> +{
> >> +    IOMMU_FEAT_COHERENT_WALK,
> >> +    IOMMU_FEAT_count,
> > 
> > Lowercase?
> 
> What do you mean?

Enums are normally all uppercase.

Ian.
Jan Beulich May 15, 2014, 4:04 p.m. UTC | #4
>>> On 15.05.14 at 16:16, <julien.grall@linaro.org> wrote:
> Some IOMMU don't suppport coherent PT walk. When the p2m is shared with
> the CPU, Xen has to make sure the PT changes have reached the memory.
> 
> Introduce new IOMMU function that will check if the IOMMU feature is enabled
> for a specified domain.
> 
> On ARM, the platform can contain multiple IOMMUs. Each of them may not
> have the same set of feature. The domain parameter will be used to get the
> set of features for IOMMUs used by this domain.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

IOMMU part:
Acked-by: Jan Beulich <jbeulich@suse.com>

with one final nit:

> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -67,6 +67,15 @@ int iommu_map_page(struct domain *d, unsigned long gfn, 
> unsigned long mfn,
>                     unsigned int flags);
>  int iommu_unmap_page(struct domain *d, unsigned long gfn);
>  
> +enum iommu_feature
> +{
> +    IOMMU_FEAT_COHERENT_WALK,
> +    IOMMU_FEAT_count,

This is supposed to be the terminal entry, no matter what future
additions may arise. Therefore it shouldn't have a (potentially
misleading) trailing comma on it.

Jan
Jan Beulich May 15, 2014, 4:05 p.m. UTC | #5
>>> On 15.05.14 at 17:20, <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-05-15 at 15:16 +0100, Julien Grall wrote:
>> +enum iommu_feature
>> +{
>> +    IOMMU_FEAT_COHERENT_WALK,
>> +    IOMMU_FEAT_count,
> 
> Lowercase?

I intentionally suggested it that way to make the sentinel stand out
from the "normal" entries.

Jan
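The sentinel pattern Jan describes can be shown standalone: IOMMU_FEAT_count is not a feature but the number of entries above it, which lets DECLARE_BITMAP size the features field automatically as new entries are added before the sentinel. This sketch uses simplified stand-ins for Xen's bitmap machinery (BITS_PER_LONG, DECLARE_BITMAP, set_bit, test_bit), not the actual Xen implementations, and follows Jan's nit of dropping the trailing comma on the terminal entry.

```c
#include <assert.h>
#include <limits.h>

/* The enum from the patch; the lowercase "count" sentinel stands
 * out from the "normal" uppercase entries. */
enum iommu_feature
{
    IOMMU_FEAT_COHERENT_WALK,
    IOMMU_FEAT_count   /* terminal entry: number of features above */
};

/* Simplified stand-ins for Xen's bitmap helpers. */
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)
#define DECLARE_BITMAP(name, bits) unsigned long name[BITS_TO_LONGS(bits)]

static void set_bit(int nr, unsigned long *addr)
{
    addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

static int test_bit(int nr, const unsigned long *addr)
{
    return (int)((addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1);
}

/* Feature bitmap sized by the sentinel, as in struct hvm_iommu;
 * adding a feature before IOMMU_FEAT_count grows it as needed. */
static DECLARE_BITMAP(features, IOMMU_FEAT_count);
```

An IOMMU driver would set a bit at initialisation (the patch's iommu_set_feature()) and iommu_has_feature() then reduces to a test_bit() on this bitmap.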
Julien Grall May 15, 2014, 5:06 p.m. UTC | #6
On 05/15/2014 05:04 PM, Jan Beulich wrote:
>>>> On 15.05.14 at 16:16, <julien.grall@linaro.org> wrote:
>> Some IOMMU don't suppport coherent PT walk. When the p2m is shared with
>> the CPU, Xen has to make sure the PT changes have reached the memory.
>>
>> Introduce new IOMMU function that will check if the IOMMU feature is enabled
>> for a specified domain.
>>
>> On ARM, the platform can contain multiple IOMMUs. Each of them may not
>> have the same set of feature. The domain parameter will be used to get the
>> set of features for IOMMUs used by this domain.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> IOMMU part:
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks. Ian, will you take care of removing the "," in this patch? That
will save me from resending the whole series for this small change.

> with one final nit:
> 
>> --- a/xen/include/xen/iommu.h
>> +++ b/xen/include/xen/iommu.h
>> @@ -67,6 +67,15 @@ int iommu_map_page(struct domain *d, unsigned long gfn, 
>> unsigned long mfn,
>>                     unsigned int flags);
>>  int iommu_unmap_page(struct domain *d, unsigned long gfn);
>>  
>> +enum iommu_feature
>> +{
>> +    IOMMU_FEAT_COHERENT_WALK,
>> +    IOMMU_FEAT_count,

Regards,
Ian Campbell May 16, 2014, 10 a.m. UTC | #7
On Thu, 2014-05-15 at 18:06 +0100, Julien Grall wrote:
> On 05/15/2014 05:04 PM, Jan Beulich wrote:
> >>>> On 15.05.14 at 16:16, <julien.grall@linaro.org> wrote:
> >> Some IOMMU don't suppport coherent PT walk. When the p2m is shared with
> >> the CPU, Xen has to make sure the PT changes have reached the memory.
> >>
> >> Introduce new IOMMU function that will check if the IOMMU feature is enabled
> >> for a specified domain.
> >>
> >> On ARM, the platform can contain multiple IOMMUs. Each of them may not
> >> have the same set of feature. The domain parameter will be used to get the
> >> set of features for IOMMUs used by this domain.
> >>
> >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > 
> > IOMMU part:
> > Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> Thanks. Ian, will you take care to remove the "," in this patch? It will
> avoid me to resend the whole series for this small change.

I can do, yes.

> 
> > with one final nit:
> > 
> >> --- a/xen/include/xen/iommu.h
> >> +++ b/xen/include/xen/iommu.h
> >> @@ -67,6 +67,15 @@ int iommu_map_page(struct domain *d, unsigned long gfn, 
> >> unsigned long mfn,
> >>                     unsigned int flags);
> >>  int iommu_unmap_page(struct domain *d, unsigned long gfn);
> >>  
> >> +enum iommu_feature
> >> +{
> >> +    IOMMU_FEAT_COHERENT_WALK,
> >> +    IOMMU_FEAT_count,
> 
> Regards,
>

Patch

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 816da21..dd145c1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -253,9 +253,15 @@  static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
     return e;
 }
 
+static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool_t flush_cache)
+{
+    write_pte(p, pte);
+    if ( flush_cache )
+        clean_xen_dcache(*p);
+}
+
 /* Allocate a new page table page and hook it in via the given entry */
-static int p2m_create_table(struct domain *d,
-                            lpae_t *entry)
+static int p2m_create_table(struct domain *d, lpae_t *entry, bool_t flush_cache)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     struct page_info *page;
@@ -272,11 +278,13 @@  static int p2m_create_table(struct domain *d,
 
     p = __map_domain_page(page);
     clear_page(p);
+    if ( flush_cache )
+        clean_xen_dcache_va_range(p, PAGE_SIZE);
     unmap_domain_page(p);
 
     pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
 
-    write_pte(entry, pte);
+    p2m_write_pte(entry, pte, flush_cache);
 
     return 0;
 }
@@ -308,6 +316,13 @@  static int apply_p2m_changes(struct domain *d,
     unsigned int flush = 0;
     bool_t populate = (op == INSERT || op == ALLOCATE);
     lpae_t pte;
+    bool_t flush_pt;
+
+    /* Some IOMMUs don't support coherent PT walk. When the p2m is
+     * shared with the CPU, Xen has to make sure that the PT changes have
+     * reached memory.
+     */
+    flush_pt = iommu_enabled && !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
     spin_lock(&p2m->lock);
 
@@ -334,7 +349,8 @@  static int apply_p2m_changes(struct domain *d,
                 continue;
             }
 
-            rc = p2m_create_table(d, &first[first_table_offset(addr)]);
+            rc = p2m_create_table(d, &first[first_table_offset(addr)],
+                                  flush_pt);
             if ( rc < 0 )
             {
                 printk("p2m_populate_ram: L1 failed\n");
@@ -360,7 +376,8 @@  static int apply_p2m_changes(struct domain *d,
                 continue;
             }
 
-            rc = p2m_create_table(d, &second[second_table_offset(addr)]);
+            rc = p2m_create_table(d, &second[second_table_offset(addr)],
+                                  flush_pt);
             if ( rc < 0 ) {
                 printk("p2m_populate_ram: L2 failed\n");
                 goto out;
@@ -411,13 +428,15 @@  static int apply_p2m_changes(struct domain *d,
 
                     pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
 
-                    write_pte(&third[third_table_offset(addr)], pte);
+                    p2m_write_pte(&third[third_table_offset(addr)],
+                                  pte, flush_pt);
                 }
                 break;
             case INSERT:
                 {
                     pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT, mattr, t);
-                    write_pte(&third[third_table_offset(addr)], pte);
+                    p2m_write_pte(&third[third_table_offset(addr)],
+                                  pte, flush_pt);
                     maddr += PAGE_SIZE;
                 }
                 break;
@@ -433,7 +452,8 @@  static int apply_p2m_changes(struct domain *d,
                     count += 0x10;
 
                     memset(&pte, 0x00, sizeof(pte));
-                    write_pte(&third[third_table_offset(addr)], pte);
+                    p2m_write_pte(&third[third_table_offset(addr)],
+                                  pte, flush_pt);
                     count++;
                 }
                 break;
@@ -548,10 +568,12 @@  int p2m_alloc_table(struct domain *d)
     /* Clear both first level pages */
     p = __map_domain_page(page);
     clear_page(p);
+    clean_xen_dcache_va_range(p, PAGE_SIZE);
     unmap_domain_page(p);
 
     p = __map_domain_page(page + 1);
     clear_page(p);
+    clean_xen_dcache_va_range(p, PAGE_SIZE);
     unmap_domain_page(p);
 
     p2m->first_level = page;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 59f1c3e..cc12735 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -344,6 +344,16 @@  void iommu_crash_shutdown(void)
     iommu_enabled = iommu_intremap = 0;
 }
 
+bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature)
+{
+    const struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    if ( !iommu_enabled )
+        return 0;
+
+    return test_bit(feature, hd->features);
+}
+
 static void iommu_dump_p2m_table(unsigned char key)
 {
     struct domain *d;
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index 1259e16..693346c 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -34,6 +34,12 @@  struct hvm_iommu {
     /* List of DT devices assigned to this domain */
     struct list_head dt_devices;
 #endif
+
+    /* Features supported by the IOMMU */
+    DECLARE_BITMAP(features, IOMMU_FEAT_count);
 };
 
+#define iommu_set_feature(d, f)   set_bit((f), domain_hvm_iommu(d)->features)
+#define iommu_clear_feature(d, f) clear_bit((f), domain_hvm_iommu(d)->features)
+
 #endif /* __XEN_HVM_IOMMU_H__ */
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index b7481dac..4c633bf 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -67,6 +67,15 @@  int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags);
 int iommu_unmap_page(struct domain *d, unsigned long gfn);
 
+enum iommu_feature
+{
+    IOMMU_FEAT_COHERENT_WALK,
+    IOMMU_FEAT_count,
+};
+
+bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature);
+
+
 #ifdef HAS_PCI
 void pt_pci_init(void);