
[Xen-devel,v2] xen/arm: p2m: Prevent deadlock when using memaccess

Message ID 20180312153452.24314-1-julien.grall@arm.com
State New

Commit Message

Julien Grall March 12, 2018, 3:34 p.m. UTC
From: Julien Grall <julien.grall@arm.com>

Commit 7d623b358a4 "arm/mem_access: Add long-descriptor based gpt"
assumed that the read-write lock could be taken recursively. However,
this assumption is wrong and will lead to a deadlock when the lock is
contended.

The read lock is taken recursively in the following case:
    1) get_page_from_gva
        => Take the read lock (first read lock)
        => Call p2m_mem_access_check_and_get_page on failure when
        memaccess is enabled
    2) p2m_mem_access_check_and_get_page
        => If the hardware translation fails, fall back to a software lookup
        => Call guest_walk_tables
    3) guest_walk_tables
        => Will use access_guest_memory_ipa to access stage-1 page-table
    4) access_guest_memory_ipa
        => Because Arm does not have a hardware instruction to walk
        only the stage-2 page-tables, this is done in software.
        => Take the read lock (second read lock; see the sketch below)
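
The deadlock can be reproduced in user space. The sketch below (not Xen
code) uses a glibc-specific rwlock configured for writer preference,
which is assumed here to approximate the fairness of Xen's rwlock: once
a writer queues between the two read acquisitions, the second read lock
blocks behind the writer, and the writer blocks behind the first read
lock.

    /* Build with: gcc -pthread deadlock.c (hypothetical file name). */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_rwlock_t lock;

    static void *writer(void *arg)
    {
        (void)arg;
        sleep(1);                     /* let the reader lock first */
        pthread_rwlock_wrlock(&lock); /* queues behind the first read lock */
        pthread_rwlock_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_rwlockattr_t attr;
        pthread_t tid;

        pthread_rwlockattr_init(&attr);
        /* Writer preference: recursive read locking is no longer safe. */
        pthread_rwlockattr_setkind_np(&attr,
                PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
        pthread_rwlock_init(&lock, &attr);
        pthread_create(&tid, NULL, writer, NULL);

        pthread_rwlock_rdlock(&lock); /* step 1: first read lock */
        sleep(2);                     /* the writer is now queued */
        pthread_rwlock_rdlock(&lock); /* step 4: blocks forever */

        puts("never reached");
        return 0;
    }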

To avoid the nested lock, rework the locking in get_page_from_gva and
p2m_mem_access_check_and_get_page. The latter will now be called without
the p2m lock held. The new locking in p2m_mem_access_check_and_get_page
will not cover the translation of the VA to an IPA.

This is fine because we can't promise that the stage-1 page-tables have
not changed behind our back (they are under guest control).
Modifications of the stage-2 page-tables can now happen concurrently,
but I can't see any potential issue here except with the
break-before-make sequence used when updating the page-tables:
gva_to_ipa may fail if the sequence is executed at the same time on
another CPU. In that case we would fall back to the software lookup
path.
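
For reference, here is a self-contained sketch (an assumed model, not
the Xen implementation) of the break-before-make window mentioned
above: the stage-2 entry is transiently invalid between the "break" and
the "make", so a hardware walk such as gva_to_ipa racing with it on
another CPU sees a translation fault and fails.

    #include <stdint.h>
    #include <stdbool.h>

    typedef uint64_t pte_t;
    #define PTE_VALID 0x1ULL

    /* Stand-in for the stage-2 TLB invalidation done between the
     * "break" and the "make". */
    static void flush_tlb(void) { }

    /* Replace a stage-2 entry using break-before-make. */
    static void bbm_replace(volatile pte_t *entry, pte_t new_pte)
    {
        *entry = 0;       /* "break": entry invalid from here...        */
        flush_tlb();      /* ...until the new entry is written, so a    */
        *entry = new_pte; /* concurrent gva_to_ipa on another CPU fails */
    }

    /* What a concurrent hardware walk observes inside the window. */
    static bool walk_succeeds(const volatile pte_t *entry)
    {
        return (*entry & PTE_VALID) != 0;
    }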

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    This patch should be backported to Xen 4.10. There are other
    potential optimizations that I am working on, although I don't
    think they are backport material.

    Changes in v2:
        - Update the commit message to explain where the lock is taken
        recursively.
---
 xen/arch/arm/mem_access.c | 8 ++++++--
 xen/arch/arm/p2m.c        | 4 ++--
 xen/include/asm-arm/p2m.h | 4 ----
 3 files changed, 8 insertions(+), 8 deletions(-)

Comments

Sergej Proskurin March 14, 2018, 9:21 a.m. UTC | #1
Hi Julien,


On 03/12/2018 04:34 PM, julien.grall@arm.com wrote:
> From: Julien Grall <julien.grall@arm.com>
>
> Commit 7d623b358a4 "arm/mem_access: Add long-descriptor based gpt"
> assumed that the read-write lock could be taken recursively. However,
> this assumption is wrong and will lead to a deadlock when the lock is
> contended.
>
> The read lock is taken recursively in the following case:
>     1) get_page_from_gva
>         => Take the read lock (first read lock)
>         => Call p2m_mem_access_check_and_get_page on failure when
>         memaccess is enabled
>     2) p2m_mem_access_check_and_get_page
>         => If the hardware translation fails, fall back to a software lookup
>         => Call guest_walk_tables
>     3) guest_walk_tables
>         => Will use access_guest_memory_ipa to access stage-1 page-table
>     4) access_guest_memory_ipa
>         => Because Arm does not have a hardware instruction to walk
>         only the stage-2 page-tables, this is done in software.
>         => Take the read lock (second read lock)
>
> To avoid the nested lock, rework the locking in get_page_from_gva and
> p2m_mem_access_check_and_get_page. The latter will now be called without
> the p2m lock held. The new locking in p2m_mem_access_check_and_get_page
> will not cover the translation of the VA to an IPA.
>
> This is fine because we can't promise that the stage-1 page-tables have
> not changed behind our back (they are under guest control).
> Modifications of the stage-2 page-tables can now happen concurrently,
> but I can't see any potential issue here except with the
> break-before-make sequence used when updating the page-tables:
> gva_to_ipa may fail if the sequence is executed at the same time on
> another CPU. In that case we would fall back to the software lookup
> path.
>
> Signed-off-by: Julien Grall <julien.grall@arm.com>


Thanks for fixing the nested locking issue :)

Reviewed-by: Sergej Proskurin <proskurin@sec.in.tum.de>

>
> ---
>     This patch should be backported to Xen 4.10. There are other
>     potential optimizations that I am working on, although I don't
>     think they are backport material.
>
>     Changes in v2:
>         - Update the commit message to explain where the lock is taken
>         recursively.
> ---
>  xen/arch/arm/mem_access.c | 8 ++++++--
>  xen/arch/arm/p2m.c        | 4 ++--
>  xen/include/asm-arm/p2m.h | 4 ----
>  3 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
> index 0f2cbb81d3..11c2b03b7b 100644
> --- a/xen/arch/arm/mem_access.c
> +++ b/xen/arch/arm/mem_access.c
> @@ -126,7 +126,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
>           * is not mapped.
>           */
>          if ( guest_walk_tables(v, gva, &ipa, &perms) < 0 )
> -            goto err;
> +            return NULL;
>  
>          /*
>           * Check permissions that are assumed by the caller. For instance in
> @@ -139,11 +139,13 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
>           * test for execute permissions this check can be left out.
>           */
>          if ( (flag & GV2M_WRITE) && !(perms & GV2M_WRITE) )
> -            goto err;
> +            return NULL;
>      }
>  
>      gfn = gaddr_to_gfn(ipa);
>  
> +    p2m_read_lock(p2m);
> +
>      /*
>       * We do this first as this is faster in the default case when no
>       * permission is set on the page.
> @@ -216,6 +218,8 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
>          page = NULL;
>  
>  err:
> +    p2m_read_unlock(p2m);
> +
>      return page;
>  }
>  
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 65e8b9c6ea..5de82aafe1 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1449,11 +1449,11 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>      }
>  
>  err:
> +    p2m_read_unlock(p2m);
> +
>      if ( !page && p2m->mem_access_enabled )
>          page = p2m_mem_access_check_and_get_page(va, flags, v);
>  
> -    p2m_read_unlock(p2m);
> -
>      return page;
>  }
>  
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index a0abc84ed8..45ef2cd58b 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -23,10 +23,6 @@ extern void memory_type_changed(struct domain *);
>  struct p2m_domain {
>      /*
>       * Lock that protects updates to the p2m.
> -     *
> -     * Please note that we use this lock in a nested way by calling
> -     * access_guest_memory_by_ipa in guest_walk_(sd|ld). This must be
> -     * considered in the future implementation.
>       */
>      rwlock_t lock;
>
Stefano Stabellini March 16, 2018, 9:01 p.m. UTC | #2
On Mon, 12 Mar 2018, julien.grall@arm.com wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Commit 7d623b358a4 "arm/mem_access: Add long-descriptor based gpt"
> assumed that the read-write lock could be taken recursively. However,
> this assumption is wrong and will lead to a deadlock when the lock is
> contended.
> 
> The read lock is taken recursively in the following case:
>     1) get_page_from_gva
>         => Take the read lock (first read lock)
>         => Call p2m_mem_access_check_and_get_page on failure when
>         memaccess is enabled
>     2) p2m_mem_access_check_and_get_page
>         => If the hardware translation fails, fall back to a software lookup
>         => Call guest_walk_tables
>     3) guest_walk_tables
>         => Will use access_guest_memory_ipa to access stage-1 page-table

                      ^ access_guest_memory_by_ipa


Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

I fixed it on commit.


>     4) access_guest_memory_ipa
>         => Because Arm does not have a hardware instruction to walk
>         only the stage-2 page-tables, this is done in software.
>         => Take the read lock (second read lock)
> 
> To avoid the nested lock, rework the locking in get_page_from_gva and
> p2m_mem_access_check_and_get_page. The latter will now be called without
> the p2m lock held. The new locking in p2m_mem_access_check_and_get_page
> will not cover the translation of the VA to an IPA.
> 
> This is fine because we can't promise that the stage-1 page-tables have
> not changed behind our back (they are under guest control).
> Modifications of the stage-2 page-tables can now happen concurrently,
> but I can't see any potential issue here except with the
> break-before-make sequence used when updating the page-tables:
> gva_to_ipa may fail if the sequence is executed at the same time on
> another CPU. In that case we would fall back to the software lookup
> path.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> ---
>     This patch should be backported to Xen 4.10. There are other
>     potential optimizations that I am working on, although I don't
>     think they are backport material.
> 
>     Changes in v2:
>         - Update the commit message to explain where the lock is taken
>         recursively.
> ---
>  xen/arch/arm/mem_access.c | 8 ++++++--
>  xen/arch/arm/p2m.c        | 4 ++--
>  xen/include/asm-arm/p2m.h | 4 ----
>  3 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
> index 0f2cbb81d3..11c2b03b7b 100644
> --- a/xen/arch/arm/mem_access.c
> +++ b/xen/arch/arm/mem_access.c
> @@ -126,7 +126,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
>           * is not mapped.
>           */
>          if ( guest_walk_tables(v, gva, &ipa, &perms) < 0 )
> -            goto err;
> +            return NULL;
>  
>          /*
>           * Check permissions that are assumed by the caller. For instance in
> @@ -139,11 +139,13 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
>           * test for execute permissions this check can be left out.
>           */
>          if ( (flag & GV2M_WRITE) && !(perms & GV2M_WRITE) )
> -            goto err;
> +            return NULL;
>      }
>  
>      gfn = gaddr_to_gfn(ipa);
>  
> +    p2m_read_lock(p2m);
> +
>      /*
>       * We do this first as this is faster in the default case when no
>       * permission is set on the page.
> @@ -216,6 +218,8 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
>          page = NULL;
>  
>  err:
> +    p2m_read_unlock(p2m);
> +
>      return page;
>  }
>  
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 65e8b9c6ea..5de82aafe1 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1449,11 +1449,11 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>      }
>  
>  err:
> +    p2m_read_unlock(p2m);
> +
>      if ( !page && p2m->mem_access_enabled )
>          page = p2m_mem_access_check_and_get_page(va, flags, v);
>  
> -    p2m_read_unlock(p2m);
> -
>      return page;
>  }
>  
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index a0abc84ed8..45ef2cd58b 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -23,10 +23,6 @@ extern void memory_type_changed(struct domain *);
>  struct p2m_domain {
>      /*
>       * Lock that protects updates to the p2m.
> -     *
> -     * Please note that we use this lock in a nested way by calling
> -     * access_guest_memory_by_ipa in guest_walk_(sd|ld). This must be
> -     * considered in the future implementation.
>       */
>      rwlock_t lock;
>  
> -- 
> 2.11.0
>

Patch

diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
index 0f2cbb81d3..11c2b03b7b 100644
--- a/xen/arch/arm/mem_access.c
+++ b/xen/arch/arm/mem_access.c
@@ -126,7 +126,7 @@  p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
          * is not mapped.
          */
         if ( guest_walk_tables(v, gva, &ipa, &perms) < 0 )
-            goto err;
+            return NULL;
 
         /*
          * Check permissions that are assumed by the caller. For instance in
@@ -139,11 +139,13 @@  p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
          * test for execute permissions this check can be left out.
          */
         if ( (flag & GV2M_WRITE) && !(perms & GV2M_WRITE) )
-            goto err;
+            return NULL;
     }
 
     gfn = gaddr_to_gfn(ipa);
 
+    p2m_read_lock(p2m);
+
     /*
      * We do this first as this is faster in the default case when no
      * permission is set on the page.
@@ -216,6 +218,8 @@  p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
         page = NULL;
 
 err:
+    p2m_read_unlock(p2m);
+
     return page;
 }
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 65e8b9c6ea..5de82aafe1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1449,11 +1449,11 @@  struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
     }
 
 err:
+    p2m_read_unlock(p2m);
+
     if ( !page && p2m->mem_access_enabled )
         page = p2m_mem_access_check_and_get_page(va, flags, v);
 
-    p2m_read_unlock(p2m);
-
     return page;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index a0abc84ed8..45ef2cd58b 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -23,10 +23,6 @@  extern void memory_type_changed(struct domain *);
 struct p2m_domain {
     /*
      * Lock that protects updates to the p2m.
-     *
-     * Please note that we use this lock in a nested way by calling
-     * access_guest_memory_by_ipa in guest_walk_(sd|ld). This must be
-     * considered in the future implementation.
      */
     rwlock_t lock;