
[03/10] Reduce the time of checkpoint for COLO

Message ID 20201014072555.12515-4-chen.zhang@intel.com
State New
Series COLO project queued patches 20-Oct

Commit Message

Zhang, Chen Oct. 14, 2020, 7:25 a.m. UTC
From: "Rao, Lei" <lei.rao@intel.com>

We should set ram_bulk_stage to false after ram_state_init;
otherwise the bitmap will be unused in migration_bitmap_find_dirty
and all pages in the RAM cache will be flushed to the RAM of the
secondary guest on every checkpoint.

Signed-off-by: leirao <lei.rao@intel.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Zhang Chen <chen.zhang@intel.com>
---
 migration/ram.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

Comments

Zhang, Chen Oct. 14, 2020, 12:16 p.m. UTC | #1
> -----Original Message-----
> From: Zhang Chen <chen.zhang@intel.com>
> Sent: Wednesday, October 14, 2020 3:26 PM
> To: Jason Wang <jasowang@redhat.com>; qemu-dev <qemu-
> devel@nongnu.org>
> Cc: Zhang Chen <zhangckid@gmail.com>; Rao, Lei <lei.rao@intel.com>;
> Zhang, Chen <chen.zhang@intel.com>
> Subject: [PATCH 03/10] Reduce the time of checkpoint for COLO
> 
> From: "Rao, Lei" <lei.rao@intel.com>
> 
> We should set ram_bulk_stage to false after ram_state_init; otherwise the
> bitmap will be unused in migration_bitmap_find_dirty and all pages in the
> RAM cache will be flushed to the RAM of the secondary guest on every
> checkpoint.
> 
> Signed-off-by: leirao <lei.rao@intel.com>
> Signed-off-by: Zhang Chen <chen.zhang@intel.com>
> Reviewed-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> Reviewed-by: Zhang Chen <chen.zhang@intel.com>

Sorry, I forgot to add:
Signed-off-by: Derek Su <dereksu@qnap.com>

Thanks
Zhang Chen

> ---
>  migration/ram.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 433489d633..9cfac3d9ba 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -3009,6 +3009,18 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
>      qemu_mutex_unlock(&decomp_done_lock);
>  }
> 
> +/*
> + * We must set ram_bulk_stage to false; otherwise in
> + * migration_bitmap_find_dirty the bitmap will be unused and
> + * all the pages in the ram cache will be flushed to the ram of
> + * the secondary VM.
> + */
> +static void colo_init_ram_state(void)
> +{
> +    ram_state_init(&ram_state);
> +    ram_state->ram_bulk_stage = false;
> +}
> +
>  /*
>   * colo cache: this is for secondary VM, we cache the whole
>   * memory of the secondary VM, it is need to hold the global lock
> @@ -3052,7 +3064,7 @@ int colo_init_ram_cache(void)
>          }
>      }
> 
> -    ram_state_init(&ram_state);
> +    colo_init_ram_state();
>      return 0;
>  }
> 
> --
> 2.17.1

Patch

diff --git a/migration/ram.c b/migration/ram.c
index 433489d633..9cfac3d9ba 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3009,6 +3009,18 @@  static void decompress_data_with_multi_threads(QEMUFile *f,
     qemu_mutex_unlock(&decomp_done_lock);
 }
 
+/*
+ * We must set ram_bulk_stage to false; otherwise in
+ * migration_bitmap_find_dirty the bitmap will be unused and
+ * all the pages in the ram cache will be flushed to the ram of
+ * the secondary VM.
+ */
+static void colo_init_ram_state(void)
+{
+    ram_state_init(&ram_state);
+    ram_state->ram_bulk_stage = false;
+}
+
 /*
  * colo cache: this is for secondary VM, we cache the whole
  * memory of the secondary VM, it is need to hold the global lock
@@ -3052,7 +3064,7 @@  int colo_init_ram_cache(void)
         }
     }
 
-    ram_state_init(&ram_state);
+    colo_init_ram_state();
     return 0;
 }