
[23/25] block/nvme: Change size and alignment of prp_list_pages

Message ID 20201027135547.374946-24-philmd@redhat.com
State New
Series block/nvme: Fix Aarch64 host

Commit Message

Philippe Mathieu-Daudé Oct. 27, 2020, 1:55 p.m. UTC
From: Eric Auger <eric.auger@redhat.com>

In preparation for 64kB host page support, let's change the size
and alignment of the prp_list_pages so that the VFIO DMA MAP succeeds
with a 64kB host page size. We now align on the host page size.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 block/nvme.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)
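
For readers checking the arithmetic, below is a minimal standalone sketch of
the sizing change this patch makes. It reuses the rounding behavior of QEMU's
QEMU_ALIGN_UP macro and assumes NVME_QUEUE_SIZE is 128 (so NVME_NUM_REQS is
127), as defined in block/nvme.c at the time; the ALIGN_* macro names and the
main() driver are illustrative only, not part of the patch.

#include <stdio.h>
#include <stddef.h>

/* Same rounding as QEMU's QEMU_ALIGN_DOWN/QEMU_ALIGN_UP macros. */
#define ALIGN_DOWN(n, m) ((n) / (m) * (m))
#define ALIGN_UP(n, m)   ALIGN_DOWN((n) + (m) - 1, (m))

#define NVME_QUEUE_SIZE 128                    /* assumed, as in block/nvme.c */
#define NVME_NUM_REQS   (NVME_QUEUE_SIZE - 1)  /* one queue slot is left unused */

int main(void)
{
    size_t page_size = 4096;   /* device page size (s->page_size) */
    size_t host_page = 65536;  /* 64kB host page (qemu_real_host_page_size) */

    size_t old_bytes = page_size * NVME_NUM_REQS;      /* 520192 bytes */
    size_t new_bytes = ALIGN_UP(old_bytes, host_page); /* 524288 bytes */

    /* 520192 is not a multiple of 65536, so a VFIO DMA mapping of the old
     * size would fail on a 64kB-page host; the rounded-up size succeeds. */
    printf("old: %zu (remainder mod host page = %zu)\n",
           old_bytes, old_bytes % host_page);
    printf("new: %zu (remainder mod host page = %zu)\n",
           new_bytes, new_bytes % host_page);
    return 0;
}

With a 4kB device page size the buffer is 127 * 4096 = 520192 bytes, which is
not a multiple of the 64kB host page; after this patch, qemu_try_memalign()
and qemu_vfio_dma_map() are given the rounded-up 524288-byte size instead.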

Comments

Stefan Hajnoczi Oct. 28, 2020, 3:38 p.m. UTC | #1
On Tue, Oct 27, 2020 at 02:55:45PM +0100, Philippe Mathieu-Daudé wrote:
> From: Eric Auger <eric.auger@redhat.com>
>
> In preparation for 64kB host page support, let's change the size
> and alignment of the prp_list_pages so that the VFIO DMA MAP succeeds
> with a 64kB host page size. We now align on the host page size.
>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  block/nvme.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

Patch

diff --git a/block/nvme.c b/block/nvme.c
index 941f96b4c92..e3626045565 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -214,6 +214,7 @@ static NVMeQueuePair *nvme_create_queue_pair(BDRVNVMeState *s,
     int i, r;
     NVMeQueuePair *q;
     uint64_t prp_list_iova;
+    size_t bytes;
 
     q = g_try_new0(NVMeQueuePair, 1);
     if (!q) {
@@ -221,19 +222,19 @@ static NVMeQueuePair *nvme_create_queue_pair(BDRVNVMeState *s,
     }
     trace_nvme_create_queue_pair(idx, q, size, aio_context,
                                  event_notifier_get_fd(s->irq_notifier));
-    q->prp_list_pages = qemu_try_memalign(s->page_size,
-                                          s->page_size * NVME_NUM_REQS);
+    bytes = QEMU_ALIGN_UP(s->page_size * NVME_NUM_REQS,
+                          qemu_real_host_page_size);
+    q->prp_list_pages = qemu_try_memalign(qemu_real_host_page_size, bytes);
     if (!q->prp_list_pages) {
         goto fail;
     }
-    memset(q->prp_list_pages, 0, s->page_size * NVME_NUM_REQS);
+    memset(q->prp_list_pages, 0, bytes);
     qemu_mutex_init(&q->lock);
     q->s = s;
     q->index = idx;
     qemu_co_queue_init(&q->free_req_queue);
     q->completion_bh = aio_bh_new(aio_context, nvme_process_completion_bh, q);
-    r = qemu_vfio_dma_map(s->vfio, q->prp_list_pages,
-                          s->page_size * NVME_NUM_REQS,
+    r = qemu_vfio_dma_map(s->vfio, q->prp_list_pages, bytes,
                           false, &prp_list_iova);
     if (r) {
         goto fail;