[00/16] util/vfio-helpers: Allow using multiple MSIX IRQs

Message ID 20201020172428.2220726-1-philmd@redhat.com

Message

Philippe Mathieu-Daudé Oct. 20, 2020, 5:24 p.m. UTC
This series allows using multiple MSIX IRQs.
We currently share a single IRQ between the 2 NVMe queues
(ADMIN and I/O). This series still uses 1 shared IRQ,
but prepares for using multiple ones.

The series is organized as:
- Fix device minimum page size (prerequisite: patch 1)
- Check IOMMU minimum page size (patches 3, 4)
- Boring cleanups already reviewed (patches 2, 5-12)
- Introduce helpers to use multiple MSIX (patches 13, 14)
- Switch NVMe block driver to use the multiple MSIX API (15)
- Remove single MSIX helper (16).

Most patches are trivial, except 13 and 14 which are the
important VFIO ones.

Please review,

Phil.

Philippe Mathieu-Daudé (16):
  block/nvme: Correct minimum device page size
  util/vfio-helpers: Improve reporting unsupported IOMMU type
  util/vfio-helpers: Pass minimum page size to qemu_vfio_open_pci()
  util/vfio-helpers: Report error when IOMMU page size is not supported
  util/vfio-helpers: Trace PCI I/O config accesses
  util/vfio-helpers: Trace PCI BAR region info
  util/vfio-helpers: Trace where BARs are mapped
  util/vfio-helpers: Improve DMA trace events
  util/vfio-helpers: Convert vfio_dump_mapping to trace events
  util/vfio-helpers: Let qemu_vfio_dma_map() propagate Error
  util/vfio-helpers: Let qemu_vfio_do_mapping() propagate Error
  util/vfio-helpers: Let qemu_vfio_verify_mappings() use error_report()
  util/vfio-helpers: Introduce qemu_vfio_pci_msix_init_irqs()
  util/vfio-helpers: Introduce qemu_vfio_pci_msix_set_irq()
  block/nvme: Switch to using the MSIX API
  util/vfio-helpers: Remove now unused qemu_vfio_pci_init_irq()

 include/qemu/vfio-helpers.h |  15 ++-
 block/nvme.c                |  33 ++++---
 util/vfio-helpers.c         | 183 +++++++++++++++++++++++++++---------
 util/trace-events           |  13 ++-
 4 files changed, 182 insertions(+), 62 deletions(-)

-- 
2.26.2

Comments

Stefan Hajnoczi Oct. 22, 2020, 1:53 p.m. UTC | #1
On Tue, Oct 20, 2020 at 07:24:14PM +0200, Philippe Mathieu-Daudé wrote:
> Change the confusing "VFIO IOMMU check failed" error message to
> the explicit "VFIO IOMMU Type1 is not supported" one.
> 
> Example on POWER:
> 
>  $ qemu-system-ppc64 -drive if=none,id=nvme0,file=nvme://0001:01:00.0/1,format=raw
>  qemu-system-ppc64: -drive if=none,id=nvme0,file=nvme://0001:01:00.0/1,format=raw: VFIO IOMMU Type1 is not supported
> 
> Suggested-by: Alex Williamson <alex.williamson@redhat.com>
> Reviewed-by: Fam Zheng <fam@euphon.net>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  util/vfio-helpers.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Stefan Hajnoczi Oct. 22, 2020, 2 p.m. UTC | #2
On Tue, Oct 20, 2020 at 07:24:15PM +0200, Philippe Mathieu-Daudé wrote:
> @@ -724,7 +725,7 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
>          goto out;
>      }
>  
> -    s->page_size = MAX(4096, 1u << (12 + NVME_CAP_MPSMIN(cap)));
> +    s->page_size = MAX(min_page_size, 1u << (12 + NVME_CAP_MPSMIN(cap)));

Is there a guarantee that the NVMe drive supports our min_page_size?

Stefan
Stefan Hajnoczi Oct. 22, 2020, 2:13 p.m. UTC | #3
On Tue, Oct 20, 2020 at 07:24:18PM +0200, Philippe Mathieu-Daudé wrote:
> For debug purpose, trace BAR regions info.
> 
> Reviewed-by: Fam Zheng <fam@euphon.net>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  util/vfio-helpers.c | 8 ++++++++
>  util/trace-events   | 1 +
>  2 files changed, 9 insertions(+)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Stefan Hajnoczi Oct. 22, 2020, 2:25 p.m. UTC | #4
On Tue, Oct 20, 2020 at 07:24:24PM +0200, Philippe Mathieu-Daudé wrote:
> Instead of displaying the error on stderr, use error_report(),
> which also reports to the monitor.
> 
> Reviewed-by: Fam Zheng <fam@euphon.net>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  util/vfio-helpers.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Stefan Hajnoczi Oct. 22, 2020, 2:49 p.m. UTC | #5
On Tue, Oct 20, 2020 at 07:24:27PM +0200, Philippe Mathieu-Daudé wrote:
> In preparation of using multiple IRQs, switch to using the recently
> introduced MSIX API. Instead of allocating and assigning IRQ in
> a single step, we now have to use two distinct calls.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  block/nvme.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Philippe Mathieu-Daudé Oct. 24, 2020, 7:52 p.m. UTC | #6
On 10/22/20 4:00 PM, Stefan Hajnoczi wrote:
> On Tue, Oct 20, 2020 at 07:24:15PM +0200, Philippe Mathieu-Daudé wrote:
>> @@ -724,7 +725,7 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
>>          goto out;
>>      }
>>
>> -    s->page_size = MAX(4096, 1u << (12 + NVME_CAP_MPSMIN(cap)));
>> +    s->page_size = MAX(min_page_size, 1u << (12 + NVME_CAP_MPSMIN(cap)));
>
> Is there a guarantee that the NVMe drive supports our min_page_size?

No, good point!

> Stefan