
[1/2] block/export: add vhost-user-blk multi-queue support

Message ID 20201001144604.559733-2-stefanha@redhat.com
State New
Series block/export: add vhost-user-blk multi-queue support

Commit Message

Stefan Hajnoczi Oct. 1, 2020, 2:46 p.m. UTC
Allow the number of queues to be configured using --export
vhost-user-blk,num-queues=N. This setting should match the QEMU --device
vhost-user-blk-pci,num-queues=N setting, but QEMU's vhost-user-blk.c lowers
its own value if the vhost-user-blk backend offers fewer queues than
QEMU requested.
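
For example, the two sides could be wired up like this (a hypothetical
sketch, not from this patch; the image, node name, and socket path are
placeholders, and the shared-memory setup vhost-user requires is elided):

  # Export a disk image with 4 request virtqueues
  $ qemu-storage-daemon \
        --blockdev file,filename=disk.img,node-name=disk0 \
        --export vhost-user-blk,id=exp0,node-name=disk0,addr.type=unix,addr.path=/tmp/vhost-user-blk.sock,num-queues=4

  # Attach it in QEMU with a matching queue count
  $ qemu-system-x86_64 ... \
        -chardev socket,id=char0,path=/tmp/vhost-user-blk.sock \
        -device vhost-user-blk-pci,chardev=char0,num-queues=4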

The vhost-user-blk-server.c code is already capable of multi-queue. All
virtqueue processing runs in the same AioContext. No new locking is
needed.

Add the num-queues=N option and set the VIRTIO_BLK_F_MQ feature bit.
Note that the feature bit only announces the presence of the num_queues
configuration space field. It does not promise that there is more than 1
virtqueue, so we can set it unconditionally.
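
To illustrate the semantics (a hypothetical guest-driver sketch, not code
from this patch): the feature bit gates access to the field, but the value
read may still be 1.

  #include <stdint.h>

  #define VIRTIO_BLK_F_MQ 12  /* feature bit number from the virtio spec */

  /* Derive the usable queue count from negotiated features and config. */
  static uint16_t blk_queue_count(uint64_t negotiated_features,
                                  uint16_t cfg_num_queues)
  {
      if (!(negotiated_features & (1ull << VIRTIO_BLK_F_MQ))) {
          return 1;  /* num_queues field is absent without VIRTIO_BLK_F_MQ */
      }
      return cfg_num_queues ? cfg_num_queues : 1;  /* may still be 1 */
  }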

I tested multi-queue by running a random read fio test with numjobs=4 on
an -smp 4 guest. After the benchmark finished, the guest /proc/interrupts
file showed activity on all 4 virtio-blk MSI-X interrupts, and the
/sys/block/vda/mq/ directory showed that Linux blk-mq had 4 queues
configured.
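
A test along these lines can be reproduced as follows (a sketch; the device
name and fio parameters are illustrative):

  # Inside the guest: 4 concurrent random read jobs
  $ fio --name=randread --filename=/dev/vda --rw=randread --direct=1 \
        --ioengine=libaio --numjobs=4 --runtime=30 --time_based

  # Afterwards, check interrupt activity and the blk-mq queue count
  $ grep virtio /proc/interrupts
  $ ls /sys/block/vda/mq/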

An automated test is included in the next commit.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 qapi/block-export.json               |  6 +++++-
 block/export/vhost-user-blk-server.c | 24 ++++++++++++++++++------
 2 files changed, 23 insertions(+), 7 deletions(-)

Comments

Markus Armbruster Oct. 2, 2020, 5:32 a.m. UTC | #1
Stefan Hajnoczi <stefanha@redhat.com> writes:

> Allow the number of queues to be configured using --export
> vhost-user-blk,num-queues=N. This setting should match the QEMU --device
> vhost-user-blk-pci,num-queues=N setting, but QEMU's vhost-user-blk.c lowers
> its own value if the vhost-user-blk backend offers fewer queues than
> QEMU requested.
>
> The vhost-user-blk-server.c code is already capable of multi-queue. All
> virtqueue processing runs in the same AioContext. No new locking is
> needed.
>
> Add the num-queues=N option and set the VIRTIO_BLK_F_MQ feature bit.
> Note that the feature bit only announces the presence of the num_queues
> configuration space field. It does not promise that there is more than 1
> virtqueue, so we can set it unconditionally.
>
> I tested multi-queue by running a random read fio test with numjobs=4 on
> an -smp 4 guest. After the benchmark finished, the guest /proc/interrupts
> file showed activity on all 4 virtio-blk MSI-X interrupts, and the
> /sys/block/vda/mq/ directory showed that Linux blk-mq had 4 queues
> configured.
>
> An automated test is included in the next commit.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  qapi/block-export.json               |  6 +++++-
>  block/export/vhost-user-blk-server.c | 24 ++++++++++++++++++------
>  2 files changed, 23 insertions(+), 7 deletions(-)
>
> diff --git a/qapi/block-export.json b/qapi/block-export.json
> index a793e34af9..17020de257 100644
> --- a/qapi/block-export.json
> +++ b/qapi/block-export.json
> @@ -93,11 +93,15 @@
>  #        SocketAddress types are supported. Passed fds must be UNIX domain
>  #        sockets.
>  # @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
> +# @num-queues: Number of request virtqueues. Must be greater than 0. Defaults
> +#              to 1.
>  #
>  # Since: 5.2
>  ##
>  { 'struct': 'BlockExportOptionsVhostUserBlk',
> -  'data': { 'addr': 'SocketAddress', '*logical-block-size': 'size' } }
> +  'data': { 'addr': 'SocketAddress',
> +	    '*logical-block-size': 'size',

Tab damage.

> +            '*num-queues': 'uint16'} }

Out of curiosity: what made you pick 16 bit unsigned?  net.json uses both
32 and 64 bit signed.  Odd...

>  
>  ##
>  # @NbdServerAddOptions:

Acked-by: Markus Armbruster <armbru@redhat.com>
Stefan Hajnoczi Oct. 2, 2020, 1:47 p.m. UTC | #2
On Fri, Oct 02, 2020 at 07:32:39AM +0200, Markus Armbruster wrote:
> Stefan Hajnoczi <stefanha@redhat.com> writes:
> 
> > Allow the number of queues to be configured using --export
> > vhost-user-blk,num-queues=N. This setting should match the QEMU --device
> > vhost-user-blk-pci,num-queues=N setting, but QEMU's vhost-user-blk.c lowers
> > its own value if the vhost-user-blk backend offers fewer queues than
> > QEMU requested.
> >
> > The vhost-user-blk-server.c code is already capable of multi-queue. All
> > virtqueue processing runs in the same AioContext. No new locking is
> > needed.
> >
> > Add the num-queues=N option and set the VIRTIO_BLK_F_MQ feature bit.
> > Note that the feature bit only announces the presence of the num_queues
> > configuration space field. It does not promise that there is more than 1
> > virtqueue, so we can set it unconditionally.
> >
> > I tested multi-queue by running a random read fio test with numjobs=4 on
> > an -smp 4 guest. After the benchmark finished, the guest /proc/interrupts
> > file showed activity on all 4 virtio-blk MSI-X interrupts, and the
> > /sys/block/vda/mq/ directory showed that Linux blk-mq had 4 queues
> > configured.
> >
> > An automated test is included in the next commit.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  qapi/block-export.json               |  6 +++++-
> >  block/export/vhost-user-blk-server.c | 24 ++++++++++++++++++------
> >  2 files changed, 23 insertions(+), 7 deletions(-)
> >
> > diff --git a/qapi/block-export.json b/qapi/block-export.json
> > index a793e34af9..17020de257 100644
> > --- a/qapi/block-export.json
> > +++ b/qapi/block-export.json
> > @@ -93,11 +93,15 @@
> >  #        SocketAddress types are supported. Passed fds must be UNIX domain
> >  #        sockets.
> >  # @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
> > +# @num-queues: Number of request virtqueues. Must be greater than 0. Defaults
> > +#              to 1.
> >  #
> >  # Since: 5.2
> >  ##
> >  { 'struct': 'BlockExportOptionsVhostUserBlk',
> > -  'data': { 'addr': 'SocketAddress', '*logical-block-size': 'size' } }
> > +  'data': { 'addr': 'SocketAddress',
> > +	    '*logical-block-size': 'size',
> 
> Tab damage.

Oops, thanks! I have updated my editor configuration to use 4-space
indents for .json files :).

> > +            '*num-queues': 'uint16'} }
> 
> Out of curiosity: what made you pick 16 bit unsigned?  net.json uses both
> 32 and 64 bit signed.  Odd...

The virtio-blk configuration space field for the queue count is 16-bit:

struct virtio_blk_config {
    __u16 num_queues;
};

Also, virtio-pci and virtio-ccw use 16-bit types for the queue count.
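
For reference, the PCI transport's common configuration layout also uses a
16-bit queue count (excerpt from Linux's include/uapi/linux/virtio_pci.h):

struct virtio_pci_common_cfg {
    ...
    __le16 msix_config;   /* read-write */
    __le16 num_queues;    /* read-only */
    ...
};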

Patch

diff --git a/qapi/block-export.json b/qapi/block-export.json
index a793e34af9..17020de257 100644
--- a/qapi/block-export.json
+++ b/qapi/block-export.json
@@ -93,11 +93,15 @@
 #        SocketAddress types are supported. Passed fds must be UNIX domain
 #        sockets.
 # @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
+# @num-queues: Number of request virtqueues. Must be greater than 0. Defaults
+#              to 1.
 #
 # Since: 5.2
 ##
 { 'struct': 'BlockExportOptionsVhostUserBlk',
-  'data': { 'addr': 'SocketAddress', '*logical-block-size': 'size' } }
+  'data': { 'addr': 'SocketAddress',
+	    '*logical-block-size': 'size',
+            '*num-queues': 'uint16'} }
 
 ##
 # @NbdServerAddOptions:
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 81072a5a46..bf84b45ecd 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -21,7 +21,7 @@
 #include "util/block-helpers.h"
 
 enum {
-    VHOST_USER_BLK_MAX_QUEUES = 1,
+    VHOST_USER_BLK_NUM_QUEUES_DEFAULT = 1,
 };
 struct virtio_blk_inhdr {
     unsigned char status;
@@ -242,6 +242,7 @@ static uint64_t vu_blk_get_features(VuDev *dev)
                1ull << VIRTIO_BLK_F_DISCARD |
                1ull << VIRTIO_BLK_F_WRITE_ZEROES |
                1ull << VIRTIO_BLK_F_CONFIG_WCE |
+               1ull << VIRTIO_BLK_F_MQ |
                1ull << VIRTIO_F_VERSION_1 |
                1ull << VIRTIO_RING_F_INDIRECT_DESC |
                1ull << VIRTIO_RING_F_EVENT_IDX |
@@ -334,7 +335,9 @@ static void blk_aio_detach(void *opaque)
 
 static void
 vu_blk_initialize_config(BlockDriverState *bs,
-                           struct virtio_blk_config *config, uint32_t blk_size)
+                         struct virtio_blk_config *config,
+                         uint32_t blk_size,
+                         uint16_t num_queues)
 {
     config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
     config->blk_size = blk_size;
@@ -342,7 +345,7 @@ vu_blk_initialize_config(BlockDriverState *bs,
     config->seg_max = 128 - 2;
     config->min_io_size = 1;
     config->opt_io_size = 1;
-    config->num_queues = VHOST_USER_BLK_MAX_QUEUES;
+    config->num_queues = num_queues;
     config->max_discard_sectors = 32768;
     config->max_discard_seg = 1;
     config->discard_sector_alignment = config->blk_size >> 9;
@@ -364,6 +367,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     BlockExportOptionsVhostUserBlk *vu_opts = &opts->u.vhost_user_blk;
     Error *local_err = NULL;
     uint64_t logical_block_size;
+    uint16_t num_queues = VHOST_USER_BLK_NUM_QUEUES_DEFAULT;
 
     vexp->writable = opts->writable;
     vexp->blkcfg.wce = 0;
@@ -381,16 +385,24 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     }
     vexp->blk_size = logical_block_size;
     blk_set_guest_block_size(exp->blk, logical_block_size);
+
+    if (vu_opts->has_num_queues) {
+        num_queues = vu_opts->num_queues;
+    }
+    if (num_queues == 0) {
+        error_setg(errp, "num-queues must be greater than 0");
+        return -EINVAL;
+    }
+
     vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
-                               logical_block_size);
+                             logical_block_size, num_queues);
 
     blk_set_allow_aio_context_change(exp->blk, true);
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vexp);
 
     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
-                                 VHOST_USER_BLK_MAX_QUEUES, &vu_blk_iface,
-                                 errp)) {
+                                 num_queues, &vu_blk_iface, errp)) {
         blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
                                         blk_aio_detach, vexp);
         return -EADDRNOTAVAIL;