From patchwork Tue Apr 18 16:21:36 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Alex Bennée <alex.bennee@linaro.org>
X-Patchwork-Id: 674526
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: Alex Bennée, Daniel P. Berrangé, Eduardo Habkost, Viresh Kumar,
    Mathieu Poirier, "Gonglei (Arei)", Markus Armbruster, Erik Schilling,
    Jason Wang, Mark Cave-Ayland, Gerd Hoffmann, virtio-fs@redhat.com,
    Marc-André Lureau, Paolo Bonzini, "Michael S. Tsirkin",
    Stefan Hajnoczi, Eric Blake
Subject: [PATCH v2 09/13] hw/virtio: derive vhost-user-rng from vhost-user-device
Date: Tue, 18 Apr 2023 17:21:36 +0100
Message-Id: <20230418162140.373219-10-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230418162140.373219-1-alex.bennee@linaro.org>
References: <20230418162140.373219-1-alex.bennee@linaro.org>

Now we can take advantage of our new base class and make vhost-user-rng
a much simpler boilerplate wrapper. Also as this doesn't require any
target specific hacks we only need to build the stubs once.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Mark Cave-Ayland

---
v2
  - new derivation layout
  - move directly to softmmu_virtio_ss
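For reference, the user-visible interface is unchanged by the
conversion; an illustrative invocation (backend daemon and socket path
are examples only, not part of this patch) still looks along the lines
of:

  # some external vhost-user rng daemon serving /tmp/rng.sock
  qemu-system-x86_64 \
    -object memory-backend-memfd,id=mem,size=4G,share=on \
    -numa node,memdev=mem \
    -chardev socket,path=/tmp/rng.sock,id=vrng0 \
    -device vhost-user-rng-pci,chardev=vrng0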
---
 include/hw/virtio/vhost-user-rng.h |  11 +-
 hw/virtio/vhost-user-rng.c         | 277 +++----------------------
 hw/virtio/meson.build              |   7 +-
 3 files changed, 28 insertions(+), 267 deletions(-)

diff --git a/include/hw/virtio/vhost-user-rng.h b/include/hw/virtio/vhost-user-rng.h
index ddd9f01eea..13139c0d9d 100644
--- a/include/hw/virtio/vhost-user-rng.h
+++ b/include/hw/virtio/vhost-user-rng.h
@@ -12,21 +12,14 @@
 #include "hw/virtio/virtio.h"
 #include "hw/virtio/vhost.h"
 #include "hw/virtio/vhost-user.h"
-#include "chardev/char-fe.h"
+#include "hw/virtio/vhost-user-device.h"
 
 #define TYPE_VHOST_USER_RNG "vhost-user-rng"
 OBJECT_DECLARE_SIMPLE_TYPE(VHostUserRNG, VHOST_USER_RNG)
 
 struct VHostUserRNG {
     /*< private >*/
-    VirtIODevice parent;
-    CharBackend chardev;
-    struct vhost_virtqueue *vhost_vq;
-    struct vhost_dev vhost_dev;
-    VhostUserState vhost_user;
-    VirtQueue *req_vq;
-    bool connected;
-
+    VHostUserBase parent;
     /*< public >*/
 };
 
diff --git a/hw/virtio/vhost-user-rng.c b/hw/virtio/vhost-user-rng.c
index efc54cd3fb..71d3991f93 100644
--- a/hw/virtio/vhost-user-rng.c
+++ b/hw/virtio/vhost-user-rng.c
@@ -3,7 +3,7 @@
  *
  * Copyright (c) 2021 Mathieu Poirier
  *
- * Implementation seriously tailored on vhost-user-i2c.c
+ * Simple wrapper of the generic vhost-user-device.
  *
  * SPDX-License-Identifier: GPL-2.0-or-later
  */
@@ -13,281 +13,46 @@
 #include "hw/qdev-properties.h"
 #include "hw/virtio/virtio-bus.h"
 #include "hw/virtio/vhost-user-rng.h"
-#include "qemu/error-report.h"
 #include "standard-headers/linux/virtio_ids.h"
 
-static const int feature_bits[] = {
-    VIRTIO_F_RING_RESET,
-    VHOST_INVALID_FEATURE_BIT
-};
-
-static void vu_rng_start(VirtIODevice *vdev)
-{
-    VHostUserRNG *rng = VHOST_USER_RNG(vdev);
-    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
-    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
-    int ret;
-    int i;
-
-    if (!k->set_guest_notifiers) {
-        error_report("binding does not support guest notifiers");
-        return;
-    }
-
-    ret = vhost_dev_enable_notifiers(&rng->vhost_dev, vdev);
-    if (ret < 0) {
-        error_report("Error enabling host notifiers: %d", -ret);
-        return;
-    }
-
-    ret = k->set_guest_notifiers(qbus->parent, rng->vhost_dev.nvqs, true);
-    if (ret < 0) {
-        error_report("Error binding guest notifier: %d", -ret);
-        goto err_host_notifiers;
-    }
-
-    rng->vhost_dev.acked_features = vdev->guest_features;
-    ret = vhost_dev_start(&rng->vhost_dev, vdev, true);
-    if (ret < 0) {
-        error_report("Error starting vhost-user-rng: %d", -ret);
-        goto err_guest_notifiers;
-    }
-
-    /*
-     * guest_notifier_mask/pending not used yet, so just unmask
-     * everything here. virtio-pci will do the right thing by
-     * enabling/disabling irqfd.
-     */
-    for (i = 0; i < rng->vhost_dev.nvqs; i++) {
-        vhost_virtqueue_mask(&rng->vhost_dev, vdev, i, false);
-    }
-
-    return;
-
-err_guest_notifiers:
-    k->set_guest_notifiers(qbus->parent, rng->vhost_dev.nvqs, false);
-err_host_notifiers:
-    vhost_dev_disable_notifiers(&rng->vhost_dev, vdev);
-}
-
-static void vu_rng_stop(VirtIODevice *vdev)
-{
-    VHostUserRNG *rng = VHOST_USER_RNG(vdev);
-    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
-    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
-    int ret;
-
-    if (!k->set_guest_notifiers) {
-        return;
-    }
-
-    vhost_dev_stop(&rng->vhost_dev, vdev, true);
-
-    ret = k->set_guest_notifiers(qbus->parent, rng->vhost_dev.nvqs, false);
-    if (ret < 0) {
-        error_report("vhost guest notifier cleanup failed: %d", ret);
-        return;
-    }
-
-    vhost_dev_disable_notifiers(&rng->vhost_dev, vdev);
-}
-
-static void vu_rng_set_status(VirtIODevice *vdev, uint8_t status)
-{
-    VHostUserRNG *rng = VHOST_USER_RNG(vdev);
-    bool should_start = virtio_device_should_start(vdev, status);
-
-    if (vhost_dev_is_started(&rng->vhost_dev) == should_start) {
-        return;
-    }
-
-    if (should_start) {
-        vu_rng_start(vdev);
-    } else {
-        vu_rng_stop(vdev);
-    }
-}
-
-static uint64_t vu_rng_get_features(VirtIODevice *vdev,
-                                    uint64_t requested_features, Error **errp)
-{
-    VHostUserRNG *rng = VHOST_USER_RNG(vdev);
-
-    return vhost_get_features(&rng->vhost_dev, feature_bits,
-                              requested_features);
-}
-
-static void vu_rng_handle_output(VirtIODevice *vdev, VirtQueue *vq)
-{
-    /*
-     * Not normally called; it's the daemon that handles the queue;
-     * however virtio's cleanup path can call this.
-     */
-}
-
-static void vu_rng_guest_notifier_mask(VirtIODevice *vdev, int idx, bool mask)
-{
-    VHostUserRNG *rng = VHOST_USER_RNG(vdev);
-
-    vhost_virtqueue_mask(&rng->vhost_dev, vdev, idx, mask);
-}
-
-static bool vu_rng_guest_notifier_pending(VirtIODevice *vdev, int idx)
-{
-    VHostUserRNG *rng = VHOST_USER_RNG(vdev);
-
-    return vhost_virtqueue_pending(&rng->vhost_dev, idx);
-}
-
-static void vu_rng_connect(DeviceState *dev)
-{
-    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
-    VHostUserRNG *rng = VHOST_USER_RNG(vdev);
-
-    if (rng->connected) {
-        return;
-    }
-
-    rng->connected = true;
-
-    /* restore vhost state */
-    if (virtio_device_started(vdev, vdev->status)) {
-        vu_rng_start(vdev);
-    }
-}
-
-static void vu_rng_disconnect(DeviceState *dev)
-{
-    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
-    VHostUserRNG *rng = VHOST_USER_RNG(vdev);
-
-    if (!rng->connected) {
-        return;
-    }
-
-    rng->connected = false;
-
-    if (vhost_dev_is_started(&rng->vhost_dev)) {
-        vu_rng_stop(vdev);
-    }
-}
-
-static void vu_rng_event(void *opaque, QEMUChrEvent event)
-{
-    DeviceState *dev = opaque;
-
-    switch (event) {
-    case CHR_EVENT_OPENED:
-        vu_rng_connect(dev);
-        break;
-    case CHR_EVENT_CLOSED:
-        vu_rng_disconnect(dev);
-        break;
-    case CHR_EVENT_BREAK:
-    case CHR_EVENT_MUX_IN:
-    case CHR_EVENT_MUX_OUT:
-        /* Ignore */
-        break;
-    }
-}
-
-static void vu_rng_device_realize(DeviceState *dev, Error **errp)
-{
-    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
-    VHostUserRNG *rng = VHOST_USER_RNG(dev);
-    int ret;
-
-    if (!rng->chardev.chr) {
-        error_setg(errp, "missing chardev");
-        return;
-    }
-
-    if (!vhost_user_init(&rng->vhost_user, &rng->chardev, errp)) {
-        return;
-    }
-
-    virtio_init(vdev, VIRTIO_ID_RNG, 0);
-
-    rng->req_vq = virtio_add_queue(vdev, 4, vu_rng_handle_output);
-    if (!rng->req_vq) {
-        error_setg_errno(errp, -1, "virtio_add_queue() failed");
-        goto virtio_add_queue_failed;
-    }
-
-    rng->vhost_dev.nvqs = 1;
-    rng->vhost_dev.vqs = g_new0(struct vhost_virtqueue, rng->vhost_dev.nvqs);
-    ret = vhost_dev_init(&rng->vhost_dev, &rng->vhost_user,
-                         VHOST_BACKEND_TYPE_USER, 0, errp);
-    if (ret < 0) {
-        error_setg_errno(errp, -ret, "vhost_dev_init() failed");
-        goto vhost_dev_init_failed;
-    }
-
-    qemu_chr_fe_set_handlers(&rng->chardev, NULL, NULL, vu_rng_event, NULL,
-                             dev, NULL, true);
-
-    return;
-
-vhost_dev_init_failed:
-    g_free(rng->vhost_dev.vqs);
-    virtio_delete_queue(rng->req_vq);
-virtio_add_queue_failed:
-    virtio_cleanup(vdev);
-    vhost_user_cleanup(&rng->vhost_user);
-}
-
-static void vu_rng_device_unrealize(DeviceState *dev)
-{
-    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
-    VHostUserRNG *rng = VHOST_USER_RNG(dev);
-    struct vhost_virtqueue *vhost_vqs = rng->vhost_dev.vqs;
-
-    vu_rng_set_status(vdev, 0);
-
-    vhost_dev_cleanup(&rng->vhost_dev);
-    g_free(vhost_vqs);
-    virtio_delete_queue(rng->req_vq);
-    virtio_cleanup(vdev);
-    vhost_user_cleanup(&rng->vhost_user);
-}
-
-static struct vhost_dev *vu_rng_get_vhost(VirtIODevice *vdev)
-{
-    VHostUserRNG *rng = VHOST_USER_RNG(vdev);
-    return &rng->vhost_dev;
-}
-
 static const VMStateDescription vu_rng_vmstate = {
     .name = "vhost-user-rng",
     .unmigratable = 1,
 };
 
-static Property vu_rng_properties[] = {
-    DEFINE_PROP_CHR("chardev", VHostUserRNG, chardev),
+static Property vrng_properties[] = {
+    DEFINE_PROP_CHR("chardev", VHostUserBase, chardev),
     DEFINE_PROP_END_OF_LIST(),
 };
 
+static void vu_rng_base_realize(DeviceState *dev, Error **errp)
+{
+    VHostUserBase *vub = VHOST_USER_BASE(dev);
+    VHostUserBaseClass *vubs = VHOST_USER_BASE_GET_CLASS(dev);
+
+    /* Fixed for RNG */
+    vub->virtio_id = VIRTIO_ID_RNG;
+    vub->num_vqs = 1;
+
+    vubs->parent_realize(dev, errp);
+}
+
 static void vu_rng_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
-    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
+    VHostUserBaseClass *vubc = VHOST_USER_BASE_CLASS(klass);
 
-    device_class_set_props(dc, vu_rng_properties);
     dc->vmsd = &vu_rng_vmstate;
-    set_bit(DEVICE_CATEGORY_INPUT, dc->categories);
+    device_class_set_props(dc, vrng_properties);
+    device_class_set_parent_realize(dc, vu_rng_base_realize,
+                                    &vubc->parent_realize);
 
-    vdc->realize = vu_rng_device_realize;
-    vdc->unrealize = vu_rng_device_unrealize;
-    vdc->get_features = vu_rng_get_features;
-    vdc->set_status = vu_rng_set_status;
-    vdc->guest_notifier_mask = vu_rng_guest_notifier_mask;
-    vdc->guest_notifier_pending = vu_rng_guest_notifier_pending;
-    vdc->get_vhost = vu_rng_get_vhost;
+    set_bit(DEVICE_CATEGORY_INPUT, dc->categories);
 }
 
 static const TypeInfo vu_rng_info = {
     .name = TYPE_VHOST_USER_RNG,
-    .parent = TYPE_VIRTIO_DEVICE,
+    .parent = TYPE_VHOST_USER_BASE,
     .instance_size = sizeof(VHostUserRNG),
     .class_init = vu_rng_class_init,
 };
diff --git a/hw/virtio/meson.build b/hw/virtio/meson.build
index c0a86b94ae..de442dcb96 100644
--- a/hw/virtio/meson.build
+++ b/hw/virtio/meson.build
@@ -3,6 +3,11 @@ softmmu_virtio_ss.add(files('virtio-bus.c'))
 softmmu_virtio_ss.add(when: 'CONFIG_VIRTIO_PCI', if_true: files('virtio-pci.c'))
 softmmu_virtio_ss.add(when: 'CONFIG_VIRTIO_MMIO', if_true: files('virtio-mmio.c'))
 
+# VirtIO stubs which don't need building per-guest
+softmmu_virtio_ss.add(when: 'CONFIG_VHOST_USER_RNG', if_true: files('vhost-user-rng.c'))
+softmmu_virtio_ss.add(when: ['CONFIG_VIRTIO_PCI', 'CONFIG_VHOST_USER_RNG'],
+                      if_true: files('vhost-user-rng-pci.c'))
+
 specific_virtio_ss = ss.source_set()
 specific_virtio_ss.add(files('virtio.c'))
 specific_virtio_ss.add(files('virtio-config-io.c', 'virtio-qmp.c'))
@@ -32,7 +37,6 @@ specific_virtio_ss.add(when: 'CONFIG_VIRTIO_RNG', if_true: files('virtio-rng.c')
 specific_virtio_ss.add(when: 'CONFIG_VIRTIO_IOMMU', if_true: files('virtio-iommu.c'))
 specific_virtio_ss.add(when: 'CONFIG_VIRTIO_MEM', if_true: files('virtio-mem.c'))
 specific_virtio_ss.add(when: 'CONFIG_VHOST_USER_I2C', if_true: files('vhost-user-i2c.c'))
-specific_virtio_ss.add(when: 'CONFIG_VHOST_USER_RNG', if_true: files('vhost-user-rng.c'))
 specific_virtio_ss.add(when: 'CONFIG_VHOST_USER_GPIO', if_true: files('vhost-user-gpio.c'))
 specific_virtio_ss.add(when: ['CONFIG_VIRTIO_PCI', 'CONFIG_VHOST_USER_GPIO'], if_true: files('vhost-user-gpio-pci.c'))
 specific_virtio_ss.add(when: 'CONFIG_VHOST_VDPA_DEV', if_true: files('vdpa-dev.c'))
@@ -43,7 +47,6 @@ virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_VSOCK', if_true: files('vhost-user-vs
 virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_BLK', if_true: files('vhost-user-blk-pci.c'))
 virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_I2C', if_true: files('vhost-user-i2c-pci.c'))
 virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_INPUT', if_true: files('vhost-user-input-pci.c'))
-virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_RNG', if_true: files('vhost-user-rng-pci.c'))
 virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_SCSI', if_true: files('vhost-user-scsi-pci.c'))
 virtio_pci_ss.add(when: 'CONFIG_VHOST_SCSI', if_true: files('vhost-scsi-pci.c'))
 virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_FS', if_true: files('vhost-user-fs-pci.c'))