From patchwork Thu Jul 2 08:21:22 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192187
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe, Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang, Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 01/22] vhost: Make _feature_ bits a property of vhost device
Date: Thu, 2 Jul 2020 13:51:22 +0530
Message-ID: <20200702082143.25259-2-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
X-Mailing-List: netdev@vger.kernel.org

No functional change intended. The feature bits defined in the virtio specification are associated with the virtio device, not with individual virtqueues. To reflect this correctly in the vhost backend, remove "acked_features" from struct vhost_virtqueue and add "features" to struct vhost_dev. This also makes the vhost side symmetrical to virtio in the guest.
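The shape of the change can be modeled in a few lines of userspace C. This is only an illustrative sketch of the before/after semantics (feature bits stored once per device instead of per virtqueue), not the kernel code itself:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch only: minimal stand-ins for the kernel structs involved.
 * Feature bit numbers below match the kernel's definitions. */
#define VHOST_F_LOG_ALL    26
#define VIRTIO_F_VERSION_1 32

struct vhost_dev {
    uint64_t features;          /* was: per-vq "acked_features" */
};

struct vhost_virtqueue {
    struct vhost_dev *dev;      /* each vq points back at its device */
};

/* After the patch, vhost_has_feature() takes the device directly. */
static bool vhost_has_feature(const struct vhost_dev *vdev, int bit)
{
    return vdev->features & (1ULL << bit);
}

/* Accepting features becomes a single device-wide store instead of a
 * loop over every virtqueue (locking omitted in this sketch). */
static void vhost_set_features(struct vhost_dev *vdev, uint64_t features)
{
    vdev->features = features;
}
```

A caller that previously asked a virtqueue for a feature now reaches it through `vq->dev`, which is why the conversions below read `vhost_has_feature(&n->dev, ...)`.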
Signed-off-by: Kishon Vijay Abraham I --- drivers/vhost/net.c | 7 ++++--- drivers/vhost/scsi.c | 22 ++++++++-------------- drivers/vhost/test.c | 14 +++++--------- drivers/vhost/vhost.c | 33 +++++++++++++++++++++------------ drivers/vhost/vhost.h | 6 +++--- drivers/vhost/vsock.c | 18 ++++++------------ 6 files changed, 47 insertions(+), 53 deletions(-) -- 2.17.1 diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 516519dcc8ff..437126219116 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -1137,9 +1137,9 @@ static void handle_rx(struct vhost_net *net) vhost_hlen = nvq->vhost_hlen; sock_hlen = nvq->sock_hlen; - vq_log = unlikely(vhost_has_feature(vq, VHOST_F_LOG_ALL)) ? + vq_log = unlikely(vhost_has_feature(&net->dev, VHOST_F_LOG_ALL)) ? vq->log : NULL; - mergeable = vhost_has_feature(vq, VIRTIO_NET_F_MRG_RXBUF); + mergeable = vhost_has_feature(&net->dev, VIRTIO_NET_F_MRG_RXBUF); do { sock_len = vhost_net_rx_peek_head_len(net, sock->sk, @@ -1633,6 +1633,7 @@ static int vhost_net_set_backend_features(struct vhost_net *n, u64 features) static int vhost_net_set_features(struct vhost_net *n, u64 features) { size_t vhost_hlen, sock_hlen, hdr_len; + struct vhost_dev *vdev = &n->dev; int i; hdr_len = (features & ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | @@ -1658,9 +1659,9 @@ static int vhost_net_set_features(struct vhost_net *n, u64 features) goto out_unlock; } + vdev->features = features; for (i = 0; i < VHOST_NET_VQ_MAX; ++i) { mutex_lock(&n->vqs[i].vq.mutex); - n->vqs[i].vq.acked_features = features; n->vqs[i].vhost_hlen = vhost_hlen; n->vqs[i].sock_hlen = sock_hlen; mutex_unlock(&n->vqs[i].vq.mutex); diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 8b104f76f324..f5138379659e 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -921,7 +921,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq) int ret, prot_bytes, c = 0; u16 lun; u8 task_attr; - bool t10_pi = vhost_has_feature(vq, VIRTIO_SCSI_F_T10_PI); + 
bool t10_pi = vhost_has_feature(&vs->dev, VIRTIO_SCSI_F_T10_PI); void *cdb; mutex_lock(&vq->mutex); @@ -1573,26 +1573,20 @@ vhost_scsi_clear_endpoint(struct vhost_scsi *vs, static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features) { - struct vhost_virtqueue *vq; - int i; + struct vhost_dev *vdev = &vs->dev; if (features & ~VHOST_SCSI_FEATURES) return -EOPNOTSUPP; - mutex_lock(&vs->dev.mutex); + mutex_lock(&vdev->mutex); if ((features & (1 << VHOST_F_LOG_ALL)) && - !vhost_log_access_ok(&vs->dev)) { - mutex_unlock(&vs->dev.mutex); + !vhost_log_access_ok(vdev)) { + mutex_unlock(&vdev->mutex); return -EFAULT; } - for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) { - vq = &vs->vqs[i].vq; - mutex_lock(&vq->mutex); - vq->acked_features = features; - mutex_unlock(&vq->mutex); - } - mutex_unlock(&vs->dev.mutex); + vdev->features = features; + mutex_unlock(&vdev->mutex); return 0; } @@ -1789,7 +1783,7 @@ vhost_scsi_do_plug(struct vhost_scsi_tpg *tpg, vq = &vs->vqs[VHOST_SCSI_VQ_EVT].vq; mutex_lock(&vq->mutex); - if (vhost_has_feature(vq, VIRTIO_SCSI_F_HOTPLUG)) + if (vhost_has_feature(&vs->dev, VIRTIO_SCSI_F_HOTPLUG)) vhost_scsi_send_evt(vs, tpg, lun, VIRTIO_SCSI_T_TRANSPORT_RESET, reason); mutex_unlock(&vq->mutex); diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c index 9a3a09005e03..6518b48c0633 100644 --- a/drivers/vhost/test.c +++ b/drivers/vhost/test.c @@ -247,19 +247,15 @@ static long vhost_test_reset_owner(struct vhost_test *n) static int vhost_test_set_features(struct vhost_test *n, u64 features) { - struct vhost_virtqueue *vq; + struct vhost_dev *vdev = &n->dev; - mutex_lock(&n->dev.mutex); + mutex_lock(&vdev->mutex); if ((features & (1 << VHOST_F_LOG_ALL)) && - !vhost_log_access_ok(&n->dev)) { - mutex_unlock(&n->dev.mutex); + !vhost_log_access_ok(vdev)) { + mutex_unlock(&vdev->mutex); return -EFAULT; } - vq = &n->vqs[VHOST_TEST_VQ]; - mutex_lock(&vq->mutex); - vq->acked_features = features; - mutex_unlock(&vq->mutex); - mutex_unlock(&n->dev.mutex); + 
mutex_unlock(&vdev->mutex); return 0; } diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 21a59b598ed8..3c2633fb519d 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -104,12 +104,14 @@ static long vhost_get_vring_endian(struct vhost_virtqueue *vq, u32 idx, static void vhost_init_is_le(struct vhost_virtqueue *vq) { + struct vhost_dev *vdev = vq->dev; + /* Note for legacy virtio: user_be is initialized at reset time * according to the host endianness. If userspace does not set an * explicit endianness, the default behavior is native endian, as * expected by legacy virtio. */ - vq->is_le = vhost_has_feature(vq, VIRTIO_F_VERSION_1) || !vq->user_be; + vq->is_le = vhost_has_feature(vdev, VIRTIO_F_VERSION_1) || !vq->user_be; } #else static void vhost_disable_cross_endian(struct vhost_virtqueue *vq) @@ -129,7 +131,9 @@ static long vhost_get_vring_endian(struct vhost_virtqueue *vq, u32 idx, static void vhost_init_is_le(struct vhost_virtqueue *vq) { - vq->is_le = vhost_has_feature(vq, VIRTIO_F_VERSION_1) + struct vhost_dev *vdev = vq->dev; + + vq->is_le = vhost_has_feature(vdev, VIRTIO_F_VERSION_1) || virtio_legacy_is_little_endian(); } #endif /* CONFIG_VHOST_CROSS_ENDIAN_LEGACY */ @@ -310,7 +314,6 @@ static void vhost_vq_reset(struct vhost_dev *dev, vq->log_used = false; vq->log_addr = -1ull; vq->private_data = NULL; - vq->acked_features = 0; vq->acked_backend_features = 0; vq->log_base = NULL; vq->error_ctx = NULL; @@ -428,8 +431,9 @@ EXPORT_SYMBOL_GPL(vhost_exceeds_weight); static size_t vhost_get_avail_size(struct vhost_virtqueue *vq, unsigned int num) { + struct vhost_dev *vdev = vq->dev; size_t event __maybe_unused = - vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX) ? 2 : 0; + vhost_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX) ? 
2 : 0; return sizeof(*vq->avail) + sizeof(*vq->avail->ring) * num + event; @@ -438,8 +442,9 @@ static size_t vhost_get_avail_size(struct vhost_virtqueue *vq, static size_t vhost_get_used_size(struct vhost_virtqueue *vq, unsigned int num) { + struct vhost_dev *vdev = vq->dev; size_t event __maybe_unused = - vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX) ? 2 : 0; + vhost_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX) ? 2 : 0; return sizeof(*vq->used) + sizeof(*vq->used->ring) * num + event; @@ -468,6 +473,7 @@ void vhost_dev_init(struct vhost_dev *dev, dev->iotlb = NULL; dev->mm = NULL; dev->worker = NULL; + dev->features = 0; dev->iov_limit = iov_limit; dev->weight = weight; dev->byte_weight = byte_weight; @@ -738,14 +744,15 @@ static inline void __user *vhost_vq_meta_fetch(struct vhost_virtqueue *vq, static bool memory_access_ok(struct vhost_dev *d, struct vhost_iotlb *umem, int log_all) { + bool log; int i; + log = log_all || vhost_has_feature(d, VHOST_F_LOG_ALL); + for (i = 0; i < d->nvqs; ++i) { bool ok; - bool log; mutex_lock(&d->vqs[i]->mutex); - log = log_all || vhost_has_feature(d->vqs[i], VHOST_F_LOG_ALL); /* If ring is inactive, will check when it's enabled. */ if (d->vqs[i]->private_data) ok = vq_memory_access_ok(d->vqs[i]->log_base, @@ -1329,8 +1336,10 @@ EXPORT_SYMBOL_GPL(vhost_log_access_ok); static bool vq_log_access_ok(struct vhost_virtqueue *vq, void __user *log_base) { + struct vhost_dev *vdev = vq->dev; + return vq_memory_access_ok(log_base, vq->umem, - vhost_has_feature(vq, VHOST_F_LOG_ALL)) && + vhost_has_feature(vdev, VHOST_F_LOG_ALL)) && (!vq->log_used || log_access_ok(log_base, vq->log_addr, vhost_get_used_size(vq, vq->num))); } @@ -2376,11 +2385,11 @@ static bool vhost_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq) * interrupts. 
*/ smp_mb(); - if (vhost_has_feature(vq, VIRTIO_F_NOTIFY_ON_EMPTY) && + if (vhost_has_feature(dev, VIRTIO_F_NOTIFY_ON_EMPTY) && unlikely(vq->avail_idx == vq->last_avail_idx)) return true; - if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) { + if (!vhost_has_feature(dev, VIRTIO_RING_F_EVENT_IDX)) { __virtio16 flags; if (vhost_get_avail_flags(vq, &flags)) { vq_err(vq, "Failed to get flags"); @@ -2459,7 +2468,7 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq) if (!(vq->used_flags & VRING_USED_F_NO_NOTIFY)) return false; vq->used_flags &= ~VRING_USED_F_NO_NOTIFY; - if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) { + if (!vhost_has_feature(dev, VIRTIO_RING_F_EVENT_IDX)) { r = vhost_update_used_flags(vq); if (r) { vq_err(vq, "Failed to enable notification at %p: %d\n", @@ -2496,7 +2505,7 @@ void vhost_disable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq) if (vq->used_flags & VRING_USED_F_NO_NOTIFY) return; vq->used_flags |= VRING_USED_F_NO_NOTIFY; - if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) { + if (!vhost_has_feature(dev, VIRTIO_RING_F_EVENT_IDX)) { r = vhost_update_used_flags(vq); if (r) vq_err(vq, "Failed to enable notification at %p: %d\n", diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h index f8403bd46b85..5d1d00363e79 100644 --- a/drivers/vhost/vhost.h +++ b/drivers/vhost/vhost.h @@ -111,7 +111,6 @@ struct vhost_virtqueue { struct vhost_iotlb *umem; struct vhost_iotlb *iotlb; void *private_data; - u64 acked_features; u64 acked_backend_features; /* Log write descriptors */ void __user *log_base; @@ -140,6 +139,7 @@ struct vhost_dev { struct mm_struct *mm; struct mutex mutex; struct vhost_virtqueue **vqs; + u64 features; int nvqs; struct eventfd_ctx *log_ctx; struct llist_head work_list; @@ -258,9 +258,9 @@ static inline void *vhost_vq_get_backend(struct vhost_virtqueue *vq) return vq->private_data; } -static inline bool vhost_has_feature(struct vhost_virtqueue *vq, int bit) +static inline bool 
vhost_has_feature(struct vhost_dev *vdev, int bit) { - return vq->acked_features & (1ULL << bit); + return vdev->features & (1ULL << bit); } static inline bool vhost_backend_has_feature(struct vhost_virtqueue *vq, int bit) diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c index fb4e944c4d0d..8317ad026e3d 100644 --- a/drivers/vhost/vsock.c +++ b/drivers/vhost/vsock.c @@ -757,26 +757,20 @@ static int vhost_vsock_set_cid(struct vhost_vsock *vsock, u64 guest_cid) static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features) { - struct vhost_virtqueue *vq; - int i; + struct vhost_dev *vdev = &vsock->dev; if (features & ~VHOST_VSOCK_FEATURES) return -EOPNOTSUPP; - mutex_lock(&vsock->dev.mutex); + mutex_lock(&vdev->mutex); if ((features & (1 << VHOST_F_LOG_ALL)) && - !vhost_log_access_ok(&vsock->dev)) { - mutex_unlock(&vsock->dev.mutex); + !vhost_log_access_ok(vdev)) { + mutex_unlock(&vdev->mutex); return -EFAULT; } - for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { - vq = &vsock->vqs[i]; - mutex_lock(&vq->mutex); - vq->acked_features = features; - mutex_unlock(&vq->mutex); - } - mutex_unlock(&vsock->dev.mutex); + vdev->features = features; + mutex_unlock(&vdev->mutex); return 0; }

From patchwork Thu Jul 2 08:21:23 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192188
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe, Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang, Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 02/22] vhost: Introduce standard Linux driver model in VHOST
Date: Thu, 2 Jul 2020 13:51:23 +0530
Message-ID: <20200702082143.25259-3-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
X-Mailing-List: netdev@vger.kernel.org

Introduce the standard Linux driver model in VHOST. This facilitates using multiple VHOST drivers (net, scsi, etc.) over different VHOST devices, whether they are accessed over MMIO (e.g. PCIe or NTB), through kernel pointers (e.g. platform devices), or through userspace pointers.
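The core of the new bus is the id-table match between a vhost device and a vhost driver. The following userspace sketch mirrors the `vhost_id_match()`/`vhost_dev_match()` logic added in this patch (simplified: it drops the `struct device`/`struct device_driver` plumbing and keeps only the matching rule):

```c
#include <assert.h>
#include <stdint.h>

/* Wildcard value, as in the virtio/vhost id tables. */
#define VIRTIO_DEV_ANY_ID 0xffffffff

struct vhost_device_id {
    uint32_t device;
    uint32_t vendor;
};

struct vhost_dev {
    struct vhost_device_id id;
};

/* A device matches a table entry when both the device and vendor ids
 * match, with VIRTIO_DEV_ANY_ID acting as a wildcard for either field. */
static int vhost_id_match(const struct vhost_dev *vdev,
                          const struct vhost_device_id *id)
{
    if (id->device != vdev->id.device && id->device != VIRTIO_DEV_ANY_ID)
        return 0;
    return id->vendor == VIRTIO_DEV_ANY_ID || id->vendor == vdev->id.vendor;
}

/* The driver's id table is terminated by an entry whose "device" field
 * is zero, exactly as vhost_dev_match() walks it in the patch. */
static int vhost_dev_match(const struct vhost_dev *vdev,
                           const struct vhost_device_id *ids)
{
    int i;

    for (i = 0; ids[i].device; i++)
        if (vhost_id_match(vdev, &ids[i]))
            return 1;
    return 0;
}
```

In the kernel, a successful match leads the bus core to call `vhost_dev_probe()`, which binds the driver to the device and invokes the driver's `probe()` callback.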
Signed-off-by: Kishon Vijay Abraham I --- drivers/vhost/net.c | 3 +- drivers/vhost/scsi.c | 2 +- drivers/vhost/test.c | 3 +- drivers/vhost/vdpa.c | 2 +- drivers/vhost/vhost.c | 157 ++++++++++++++++++++++- drivers/vhost/vsock.c | 2 +- include/linux/mod_devicetable.h | 6 + {drivers/vhost => include/linux}/vhost.h | 22 +++- tools/virtio/virtio_test.c | 2 +- 9 files changed, 190 insertions(+), 9 deletions(-) rename {drivers/vhost => include/linux}/vhost.h (93%) -- 2.17.1 diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 437126219116..3c57c345cbfd 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include @@ -33,7 +34,7 @@ #include #include -#include "vhost.h" +#include static int experimental_zcopytx = 0; module_param(experimental_zcopytx, int, 0444); diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index f5138379659e..06898b7ce7dd 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -47,7 +47,7 @@ #include #include -#include "vhost.h" +#include #define VHOST_SCSI_VERSION "v0.1" #define VHOST_SCSI_NAMELEN 256 diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c index 6518b48c0633..07508526182f 100644 --- a/drivers/vhost/test.c +++ b/drivers/vhost/test.c @@ -14,9 +14,10 @@ #include #include #include +#include +#include #include "test.h" -#include "vhost.h" /* Max number of bytes transferred before requeueing the job. * Using this limit prevents one virtqueue from starving others. 
*/ diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c index 0968361e3b77..61d90100db89 100644 --- a/drivers/vhost/vdpa.c +++ b/drivers/vhost/vdpa.c @@ -22,7 +22,7 @@ #include #include -#include "vhost.h" +#include enum { VHOST_VDPA_FEATURES = diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 3c2633fb519d..fa2bc6e68be2 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -32,8 +32,6 @@ #include #include -#include "vhost.h" - static ushort max_mem_regions = 64; module_param(max_mem_regions, ushort, 0444); MODULE_PARM_DESC(max_mem_regions, @@ -43,6 +41,9 @@ module_param(max_iotlb_entries, int, 0444); MODULE_PARM_DESC(max_iotlb_entries, "Maximum number of iotlb entries. (default: 2048)"); +static DEFINE_IDA(vhost_index_ida); +static DEFINE_MUTEX(vhost_index_mutex); + enum { VHOST_MEMORY_F_LOG = 0x1, }; @@ -2557,14 +2558,166 @@ struct vhost_msg_node *vhost_dequeue_msg(struct vhost_dev *dev, } EXPORT_SYMBOL_GPL(vhost_dequeue_msg); +static inline int vhost_id_match(const struct vhost_dev *vdev, + const struct vhost_device_id *id) +{ + if (id->device != vdev->id.device && id->device != VIRTIO_DEV_ANY_ID) + return 0; + + return id->vendor == VIRTIO_DEV_ANY_ID || id->vendor == vdev->id.vendor; +} + +static int vhost_dev_match(struct device *dev, struct device_driver *drv) +{ + struct vhost_driver *driver = to_vhost_driver(drv); + struct vhost_dev *vdev = to_vhost_dev(dev); + const struct vhost_device_id *ids; + int i; + + ids = driver->id_table; + for (i = 0; ids[i].device; i++) + if (vhost_id_match(vdev, &ids[i])) + return 1; + + return 0; +} + +static int vhost_dev_probe(struct device *dev) +{ + struct vhost_driver *driver = to_vhost_driver(dev->driver); + struct vhost_dev *vdev = to_vhost_dev(dev); + + if (!driver->probe) + return -ENODEV; + + vdev->driver = driver; + + return driver->probe(vdev); +} + +static int vhost_dev_remove(struct device *dev) +{ + struct vhost_driver *driver = to_vhost_driver(dev->driver); + struct vhost_dev *vdev 
= to_vhost_dev(dev); + int ret = 0; + + if (driver->remove) + ret = driver->remove(vdev); + vdev->driver = NULL; + + return ret; +} + +static struct bus_type vhost_bus_type = { + .name = "vhost", + .match = vhost_dev_match, + .probe = vhost_dev_probe, + .remove = vhost_dev_remove, +}; + +/** + * vhost_register_driver() - Register a vhost driver + * @driver: Vhost driver that has to be registered + * + * Register a vhost driver. + */ +int vhost_register_driver(struct vhost_driver *driver) +{ + int ret; + + driver->driver.bus = &vhost_bus_type; + + ret = driver_register(&driver->driver); + if (ret) + return ret; + + return 0; +} +EXPORT_SYMBOL_GPL(vhost_register_driver); + +/** + * vhost_unregister_driver() - Unregister a vhost driver + * @driver: Vhost driver that has to be un-registered + * + * Unregister a vhost driver. + */ +void vhost_unregister_driver(struct vhost_driver *driver) +{ + driver_unregister(&driver->driver); +} +EXPORT_SYMBOL_GPL(vhost_unregister_driver); + +/** + * vhost_register_device() - Register vhost device + * @vdev: Vhost device that has to be registered + * + * Allocate a ID and register vhost device. + */ +int vhost_register_device(struct vhost_dev *vdev) +{ + struct device *dev = &vdev->dev; + int ret; + + mutex_lock(&vhost_index_mutex); + ret = ida_simple_get(&vhost_index_ida, 0, 0, GFP_KERNEL); + mutex_unlock(&vhost_index_mutex); + if (ret < 0) + return ret; + + vdev->index = ret; + dev->bus = &vhost_bus_type; + device_initialize(dev); + + dev_set_name(dev, "vhost%u", ret); + + ret = device_add(dev); + if (ret) { + put_device(dev); + goto err; + } + + return 0; + +err: + mutex_lock(&vhost_index_mutex); + ida_simple_remove(&vhost_index_ida, vdev->index); + mutex_unlock(&vhost_index_mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(vhost_register_device); + +/** + * vhost_unregister_device() - Un-register vhost device + * @vdev: Vhost device that has to be un-registered + * + * Un-register vhost device and free the allocated ID. 
+ */ +void vhost_unregister_device(struct vhost_dev *vdev) +{ + device_unregister(&vdev->dev); + mutex_lock(&vhost_index_mutex); + ida_simple_remove(&vhost_index_ida, vdev->index); + mutex_unlock(&vhost_index_mutex); +} +EXPORT_SYMBOL_GPL(vhost_unregister_device); static int __init vhost_init(void) { + int ret; + + ret = bus_register(&vhost_bus_type); + if (ret) { + pr_err("failed to register vhost bus --> %d\n", ret); + return ret; + } + return 0; } static void __exit vhost_exit(void) { + bus_unregister(&vhost_bus_type); } module_init(vhost_init); diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c index 8317ad026e3d..5753048b7405 100644 --- a/drivers/vhost/vsock.c +++ b/drivers/vhost/vsock.c @@ -17,7 +17,7 @@ #include #include -#include "vhost.h" +#include #define VHOST_VSOCK_DEFAULT_HOST_CID 2 /* Max number of bytes transferred before requeueing the job. diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h index 8d764aab29de..c7df018989e3 100644 --- a/include/linux/mod_devicetable.h +++ b/include/linux/mod_devicetable.h @@ -430,6 +430,12 @@ struct virtio_device_id { }; #define VIRTIO_DEV_ANY_ID 0xffffffff +/* VHOST */ +struct vhost_device_id { + __u32 device; + __u32 vendor; +}; + /* * For Hyper-V devices we use the device guid as the id. 
*/ diff --git a/drivers/vhost/vhost.h b/include/linux/vhost.h similarity index 93% rename from drivers/vhost/vhost.h rename to include/linux/vhost.h index 5d1d00363e79..16c374a8fa12 100644 --- a/drivers/vhost/vhost.h +++ b/include/linux/vhost.h @@ -3,7 +3,6 @@ #define _VHOST_H #include -#include #include #include #include @@ -13,6 +12,7 @@ #include #include #include +#include struct vhost_work; typedef void (*vhost_work_fn_t)(struct vhost_work *work); @@ -135,7 +135,20 @@ struct vhost_msg_node { struct list_head node; }; +struct vhost_driver { + struct device_driver driver; + struct vhost_device_id *id_table; + int (*probe)(struct vhost_dev *dev); + int (*remove)(struct vhost_dev *dev); +}; + +#define to_vhost_driver(drv) (container_of((drv), struct vhost_driver, driver)) + struct vhost_dev { + struct device dev; + struct vhost_driver *driver; + struct vhost_device_id id; + int index; struct mm_struct *mm; struct mutex mutex; struct vhost_virtqueue **vqs; @@ -158,6 +171,13 @@ struct vhost_dev { struct vhost_iotlb_msg *msg); }; +#define to_vhost_dev(d) container_of((d), struct vhost_dev, dev) + +int vhost_register_driver(struct vhost_driver *driver); +void vhost_unregister_driver(struct vhost_driver *driver); +int vhost_register_device(struct vhost_dev *vdev); +void vhost_unregister_device(struct vhost_dev *vdev); + bool vhost_exceeds_weight(struct vhost_virtqueue *vq, int pkts, int total_len); void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs, int nvqs, int iov_limit, int weight, int byte_weight, diff --git a/tools/virtio/virtio_test.c b/tools/virtio/virtio_test.c index b427def67e7e..b13434d6c976 100644 --- a/tools/virtio/virtio_test.c +++ b/tools/virtio/virtio_test.c @@ -13,9 +13,9 @@ #include #include #include -#include #include #include +#include #include "../../drivers/vhost/test.h" /* Unused */ From patchwork Thu Jul 2 08:21:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Kishon Vijay Abraham I X-Patchwork-Id: 192189 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp1230052ilg; Thu, 2 Jul 2020 01:22:24 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyUGWOACAnRHSXMoB6i1jFU0NmltCoV2zjxznQwotP4TI9fFF/yjIMCa84yloqfZsBm/MDf X-Received: by 2002:a17:906:6d56:: with SMTP id a22mr27436073ejt.440.1593678144725; Thu, 02 Jul 2020 01:22:24 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1593678144; cv=none; d=google.com; s=arc-20160816; b=QnweWHWCMhOMyUEn/l4FMm8LwB12H16icQAyB5ebdPv7W+ijbm84lPOodXRBI3GsZj hAv+SJP66QjmvaSTKXjf7OigdsCmS2H3KN3cUPPlSuTC60QPis6sCLM51YRtmhlZRuMZ IM2G42mzMt5CTPVacCk4QOaLhZQZpVzIl9aCak5OAHbpdm2d+QpEN1Lc1M1LlMjmifFY X8bORBkQgZK3e6mjp8MrqOhFoA8yHfRXPgBVT8rmxfn/jVHwgDkL3HAlfkrtUhRBstHD 2rUFQMsGB2OI0f0hZEVCzLqIo7CNOhZwDfwBHBWUQAclLVnLWmW2Pjqx+lLYT5dLOxsU dntg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=TQCRsl2Wf/cBX19r3WcTQDbCBoCPjZmiqMgNeLy0j10=; b=FlQYHDKFqiUx8e2YUICMBj7U5ToddQIwVOR+aC1i6/3jBPzZr46oxrjv0I3QKAwPGH GmwcpJP8xr2+1hQdYBHzJEcol20keS+zidiySEGXWjUwBTIlN2BtDL8YiZA3qd7fla3c hJkLmOqWAA7lnEjozewE6CGqbZPsLX0CW+3Mcsho5OcSM+hxBZlVm0D7DVUGpQr/72MI Ss1i9HMXGBWwVFDmwXyiXt9Nvf352BnKmzP4+DBsDg2ErWWM7vaH+dB7hx9+376MLIm0 R2GK1skncqc+73SY/NUmKbrvd7QeHvA8UKQit4Z4BOUl6/TIbj3eaEYklAgNksJXfYfu 0jRw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@ti.com header.s=ti-com-17Q1 header.b=NSF4RCGe; spf=pass (google.com: domain of netdev-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=netdev-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=NONE dis=NONE) header.from=ti.com Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe, Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang, Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 03/22] vhost: Add ops for the VHOST driver to configure VHOST device
Date: Thu, 2 Jul 2020 13:51:24 +0530
Message-ID: <20200702082143.25259-4-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>

Add "vhost_config_ops" to *struct vhost_dev* so that the VHOST driver can configure the VHOST device, and add a facility for the VHOST device to notify the VHOST driver (whenever VIRTIO sets a new status or finalizes features). This is in preparation for using the same vhost_driver across different VHOST devices (such as PCIe, NTB, or a platform device).
Signed-off-by: Kishon Vijay Abraham I --- drivers/vhost/vhost.c | 185 ++++++++++++++++++++++++++++++++++++++++++ include/linux/vhost.h | 57 +++++++++++++ 2 files changed, 242 insertions(+) -- 2.17.1 diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index fa2bc6e68be2..f959abb0b1bb 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -2558,6 +2558,190 @@ struct vhost_msg_node *vhost_dequeue_msg(struct vhost_dev *dev, } EXPORT_SYMBOL_GPL(vhost_dequeue_msg); +/** + * vhost_create_vqs() - Invoke vhost_config_ops to create virtqueue + * @vdev: Vhost device that provides create_vqs() callback to create virtqueue + * @nvqs: Number of vhost virtqueues to be created + * @num_bufs: The number of buffers that should be supported by the vhost + * virtqueue (number of descriptors in the vhost virtqueue) + * @vqs: Pointers to all the created vhost virtqueues + * @callbacks: Callback functions associated with each virtqueue + * @names: Names associated with each virtqueue + * + * Wrapper that invokes vhost_config_ops to create virtqueue. + */ +int vhost_create_vqs(struct vhost_dev *vdev, unsigned int nvqs, + unsigned int num_bufs, struct vhost_virtqueue *vqs[], + vhost_vq_callback_t *callbacks[], + const char * const names[]) +{ + int ret; + + if (IS_ERR_OR_NULL(vdev)) + return -EINVAL; + + if (!vdev->ops || !vdev->ops->create_vqs) + return -EINVAL; + + mutex_lock(&vdev->mutex); + ret = vdev->ops->create_vqs(vdev, nvqs, num_bufs, vqs, callbacks, + names); + mutex_unlock(&vdev->mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(vhost_create_vqs); + +/** + * vhost_del_vqs - Invoke vhost_config_ops to delete the created virtqueues + * @vdev: Vhost device that provides del_vqs() callback to delete virtqueue + * + * Wrapper that invokes vhost_config_ops to delete all the virtqueues + * associated with the vhost device.
+ */ +void vhost_del_vqs(struct vhost_dev *vdev) +{ + if (IS_ERR_OR_NULL(vdev)) + return; + + if (!vdev->ops || !vdev->ops->del_vqs) + return; + + mutex_lock(&vdev->mutex); + vdev->ops->del_vqs(vdev); + mutex_unlock(&vdev->mutex); +} +EXPORT_SYMBOL_GPL(vhost_del_vqs); + +/** + * vhost_write - Invoke vhost_config_ops to write data to buffer provided + * by remote virtio driver + * @vdev: Vhost device that provides write() callback to write data + * @vhost_dst: Buffer address in the remote device provided by the remote + * virtio driver + * @src: Buffer address in the local device provided by the vhost client driver + * @len: Length of the data to be copied from @src to @vhost_dst + * + * Wrapper that invokes vhost_config_ops to write data to buffer provided by + * remote virtio driver from buffer provided by vhost client driver. + */ +int vhost_write(struct vhost_dev *vdev, u64 vhost_dst, void *src, int len) +{ + if (IS_ERR_OR_NULL(vdev)) + return -EINVAL; + + if (!vdev->ops || !vdev->ops->write) + return -EINVAL; + + return vdev->ops->write(vdev, vhost_dst, src, len); +} +EXPORT_SYMBOL_GPL(vhost_write); + +/** + * vhost_read - Invoke vhost_config_ops to read data from buffers provided by + * remote virtio driver + * @vdev: Vhost device that provides read() callback to read data + * @dst: Buffer address in the local device provided by the vhost client driver + * @vhost_src: Buffer address in the remote device provided by the remote + * virtio driver + * @len: Length of the data to be copied from @vhost_src to @dst + * + * Wrapper that invokes vhost_config_ops to read data from buffers provided by + * remote virtio driver to the address provided by vhost client driver.
+ */ +int vhost_read(struct vhost_dev *vdev, void *dst, u64 vhost_src, int len) +{ + if (IS_ERR_OR_NULL(vdev)) + return -EINVAL; + + if (!vdev->ops || !vdev->ops->read) + return -EINVAL; + + return vdev->ops->read(vdev, dst, vhost_src, len); +} +EXPORT_SYMBOL_GPL(vhost_read); + +/** + * vhost_set_status - Invoke vhost_config_ops to set vhost device status + * @vdev: Vhost device that provides set_status() callback to set device status + * @status: Vhost device status configured by vhost client driver + * + * Wrapper that invokes vhost_config_ops to set vhost device status. + */ +int vhost_set_status(struct vhost_dev *vdev, u8 status) +{ + int ret; + + if (IS_ERR_OR_NULL(vdev)) + return -EINVAL; + + if (!vdev->ops || !vdev->ops->set_status) + return -EINVAL; + + mutex_lock(&vdev->mutex); + ret = vdev->ops->set_status(vdev, status); + mutex_unlock(&vdev->mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(vhost_set_status); + +/** + * vhost_get_status - Invoke vhost_config_ops to get vhost device status + * @vdev: Vhost device that provides get_status() callback to get device status + * + * Wrapper that invokes vhost_config_ops to get vhost device status. + */ +u8 vhost_get_status(struct vhost_dev *vdev) +{ + u8 status; + + if (IS_ERR_OR_NULL(vdev)) + return -EINVAL; + + if (!vdev->ops || !vdev->ops->get_status) + return -EINVAL; + + mutex_lock(&vdev->mutex); + status = vdev->ops->get_status(vdev); + mutex_unlock(&vdev->mutex); + + return status; +} +EXPORT_SYMBOL_GPL(vhost_get_status); + +/** + * vhost_set_features - Invoke vhost_config_ops to set vhost device features + * @vdev: Vhost device that provides set_features() callback to set device + * features + * @device_features: Device features set by the vhost client driver + * + * Wrapper that invokes vhost_config_ops to set device features.
+ */ +int vhost_set_features(struct vhost_dev *vdev, u64 device_features) +{ + int ret; + + if (IS_ERR_OR_NULL(vdev)) + return -EINVAL; + + if (!vdev->ops || !vdev->ops->set_features) + return -EINVAL; + + mutex_lock(&vdev->mutex); + ret = vdev->ops->set_features(vdev, device_features); + mutex_unlock(&vdev->mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(vhost_set_features); + +/** + * vhost_register_notifier - Register notifier to receive notification from + * vhost device + * @vdev: Vhost device from which notification has to be received. + * @nb: Notifier block holding the callback function + * + * Invoked by vhost client to receive notification from vhost device. + */ +int vhost_register_notifier(struct vhost_dev *vdev, struct notifier_block *nb) +{ + return blocking_notifier_chain_register(&vdev->notifier, nb); +} +EXPORT_SYMBOL_GPL(vhost_register_notifier); + static inline int vhost_id_match(const struct vhost_dev *vdev, const struct vhost_device_id *id) { @@ -2669,6 +2853,7 @@ int vhost_register_device(struct vhost_dev *vdev) device_initialize(dev); dev_set_name(dev, "vhost%u", ret); + BLOCKING_INIT_NOTIFIER_HEAD(&vdev->notifier); ret = device_add(dev); if (ret) { diff --git a/include/linux/vhost.h b/include/linux/vhost.h index 16c374a8fa12..b22a19c66109 100644 --- a/include/linux/vhost.h +++ b/include/linux/vhost.h @@ -135,6 +135,37 @@ struct vhost_msg_node { struct list_head node; }; +enum vhost_notify_event { + NOTIFY_SET_STATUS, + NOTIFY_FINALIZE_FEATURES, + NOTIFY_RESET, +}; + +typedef void vhost_vq_callback_t(struct vhost_virtqueue *); +/** + * struct vhost_config_ops - set of function pointers for performing vhost + * device specific operation + * @create_vqs: ops to create vhost virtqueue + * @del_vqs: ops to delete vhost virtqueue + * @write: ops to write data to buffer provided by remote virtio driver + * @read: ops to read data from buffer provided by remote virtio driver + * @set_features: ops to set vhost device features + * @set_status: ops to set
vhost device status + * @get_status: ops to get vhost device status + */ +struct vhost_config_ops { + int (*create_vqs)(struct vhost_dev *vdev, unsigned int nvqs, + unsigned int num_bufs, struct vhost_virtqueue *vqs[], + vhost_vq_callback_t *callbacks[], + const char * const names[]); + void (*del_vqs)(struct vhost_dev *vdev); + int (*write)(struct vhost_dev *vdev, u64 vhost_dst, void *src, int len); + int (*read)(struct vhost_dev *vdev, void *dst, u64 vhost_src, int len); + int (*set_features)(struct vhost_dev *vdev, u64 device_features); + int (*set_status)(struct vhost_dev *vdev, u8 status); + u8 (*get_status)(struct vhost_dev *vdev); +}; + struct vhost_driver { struct device_driver driver; struct vhost_device_id *id_table; @@ -149,6 +180,8 @@ struct vhost_dev { struct vhost_driver *driver; struct vhost_device_id id; int index; + const struct vhost_config_ops *ops; + struct blocking_notifier_head notifier; struct mm_struct *mm; struct mutex mutex; struct vhost_virtqueue **vqs; @@ -173,11 +206,35 @@ struct vhost_dev { #define to_vhost_dev(d) container_of((d), struct vhost_dev, dev) +static inline void vhost_set_drvdata(struct vhost_dev *vdev, void *data) +{ + dev_set_drvdata(&vdev->dev, data); +} + +static inline void *vhost_get_drvdata(struct vhost_dev *vdev) +{ + return dev_get_drvdata(&vdev->dev); +} + int vhost_register_driver(struct vhost_driver *driver); void vhost_unregister_driver(struct vhost_driver *driver); int vhost_register_device(struct vhost_dev *vdev); void vhost_unregister_device(struct vhost_dev *vdev); +int vhost_create_vqs(struct vhost_dev *vdev, unsigned int nvqs, + unsigned int num_bufs, struct vhost_virtqueue *vqs[], + vhost_vq_callback_t *callbacks[], + const char * const names[]); +void vhost_del_vqs(struct vhost_dev *vdev); +int vhost_write(struct vhost_dev *vdev, u64 vhost_dst, void *src, int len); +int vhost_read(struct vhost_dev *vdev, void *dst, u64 vhost_src, int len); +int vhost_set_features(struct vhost_dev *vdev, u64 
device_features); +u64 vhost_get_features(struct vhost_dev *vdev); +int vhost_set_status(struct vhost_dev *vdev, u8 status); +u8 vhost_get_status(struct vhost_dev *vdev); + +int vhost_register_notifier(struct vhost_dev *vdev, struct notifier_block *nb); + bool vhost_exceeds_weight(struct vhost_virtqueue *vq, int pkts, int total_len); void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs, int nvqs, int iov_limit, int weight, int byte_weight,

From patchwork Thu Jul 2 08:21:25 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192207
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe, Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang, Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 04/22] vringh: Add helpers to access vring in MMIO
Date: Thu, 2 Jul 2020 13:51:25 +0530
Message-ID: <20200702082143.25259-5-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>

Add helpers to access a vring in memory-mapped IO. This requires using IO accessors to access the vring, and the addresses populated by the virtio driver (in the descriptors) are given directly to the vhost client driver. Even if vhost runs on a 32-bit system, it can access a 64-bit address provided by virtio if the vhost device supports translation.
This is in preparation for adding VHOST devices (PCIe Endpoint or Host in NTB) to access vrings created by VIRTIO devices (PCIe RC or Host in NTB) over memory mapped IO. Signed-off-by: Kishon Vijay Abraham I --- drivers/vhost/vringh.c | 332 +++++++++++++++++++++++++++++++++++++++++ include/linux/vringh.h | 46 ++++++ 2 files changed, 378 insertions(+) -- 2.17.1 diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c index ba8e0d6cfd97..b3f1910b99ec 100644 --- a/drivers/vhost/vringh.c +++ b/drivers/vhost/vringh.c @@ -5,6 +5,7 @@ * Since these may be in userspace, we use (inline) accessors. */ #include +#include #include #include #include @@ -188,6 +189,32 @@ static int move_to_indirect(const struct vringh *vrh, return 0; } +static int resize_mmiovec(struct vringh_mmiov *iov, gfp_t gfp) +{ + unsigned int flag, new_num = (iov->max_num & ~VRINGH_IOV_ALLOCATED) * 2; + struct mmiovec *new; + + if (new_num < 8) + new_num = 8; + + flag = (iov->max_num & VRINGH_IOV_ALLOCATED); + if (flag) { + new = krealloc(iov->iov, new_num * sizeof(struct mmiovec), gfp); + } else { + new = kmalloc_array(new_num, sizeof(struct mmiovec), gfp); + if (new) { + memcpy(new, iov->iov, + iov->max_num * sizeof(struct mmiovec)); + flag = VRINGH_IOV_ALLOCATED; + } + } + if (!new) + return -ENOMEM; + iov->iov = new; + iov->max_num = (new_num | flag); + return 0; +} + static int resize_iovec(struct vringh_kiov *iov, gfp_t gfp) { struct kvec *new; @@ -261,6 +288,142 @@ static int slow_copy(struct vringh *vrh, void *dst, const void *src, return 0; } +static inline int +__vringh_mmiov(struct vringh *vrh, u16 i, struct vringh_mmiov *riov, + struct vringh_mmiov *wiov, + bool (*rcheck)(struct vringh *vrh, u64 addr, size_t *len, + struct vringh_range *range, + bool (*getrange)(struct vringh *, u64, + struct vringh_range *)), + bool (*getrange)(struct vringh *, u64, struct vringh_range *), + gfp_t gfp, + int (*copy)(const struct vringh *vrh, + void *dst, const void *src, size_t len)) +{ + int err, count = 0,
up_next, desc_max; + struct vring_desc desc, *descs; + struct vringh_range range = { -1ULL, 0 }, slowrange; + bool slow = false; + + /* We start traversing vring's descriptor table. */ + descs = vrh->vring.desc; + desc_max = vrh->vring.num; + up_next = -1; + + if (riov) { + riov->i = 0; + riov->used = 0; + } else if (wiov) { + wiov->i = 0; + wiov->used = 0; + } else { + /* You must want something! */ + WARN_ON(1); + } + + for (;;) { + u64 addr; + struct vringh_mmiov *iov; + size_t len; + + if (unlikely(slow)) + err = slow_copy(vrh, &desc, &descs[i], rcheck, getrange, + &slowrange, copy); + else + err = copy(vrh, &desc, &descs[i], sizeof(desc)); + if (unlikely(err)) + goto fail; + + if (unlikely(desc.flags & + cpu_to_vringh16(vrh, VRING_DESC_F_INDIRECT))) { + /* VRING_DESC_F_INDIRECT is not supported */ + err = -EINVAL; + goto fail; + } + + if (count++ == vrh->vring.num) { + vringh_bad("Descriptor loop in %p", descs); + err = -ELOOP; + goto fail; + } + + if (desc.flags & cpu_to_vringh16(vrh, VRING_DESC_F_WRITE)) { + iov = wiov; + } else { + iov = riov; + if (unlikely(wiov && wiov->i)) { + vringh_bad("Readable desc %p after writable", + &descs[i]); + err = -EINVAL; + goto fail; + } + } + + if (!iov) { + vringh_bad("Unexpected %s desc", + !wiov ? "writable" : "readable"); + err = -EPROTO; + goto fail; + } + +again: + /* Make sure it's OK, and get offset. 
*/ + len = vringh32_to_cpu(vrh, desc.len); + if (!rcheck(vrh, vringh64_to_cpu(vrh, desc.addr), &len, &range, + getrange)) { + err = -EINVAL; + goto fail; + } + addr = vringh64_to_cpu(vrh, desc.addr) + range.offset; + + if (unlikely(iov->used == (iov->max_num & ~VRINGH_IOV_ALLOCATED))) { + err = resize_mmiovec(iov, gfp); + if (err) + goto fail; + } + + iov->iov[iov->used].iov_base = addr; + iov->iov[iov->used].iov_len = len; + iov->used++; + + if (unlikely(len != vringh32_to_cpu(vrh, desc.len))) { + desc.len = + cpu_to_vringh32(vrh, + vringh32_to_cpu(vrh, desc.len) + - len); + desc.addr = + cpu_to_vringh64(vrh, + vringh64_to_cpu(vrh, desc.addr) + + len); + goto again; + } + + if (desc.flags & cpu_to_vringh16(vrh, VRING_DESC_F_NEXT)) { + i = vringh16_to_cpu(vrh, desc.next); + } else { + /* Just in case we need to finish traversing above. */ + if (unlikely(up_next > 0)) { + i = return_from_indirect(vrh, &up_next, + &descs, &desc_max); + slow = false; + } else { + break; + } + } + + if (i >= desc_max) { + vringh_bad("Chained index %u > %u", i, desc_max); + err = -EINVAL; + goto fail; + } + } + + return 0; + +fail: + return err; +} + static inline int __vringh_iov(struct vringh *vrh, u16 i, struct vringh_kiov *riov, @@ -833,6 +996,175 @@ int vringh_need_notify_user(struct vringh *vrh) } EXPORT_SYMBOL(vringh_need_notify_user); +/* MMIO access helpers */ +static inline int getu16_mmio(const struct vringh *vrh, + u16 *val, const __virtio16 *p) +{ + *val = vringh16_to_cpu(vrh, readw(p)); + return 0; +} + +static inline int putu16_mmio(const struct vringh *vrh, __virtio16 *p, u16 val) +{ + writew(cpu_to_vringh16(vrh, val), p); + return 0; +} + +static inline int copydesc_mmio(const struct vringh *vrh, + void *dst, const void *src, size_t len) +{ + memcpy_fromio(dst, src, len); + return 0; +} + +static inline int putused_mmio(const struct vringh *vrh, + struct vring_used_elem *dst, + const struct vring_used_elem *src, + unsigned int num) +{ + memcpy_toio(dst, src, num * 
sizeof(*dst)); + return 0; +} + +/** + * vringh_init_mmio - initialize a vringh for an MMIO vring. + * @vrh: the vringh to initialize. + * @features: the feature bits for this ring. + * @num: the number of elements. + * @weak_barriers: true if we only need memory barriers, not I/O. + * @desc: the MMIO descriptor pointer. + * @avail: the MMIO avail pointer. + * @used: the MMIO used pointer. + * + * Returns an error if num is invalid. + */ +int vringh_init_mmio(struct vringh *vrh, u64 features, + unsigned int num, bool weak_barriers, + struct vring_desc *desc, + struct vring_avail *avail, + struct vring_used *used) +{ + /* Sane power of 2 please! */ + if (!num || num > 0xffff || (num & (num - 1))) { + vringh_bad("Bad ring size %u", num); + return -EINVAL; + } + + vrh->little_endian = (features & (1ULL << VIRTIO_F_VERSION_1)); + vrh->event_indices = (features & (1 << VIRTIO_RING_F_EVENT_IDX)); + vrh->weak_barriers = weak_barriers; + vrh->completed = 0; + vrh->last_avail_idx = 0; + vrh->last_used_idx = 0; + vrh->vring.num = num; + vrh->vring.desc = desc; + vrh->vring.avail = avail; + vrh->vring.used = used; + return 0; +} +EXPORT_SYMBOL(vringh_init_mmio); + +/** + * vringh_getdesc_mmio - get next available descriptor from MMIO ring. + * @vrh: the MMIO vring. + * @riov: where to put the readable descriptors (or NULL) + * @wiov: where to put the writable descriptors (or NULL) + * @head: head index we received, for passing to vringh_complete_mmio(). + * @gfp: flags for allocating larger riov/wiov. + * + * Returns 0 if there was no descriptor, 1 if there was, or -errno. + * + * Note that on error return, you can tell the difference between an + * invalid ring and a single invalid descriptor: in the former case, + * *head will be vrh->vring.num. You may be able to ignore an invalid + * descriptor, but there's not much you can do with an invalid ring. + * + * Note that you may need to clean up riov and wiov, even on error!
+ */ +int vringh_getdesc_mmio(struct vringh *vrh, + struct vringh_mmiov *riov, + struct vringh_mmiov *wiov, + u16 *head, + gfp_t gfp) +{ + int err; + + err = __vringh_get_head(vrh, getu16_mmio, &vrh->last_avail_idx); + if (err < 0) + return err; + + /* Empty... */ + if (err == vrh->vring.num) + return 0; + + *head = err; + err = __vringh_mmiov(vrh, *head, riov, wiov, no_range_check, NULL, + gfp, copydesc_mmio); + if (err) + return err; + + return 1; +} +EXPORT_SYMBOL(vringh_getdesc_mmio); + +/** + * vringh_complete_mmio - we've finished with descriptor, publish it. + * @vrh: the vring. + * @head: the head as filled in by vringh_getdesc_mmio. + * @len: the length of data we have written. + * + * You should check vringh_need_notify_mmio() after one or more calls + * to this function. + */ +int vringh_complete_mmio(struct vringh *vrh, u16 head, u32 len) +{ + struct vring_used_elem used; + + used.id = cpu_to_vringh32(vrh, head); + used.len = cpu_to_vringh32(vrh, len); + + return __vringh_complete(vrh, &used, 1, putu16_mmio, putused_mmio); +} +EXPORT_SYMBOL(vringh_complete_mmio); + +/** + * vringh_notify_enable_mmio - we want to know if something changes. + * @vrh: the vring. + * + * This always enables notifications, but returns false if there are + * now more buffers available in the vring. + */ +bool vringh_notify_enable_mmio(struct vringh *vrh) +{ + return __vringh_notify_enable(vrh, getu16_mmio, putu16_mmio); +} +EXPORT_SYMBOL(vringh_notify_enable_mmio); + +/** + * vringh_notify_disable_mmio - don't tell us if something changes. + * @vrh: the vring. + * + * This is our normal running state: we disable and then only enable when + * we're going to sleep. + */ +void vringh_notify_disable_mmio(struct vringh *vrh) +{ + __vringh_notify_disable(vrh, putu16_mmio); +} +EXPORT_SYMBOL(vringh_notify_disable_mmio); + +/** + * vringh_need_notify_mmio - must we tell the other side about used buffers? + * @vrh: the vring we've called vringh_complete_mmio() on. 
+ * + * Returns -errno or 0 if we don't need to tell the other side, 1 if we do. + */ +int vringh_need_notify_mmio(struct vringh *vrh) +{ + return __vringh_need_notify(vrh, getu16_mmio); +} +EXPORT_SYMBOL(vringh_need_notify_mmio); + /* Kernelspace access helpers. */ static inline int getu16_kern(const struct vringh *vrh, u16 *val, const __virtio16 *p) diff --git a/include/linux/vringh.h b/include/linux/vringh.h index 9e2763d7c159..0ba63a72b124 100644 --- a/include/linux/vringh.h +++ b/include/linux/vringh.h @@ -99,6 +99,23 @@ struct vringh_kiov { unsigned i, used, max_num; }; +struct mmiovec { + u64 iov_base; + size_t iov_len; +}; + +/** + * struct vringh_mmiov - mmiovec mangler. + * + * Mangles mmiovec in place, and restores it. + * Remaining data is iov + i, of used - i elements. + */ +struct vringh_mmiov { + struct mmiovec *iov; + size_t consumed; /* Within iov[i] */ + unsigned int i, used, max_num; +}; + /* Flag on max_num to indicate we're kmalloced. */ #define VRINGH_IOV_ALLOCATED 0x8000000 @@ -213,6 +230,35 @@ void vringh_notify_disable_kern(struct vringh *vrh); int vringh_need_notify_kern(struct vringh *vrh); +/* Helpers for MMIO vrings.
*/ +int vringh_init_mmio(struct vringh *vrh, u64 features, + unsigned int num, bool weak_barriers, + struct vring_desc *desc, + struct vring_avail *avail, + struct vring_used *used); + +static inline void vringh_mmiov_init(struct vringh_mmiov *mmiov, + struct mmiovec *mmiovec, unsigned int num) +{ + mmiov->used = 0; + mmiov->i = 0; + mmiov->consumed = 0; + mmiov->max_num = num; + mmiov->iov = mmiovec; +} + +int vringh_getdesc_mmio(struct vringh *vrh, + struct vringh_mmiov *riov, + struct vringh_mmiov *wiov, + u16 *head, + gfp_t gfp); + +int vringh_complete_mmio(struct vringh *vrh, u16 head, u32 len); + +bool vringh_notify_enable_mmio(struct vringh *vrh); +void vringh_notify_disable_mmio(struct vringh *vrh); +int vringh_need_notify_mmio(struct vringh *vrh); + /* Notify the guest about buffers added to the used ring */ static inline void vringh_notify(struct vringh *vrh) {

From patchwork Thu Jul 2 08:21:26 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192190
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1593678139; bh=Uhwq7t9xmJ09dppFbQNiaS41cCziAqDB/ZcJ4oUfEXo=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=UC+AmEhNh3aW86RcPHPX1bzomjjyeb4yGiNuHcVwomWqfxi27c57DQN6LFLaEaB03 X1y9lNs64MBrJRQQJ3r2u/Xgaust371pATFRpx89ycYyB/mfdagKqybKoItdeC7BrO K1wC2lsJ6nZLGWKJU+PzA0JtSsehQnRsA7rzMwD8= Received: from DLEE103.ent.ti.com (dlee103.ent.ti.com [157.170.170.33]) by lelv0266.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 0628MJW0065146 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Thu, 2 Jul 2020 03:22:19 -0500 Received: from DLEE104.ent.ti.com (157.170.170.34) by DLEE103.ent.ti.com (157.170.170.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Thu, 2 Jul 2020 03:22:18 -0500 Received: from lelv0327.itg.ti.com (10.180.67.183) by DLEE104.ent.ti.com (157.170.170.34) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Thu, 2 Jul 2020 03:22:18 -0500 Received: from a0393678ub.india.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0628LiYH006145; Thu, 2 Jul 2020 03:22:13 -0500 From: Kishon Vijay Abraham I To: Ohad Ben-Cohen , Bjorn Andersson , Jon Mason , Dave Jiang , Allen Hubbe , Lorenzo Pieralisi , Bjorn Helgaas , "Michael S. 
Tsirkin" , Jason Wang , Paolo Bonzini , Stefan Hajnoczi , Stefano Garzarella CC: , , , , , , , Subject: [RFC PATCH 05/22] vhost: Add MMIO helpers for operations on vhost virtqueue Date: Thu, 2 Jul 2020 13:51:26 +0530 Message-ID: <20200702082143.25259-6-kishon@ti.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200702082143.25259-1-kishon@ti.com> References: <20200702082143.25259-1-kishon@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Add helpers for VHOST drivers to read descriptor data from vhost_virtqueue for IN transfers or write descriptor data to vhost_virtqueue for OUT transfers respectively. Also add helpers to enable callback, disable callback and notify remote virtio for events on virtqueue. This adds helpers only for virtqueue in MMIO (helpers for virtqueue in kernel space and user space can be added later). Signed-off-by: Kishon Vijay Abraham I --- drivers/vhost/Kconfig | 1 + drivers/vhost/vhost.c | 292 ++++++++++++++++++++++++++++++++++++++++++ include/linux/vhost.h | 22 ++++ 3 files changed, 315 insertions(+) -- 2.17.1 diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig index c4f273793595..77e195a38469 100644 --- a/drivers/vhost/Kconfig +++ b/drivers/vhost/Kconfig @@ -24,6 +24,7 @@ config VHOST_DPN config VHOST tristate + select VHOST_RING select VHOST_IOTLB help This option is selected by any driver which needs to access diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index f959abb0b1bb..8a3ad4698393 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -2558,6 +2558,298 @@ struct vhost_msg_node *vhost_dequeue_msg(struct vhost_dev *dev, } EXPORT_SYMBOL_GPL(vhost_dequeue_msg); +/** + * vhost_virtqueue_disable_cb_mmio() - Write to used ring in virtio accessed + * using MMIO to stop notification + * @vq: vhost_virtqueue for which callbacks have to be disabled + * + * Write to used 
ring in virtio accessed using MMIO to stop sending notification + * to the vhost virtqueue. + */ +static void vhost_virtqueue_disable_cb_mmio(struct vhost_virtqueue *vq) +{ + struct vringh *vringh; + + vringh = &vq->vringh; + vringh_notify_disable_mmio(vringh); +} + +/** + * vhost_virtqueue_disable_cb() - Write to used ring in virtio to stop + * notification + * @vq: vhost_virtqueue for which callbacks have to be disabled + * + * Wrapper to write to used ring in virtio to stop sending notification + * to the vhost virtqueue. + */ +void vhost_virtqueue_disable_cb(struct vhost_virtqueue *vq) +{ + enum vhost_type type; + + type = vq->type; + + /* TODO: Add support for other VHOST TYPES */ + if (type == VHOST_TYPE_MMIO) + return vhost_virtqueue_disable_cb_mmio(vq); +} +EXPORT_SYMBOL_GPL(vhost_virtqueue_disable_cb); + +/** + * vhost_virtqueue_enable_cb_mmio() - Write to used ring in virtio accessed + * using MMIO to enable notification + * @vq: vhost_virtqueue for which callbacks have to be enabled + * + * Write to used ring in virtio accessed using MMIO to enable notification + * to the vhost virtqueue. + */ +static bool vhost_virtqueue_enable_cb_mmio(struct vhost_virtqueue *vq) +{ + struct vringh *vringh; + + vringh = &vq->vringh; + return vringh_notify_enable_mmio(vringh); +} + +/** + * vhost_virtqueue_enable_cb() - Write to used ring in virtio to enable + * notification + * @vq: vhost_virtqueue for which callbacks have to be enabled + * + * Wrapper to write to used ring in virtio to enable notification to the + * vhost virtqueue. 
+ */ +bool vhost_virtqueue_enable_cb(struct vhost_virtqueue *vq) +{ + enum vhost_type type; + + type = vq->type; + + /* TODO: Add support for other VHOST TYPES */ + if (type == VHOST_TYPE_MMIO) + return vhost_virtqueue_enable_cb_mmio(vq); + + return false; +} +EXPORT_SYMBOL_GPL(vhost_virtqueue_enable_cb); + +/** + * vhost_virtqueue_notify() - Send notification to the remote virtqueue + * @vq: vhost_virtqueue that sends the notification + * + * Invokes ->notify() callback to send notification to the remote virtqueue. + */ +void vhost_virtqueue_notify(struct vhost_virtqueue *vq) +{ + if (!vq->notify) + return; + + vq->notify(vq); +} +EXPORT_SYMBOL_GPL(vhost_virtqueue_notify); + +/** + * vhost_virtqueue_kick_mmio() - Check if the remote virtqueue has enabled + * notification (by reading available ring in virtio accessed using MMIO) + * before sending notification + * @vq: vhost_virtqueue that sends the notification + * + * Check if the remote virtqueue has enabled notification (by reading available + * ring in virtio accessed using MMIO) and then invoke vhost_virtqueue_notify() + * to send notification to the remote virtqueue. + */ +static void vhost_virtqueue_kick_mmio(struct vhost_virtqueue *vq) +{ + if (vringh_need_notify_mmio(&vq->vringh)) + vhost_virtqueue_notify(vq); +} + +/** + * vhost_virtqueue_kick() - Check if the remote virtqueue has enabled + * notification before sending notification + * @vq: vhost_virtqueue that sends the notification + * + * Wrapper to send notification to the remote virtqueue using + * vhost_virtqueue_kick_mmio() that checks if the remote virtqueue has + * enabled notification before sending the notification. 
+ */ +void vhost_virtqueue_kick(struct vhost_virtqueue *vq) +{ + enum vhost_type type; + + type = vq->type; + + /* TODO: Add support for other VHOST TYPES */ + if (type == VHOST_TYPE_MMIO) + return vhost_virtqueue_kick_mmio(vq); +} +EXPORT_SYMBOL_GPL(vhost_virtqueue_kick); + +/** + * vhost_virtqueue_callback() - Invoke vhost virtqueue callback provided by + * vhost client driver + * @vq: vhost_virtqueue for which the callback is invoked + * + * Invoked by the driver that creates vhost device when the remote virtio + * driver sends notification to this virtqueue. + */ +void vhost_virtqueue_callback(struct vhost_virtqueue *vq) +{ + if (!vq->callback) + return; + + vq->callback(vq); +} +EXPORT_SYMBOL_GPL(vhost_virtqueue_callback); + +/** + * vhost_virtqueue_get_outbuf_mmio() - Get the output buffer address by reading + * virtqueue descriptor accessed using MMIO + * @vq: vhost_virtqueue used to access the descriptor + * @head: head index for passing to vhost_virtqueue_put_buf() + * @len: Length of the buffer + * + * Get the output buffer address by reading virtqueue descriptor accessed using + * MMIO. + */ +static u64 vhost_virtqueue_get_outbuf_mmio(struct vhost_virtqueue *vq, + u16 *head, int *len) +{ + struct vringh_mmiov wiov; + struct mmiovec *mmiovec; + struct vringh *vringh; + int desc; + + vringh = &vq->vringh; + vringh_mmiov_init(&wiov, NULL, 0); + + desc = vringh_getdesc_mmio(vringh, NULL, &wiov, head, GFP_KERNEL); + if (!desc) + return 0; + mmiovec = &wiov.iov[0]; + + *len = mmiovec->iov_len; + return mmiovec->iov_base; +} + +/** + * vhost_virtqueue_get_outbuf() - Get the output buffer address by reading + * virtqueue descriptor + * @vq: vhost_virtqueue used to access the descriptor + * @head: head index for passing to vhost_virtqueue_put_buf() + * @len: Length of the buffer + * + * Wrapper to get the output buffer address by reading virtqueue descriptor. 
+ */ +u64 vhost_virtqueue_get_outbuf(struct vhost_virtqueue *vq, u16 *head, int *len) +{ + enum vhost_type type; + + type = vq->type; + + /* TODO: Add support for other VHOST TYPES */ + if (type == VHOST_TYPE_MMIO) + return vhost_virtqueue_get_outbuf_mmio(vq, head, len); + + return 0; +} +EXPORT_SYMBOL_GPL(vhost_virtqueue_get_outbuf); + +/** + * vhost_virtqueue_get_inbuf_mmio() - Get the input buffer address by reading + * virtqueue descriptor accessed using MMIO + * @vq: vhost_virtqueue used to access the descriptor + * @head: Head index for passing to vhost_virtqueue_put_buf() + * @len: Length of the buffer + * + * Get the input buffer address by reading virtqueue descriptor accessed using + * MMIO. + */ +static u64 vhost_virtqueue_get_inbuf_mmio(struct vhost_virtqueue *vq, + u16 *head, int *len) +{ + struct vringh_mmiov riov; + struct mmiovec *mmiovec; + struct vringh *vringh; + int desc; + + vringh = &vq->vringh; + vringh_mmiov_init(&riov, NULL, 0); + + desc = vringh_getdesc_mmio(vringh, &riov, NULL, head, GFP_KERNEL); + if (!desc) + return 0; + + mmiovec = &riov.iov[0]; + + *len = mmiovec->iov_len; + return mmiovec->iov_base; +} + +/** + * vhost_virtqueue_get_inbuf() - Get the input buffer address by reading + * virtqueue descriptor + * @vq: vhost_virtqueue used to access the descriptor + * @head: head index for passing to vhost_virtqueue_put_buf() + * @len: Length of the buffer + * + * Wrapper to get the input buffer address by reading virtqueue descriptor. 
+ */ +u64 vhost_virtqueue_get_inbuf(struct vhost_virtqueue *vq, u16 *head, int *len) +{ + enum vhost_type type; + + type = vq->type; + + /* TODO: Add support for other VHOST TYPES */ + if (type == VHOST_TYPE_MMIO) + return vhost_virtqueue_get_inbuf_mmio(vq, head, len); + + return 0; +} +EXPORT_SYMBOL_GPL(vhost_virtqueue_get_inbuf); + +/** + * vhost_virtqueue_put_buf_mmio() - Publish to the remote virtio (update + * used ring in virtio using MMIO) to indicate the buffer has been processed + * @vq: vhost_virtqueue used to update the used ring + * @head: Head index receive from vhost_virtqueue_get_*() + * @len: Length of the buffer + * + * Publish to the remote virtio (update used ring in virtio using MMIO) to + * indicate the buffer has been processed + */ +static void vhost_virtqueue_put_buf_mmio(struct vhost_virtqueue *vq, + u16 head, int len) +{ + struct vringh *vringh; + + vringh = &vq->vringh; + + vringh_complete_mmio(vringh, head, len); +} + +/** + * vhost_virtqueue_put_buf() - Publish to the remote virtio to indicate the + * buffer has been processed + * @vq: vhost_virtqueue used to update the used ring + * @head: Head index receive from vhost_virtqueue_get_*() + * @len: Length of the buffer + * + * Wrapper to publish to the remote virtio to indicate the buffer has been + * processed. 
+ */ +void vhost_virtqueue_put_buf(struct vhost_virtqueue *vq, u16 head, int len) +{ + enum vhost_type type; + + type = vq->type; + + /* TODO: Add support for other VHOST TYPES */ + if (type == VHOST_TYPE_MMIO) + return vhost_virtqueue_put_buf_mmio(vq, head, len); +} +EXPORT_SYMBOL_GPL(vhost_virtqueue_put_buf); + /** * vhost_create_vqs() - Invoke vhost_config_ops to create virtqueue * @vdev: Vhost device that provides create_vqs() callback to create virtqueue diff --git a/include/linux/vhost.h b/include/linux/vhost.h index b22a19c66109..8efb9829c1b1 100644 --- a/include/linux/vhost.h +++ b/include/linux/vhost.h @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -60,9 +61,20 @@ enum vhost_uaddr_type { VHOST_NUM_ADDRS = 3, }; +enum vhost_type { + VHOST_TYPE_UNKNOWN, + VHOST_TYPE_USER, + VHOST_TYPE_KERN, + VHOST_TYPE_MMIO, +}; + /* The virtqueue structure describes a queue attached to a device. */ struct vhost_virtqueue { struct vhost_dev *dev; + enum vhost_type type; + struct vringh vringh; + void (*callback)(struct vhost_virtqueue *vq); + void (*notify)(struct vhost_virtqueue *vq); /* The actual ring of buffers. 
*/ struct mutex mutex; @@ -235,6 +247,16 @@ u8 vhost_get_status(struct vhost_dev *vdev); int vhost_register_notifier(struct vhost_dev *vdev, struct notifier_block *nb); +u64 vhost_virtqueue_get_outbuf(struct vhost_virtqueue *vq, u16 *head, int *len); +u64 vhost_virtqueue_get_inbuf(struct vhost_virtqueue *vq, u16 *head, int *len); +void vhost_virtqueue_put_buf(struct vhost_virtqueue *vq, u16 head, int len); + +void vhost_virtqueue_disable_cb(struct vhost_virtqueue *vq); +bool vhost_virtqueue_enable_cb(struct vhost_virtqueue *vq); +void vhost_virtqueue_notify(struct vhost_virtqueue *vq); +void vhost_virtqueue_kick(struct vhost_virtqueue *vq); +void vhost_virtqueue_callback(struct vhost_virtqueue *vq); + bool vhost_exceeds_weight(struct vhost_virtqueue *vq, int pkts, int total_len); void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs, int nvqs, int iov_limit, int weight, int byte_weight, From patchwork Thu Jul 2 08:21:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kishon Vijay Abraham I X-Patchwork-Id: 192208 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp1232008ilg; Thu, 2 Jul 2020 01:25:45 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzHCvDK4dX3YCU9d0DVcs3HI3dsLmd1Y5E8OgSHPouFNkjWRBj+1P3nJ7T+pnAnM/extZaz X-Received: by 2002:a17:906:414c:: with SMTP id l12mr5710667ejk.417.1593678345697; Thu, 02 Jul 2020 01:25:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1593678345; cv=none; d=google.com; s=arc-20160816; b=jxJO3HtrVmnM2zeVeAtfhPegFQXhYT7gT0zsZPvD5Z3CtiGGIve0b+NdmxzPlgkRk2 RruUGLGnCeB0o5uQ2Hrn7Vy61l7MuGsl31PoSuuUWK+A5CWhhMbtDyCqMwk8kJr1g7zQ LJWZjqYjyqZI7HrRYzwuFts3qqQ4BTUyfafbVGk6XW2iwjz3eTQc7HvcSntlY5KaqZSJ 3mA7cUw1h3o7qhruuWrjq1cphm1X1xWRt42luaXhmdd0vBngBNrtGtAQUwM8BeRJoQ7h k4f7ah9nlhEzRwsMNB0mnVFNK1mj6JwMTdyeMMAq55izbTfVuujXs2Kp1pGGYfq/kFCr 1kHQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
h=list-id:precedence:sender:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=iPErWFbFZdjeoHTdwXdeAC7MLZFVAu9f66Iyz/tSVV8=; b=IxQ4/+PUXDuxX1kEpRZTK0s5TfnPzZtJGrFvIkXKD13xh6hSX8VZLukZ7j6NOJi/hX J+p9fJIrp2ep81SB9Hi3qftsvat9XAvmUgGhdYSBen5ibwCjVsMh/6J8nWweemK2qCgE SMcMnLSDK8nC34IRr2Wh7xC+k8mqd1zu/nBIIZdq7mq1mNcPaCn25xqZuWMSDpgqfwfo PSgwlC2Ayg1UNSUD8WWR3z/s2fv/ueY06rsnpEHb13TZeH2wmZ6UaUOLzk3gDlBvLC6G KNJuED2oRJyFT+bES3BB9855MUmOz+nyzoWwxuGwoIM89pwDn3p8kGDHp4LVsaWMKcDx JfKA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@ti.com header.s=ti-com-17Q1 header.b=bvbPsKl3; spf=pass (google.com: domain of netdev-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=netdev-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=NONE dis=NONE) header.from=ti.com Return-Path: Received: from vger.kernel.org (vger.kernel.org. [23.128.96.18]) by mx.google.com with ESMTP id w12si6631344edf.481.2020.07.02.01.25.45; Thu, 02 Jul 2020 01:25:45 -0700 (PDT) Received-SPF: pass (google.com: domain of netdev-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) client-ip=23.128.96.18; Authentication-Results: mx.google.com; dkim=pass header.i=@ti.com header.s=ti-com-17Q1 header.b=bvbPsKl3; spf=pass (google.com: domain of netdev-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=netdev-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=NONE dis=NONE) header.from=ti.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728377AbgGBIWl (ORCPT + 9 others); Thu, 2 Jul 2020 04:22:41 -0400 Received: from fllv0015.ext.ti.com ([198.47.19.141]:34448 "EHLO fllv0015.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726462AbgGBIWh (ORCPT ); Thu, 2 Jul 2020 04:22:37 -0400 Received: from fllv0035.itg.ti.com ([10.64.41.0]) by fllv0015.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0628MOtF017106; Thu, 2 Jul 2020 03:22:24 -0500 DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1593678144; bh=iPErWFbFZdjeoHTdwXdeAC7MLZFVAu9f66Iyz/tSVV8=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=bvbPsKl3Fy2ybJ3UjcuU2C/Nd0bvujeA2Xeo0G9/qvyq7+suLiKN+nx1nFNE9Feq9 rpb3JtSG5MuOPWvII6iQgtvTkprIJLsRK2mYRx2ptoaHYus+eDZ30JbL8oYvTafOnv iKPdWRCI5G0IITtNJwocmoHVna/71N5/5E0Z+BRk= Received: from DFLE104.ent.ti.com (dfle104.ent.ti.com [10.64.6.25]) by fllv0035.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0628MOYp086031; Thu, 2 Jul 2020 03:22:24 -0500 Received: from DFLE110.ent.ti.com (10.64.6.31) by DFLE104.ent.ti.com (10.64.6.25) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Thu, 2 Jul 2020 03:22:24 -0500 Received: from lelv0327.itg.ti.com (10.180.67.183) by DFLE110.ent.ti.com (10.64.6.31) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Thu, 2 Jul 2020 03:22:24 -0500 Received: from a0393678ub.india.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0628LiYI006145; Thu, 2 Jul 2020 03:22:19 -0500 From: Kishon Vijay Abraham I To: Ohad Ben-Cohen , Bjorn Andersson , Jon Mason , Dave Jiang , Allen Hubbe , Lorenzo Pieralisi , Bjorn Helgaas , "Michael S. 
Tsirkin" , Jason Wang , Paolo Bonzini , Stefan Hajnoczi , Stefano Garzarella CC: , , , , , , , Subject: [RFC PATCH 06/22] vhost: Introduce configfs entry for configuring VHOST Date: Thu, 2 Jul 2020 13:51:27 +0530 Message-ID: <20200702082143.25259-7-kishon@ti.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200702082143.25259-1-kishon@ti.com> References: <20200702082143.25259-1-kishon@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Create a configfs entry for each entry populated in "struct vhost_device_id" in the VHOST driver and create a configfs entry for each VHOST device. This is used to link VHOST driver to VHOST device (by assigning deviceID and vendorID to the VHOST device) and register VHOST device, thereby letting VHOST client driver to be selected in the userspace. Signed-off-by: Kishon Vijay Abraham I --- drivers/vhost/Makefile | 2 +- drivers/vhost/vhost.c | 63 +++++++ drivers/vhost/vhost_cfs.c | 354 ++++++++++++++++++++++++++++++++++++++ include/linux/vhost.h | 11 ++ 4 files changed, 429 insertions(+), 1 deletion(-) create mode 100644 drivers/vhost/vhost_cfs.c -- 2.17.1 diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile index f3e1897cce85..6520c820c896 100644 --- a/drivers/vhost/Makefile +++ b/drivers/vhost/Makefile @@ -13,7 +13,7 @@ obj-$(CONFIG_VHOST_RING) += vringh.o obj-$(CONFIG_VHOST_VDPA) += vhost_vdpa.o vhost_vdpa-y := vdpa.o -obj-$(CONFIG_VHOST) += vhost.o +obj-$(CONFIG_VHOST) += vhost.o vhost_cfs.o obj-$(CONFIG_VHOST_IOTLB) += vhost_iotlb.o vhost_iotlb-y := iotlb.o diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 8a3ad4698393..539619208783 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -3091,6 +3091,64 @@ static struct bus_type vhost_bus_type = { .remove = vhost_dev_remove, }; +/** + * vhost_remove_cfs() - Remove configfs directory for vhost driver + * 
@driver: Vhost driver for which configfs directory has to be removed + * + * Remove configfs directory for vhost driver. + */ +static void vhost_remove_cfs(struct vhost_driver *driver) +{ + struct config_group *driver_group, *group; + struct config_item *item, *tmp; + + driver_group = driver->group; + + list_for_each_entry_safe(item, tmp, &driver_group->cg_children, + ci_entry) { + group = to_config_group(item); + vhost_cfs_remove_driver_item(group); + } + + vhost_cfs_remove_driver_group(driver_group); +} + +/** + * vhost_add_cfs() - Add configfs directory for vhost driver + * @driver: Vhost driver for which configfs directory has to be added + * + * Add configfs directory for vhost driver. + */ +static int vhost_add_cfs(struct vhost_driver *driver) +{ + struct config_group *driver_group, *group; + const struct vhost_device_id *ids; + int ret, i; + + driver_group = vhost_cfs_add_driver_group(driver->driver.name); + if (IS_ERR(driver_group)) + return PTR_ERR(driver_group); + + driver->group = driver_group; + + ids = driver->id_table; + for (i = 0; ids[i].device; i++) { + group = vhost_cfs_add_driver_item(driver_group, ids[i].vendor, + ids[i].device); + if (IS_ERR(group)) { + ret = PTR_ERR(driver_group); + goto err; + } + } + + return 0; + +err: + vhost_remove_cfs(driver); + + return ret; +} + /** * vhost_register_driver() - Register a vhost driver * @driver: Vhost driver that has to be registered @@ -3107,6 +3165,10 @@ int vhost_register_driver(struct vhost_driver *driver) if (ret) return ret; + ret = vhost_add_cfs(driver); + if (ret) + return ret; + return 0; } EXPORT_SYMBOL_GPL(vhost_register_driver); @@ -3119,6 +3181,7 @@ EXPORT_SYMBOL_GPL(vhost_register_driver); */ void vhost_unregister_driver(struct vhost_driver *driver) { + vhost_remove_cfs(driver); driver_unregister(&driver->driver); } EXPORT_SYMBOL_GPL(vhost_unregister_driver); diff --git a/drivers/vhost/vhost_cfs.c b/drivers/vhost/vhost_cfs.c new file mode 100644 index 000000000000..ae46e71968f1 --- 
/dev/null +++ b/drivers/vhost/vhost_cfs.c @@ -0,0 +1,354 @@ +// SPDX-License-Identifier: GPL-2.0 +/** + * configfs to configure VHOST + * + * Copyright (C) 2020 Texas Instruments + * Author: Kishon Vijay Abraham I + */ + +#include +#include +#include +#include +#include + +/* VHOST driver like net, scsi etc., */ +static struct config_group *vhost_driver_group; + +/* VHOST device like PCIe EP, NTB etc., */ +static struct config_group *vhost_device_group; + +struct vhost_driver_item { + struct config_group group; + u32 vendor; + u32 device; +}; + +struct vhost_driver_group { + struct config_group group; +}; + +struct vhost_device_item { + struct config_group group; + struct vhost_dev *vdev; +}; + +static inline +struct vhost_driver_item *to_vhost_driver_item(struct config_item *item) +{ + return container_of(to_config_group(item), struct vhost_driver_item, + group); +} + +static inline +struct vhost_device_item *to_vhost_device_item(struct config_item *item) +{ + return container_of(to_config_group(item), struct vhost_device_item, + group); +} + +/** + * vhost_cfs_device_link() - Create softlink of driver directory to device + * directory + * @device_item: Represents configfs entry of vhost_dev + * @driver_item: Represents configfs of a particular entry of + * vhost_device_id table in vhost driver + * + * Bind a vhost driver to vhost device in order to assign a particular + * device ID and vendor ID + */ +static int vhost_cfs_device_link(struct config_item *device_item, + struct config_item *driver_item) +{ + struct vhost_driver_item *vdriver_item; + struct vhost_device_item *vdevice_item; + struct vhost_dev *vdev; + int ret; + + vdriver_item = to_vhost_driver_item(driver_item); + vdevice_item = to_vhost_device_item(device_item); + + vdev = vdevice_item->vdev; + vdev->id.device = vdriver_item->device; + vdev->id.vendor = vdriver_item->vendor; + + ret = vhost_register_device(vdev); + if (ret) + return ret; + + return 0; +} + +/** + * vhost_cfs_device_unlink() - Delete 
softlink of driver directory from device + * directory + * @device_item: Represents configfs entry of vhost_dev + * @driver_item: Represents configfs of a particular entry of + * vhost_device_id table in vhost driver + * + * Un-bind vhost driver from vhost device. + */ +static void vhost_cfs_device_unlink(struct config_item *device_item, + struct config_item *driver_item) +{ + struct vhost_driver_item *vdriver_item; + struct vhost_device_item *vdevice_item; + struct vhost_dev *vdev; + + vdriver_item = to_vhost_driver_item(driver_item); + vdevice_item = to_vhost_device_item(device_item); + + vdev = vdevice_item->vdev; + vhost_unregister_device(vdev); +} + +static struct configfs_item_operations vhost_cfs_device_item_ops = { + .allow_link = vhost_cfs_device_link, + .drop_link = vhost_cfs_device_unlink, +}; + +static const struct config_item_type vhost_cfs_device_item_type = { + .ct_item_ops = &vhost_cfs_device_item_ops, + .ct_owner = THIS_MODULE, +}; + +/** + * vhost_cfs_add_device_item() - Create configfs directory for new vhost_dev + * @vdev: vhost device for which configfs directory has to be created + * + * Create configfs directory for new vhost device. Drivers that create + * vhost device can invoke this API if they require the vhost device to + * be assigned a device ID and vendorID by the user. 
+ */ +struct config_group *vhost_cfs_add_device_item(struct vhost_dev *vdev) +{ + struct device *dev = &vdev->dev; + struct vhost_device_item *vdevice_item; + struct config_group *group; + const char *name; + int ret; + + vdevice_item = kzalloc(sizeof(*vdevice_item), GFP_KERNEL); + if (!vdevice_item) + return ERR_PTR(-ENOMEM); + + name = dev_name(dev->parent); + group = &vdevice_item->group; + config_group_init_type_name(group, name, &vhost_cfs_device_item_type); + + ret = configfs_register_group(vhost_device_group, group); + if (ret) + return ERR_PTR(ret); + + vdevice_item->vdev = vdev; + + return group; +} +EXPORT_SYMBOL(vhost_cfs_add_device_item); + +/** + * vhost_cfs_remove_device_item() - Remove configfs directory for the vhost_dev + * @vdev: vhost device for which configfs directory has to be removed + * + * Remove configfs directory for the vhost device. + */ +void vhost_cfs_remove_device_item(struct config_group *group) +{ + struct vhost_device_item *vdevice_item; + + if (!group) + return; + + vdevice_item = container_of(group, struct vhost_device_item, group); + configfs_unregister_group(&vdevice_item->group); + kfree(vdevice_item); +} +EXPORT_SYMBOL(vhost_cfs_remove_device_item); + +static const struct config_item_type vhost_driver_item_type = { + .ct_owner = THIS_MODULE, +}; + +/** + * vhost_cfs_add_driver_item() - Add configfs directory for an entry in + * vhost_device_id + * @driver_group: configfs directory corresponding to the vhost driver + * @vendor: vendor ID populated in vhost_device_id table by vhost driver + * @device: device ID populated in vhost_device_id table by vhost driver + * + * Add configfs directory for each entry in vhost_device_id populated by + * vhost driver. Store the device ID and vendor ID in a local data structure + * and use it when user links this directory with a vhost device configfs + * directory. 
+ */ +struct config_group * +vhost_cfs_add_driver_item(struct config_group *driver_group, u32 vendor, + u32 device) +{ + struct vhost_driver_item *vdriver_item; + struct config_group *group; + char name[20]; + int ret; + + vdriver_item = kzalloc(sizeof(*vdriver_item), GFP_KERNEL); + if (!vdriver_item) + return ERR_PTR(-ENOMEM); + + vdriver_item->vendor = vendor; + vdriver_item->device = device; + + snprintf(name, sizeof(name), "%08x:%08x", vendor, device); + group = &vdriver_item->group; + + config_group_init_type_name(group, name, &vhost_driver_item_type); + ret = configfs_register_group(driver_group, group); + if (ret) + return ERR_PTR(ret); + + return group; +} +EXPORT_SYMBOL(vhost_cfs_add_driver_item); + +/** + * vhost_cfs_remove_driver_item() - Remove configfs directory corresponding + * to an entry in vhost_device_id + * @group: Configfs group corresponding to an entry in vhost_device_id + * + * Remove configfs directory corresponding to an entry in vhost_device_id + */ +void vhost_cfs_remove_driver_item(struct config_group *group) +{ + struct vhost_driver_item *vdriver_item; + + if (!group) + return; + + vdriver_item = container_of(group, struct vhost_driver_item, group); + configfs_unregister_group(&vdriver_item->group); + kfree(vdriver_item); +} +EXPORT_SYMBOL(vhost_cfs_remove_driver_item); + +static const struct config_item_type vhost_driver_group_type = { + .ct_owner = THIS_MODULE, +}; + +/** + * vhost_cfs_add_driver_group() - Add configfs directory for vhost driver + * @name: Name of the vhost driver as populated in driver structure + * + * Add configfs directory for vhost driver. 
+ */ +struct config_group *vhost_cfs_add_driver_group(const char *name) +{ + struct vhost_driver_group *vdriver_group; + struct config_group *group; + + vdriver_group = kzalloc(sizeof(*vdriver_group), GFP_KERNEL); + if (!vdriver_group) + return ERR_PTR(-ENOMEM); + + group = &vdriver_group->group; + + config_group_init_type_name(group, name, &vhost_driver_group_type); + configfs_register_group(vhost_driver_group, group); + + return group; +} +EXPORT_SYMBOL(vhost_cfs_add_driver_group); + +/** + * vhost_cfs_remove_driver_group() - Remove configfs directory for vhost driver + * @group: Configfs group corresponding to the vhost driver + * + * Remove configfs directory for vhost driver. + */ +void vhost_cfs_remove_driver_group(struct config_group *group) +{ + if (IS_ERR_OR_NULL(group)) + return; + + configfs_unregister_default_group(group); +} +EXPORT_SYMBOL(vhost_cfs_remove_driver_group); + +static const struct config_item_type vhost_driver_type = { + .ct_owner = THIS_MODULE, +}; + +static const struct config_item_type vhost_device_type = { + .ct_owner = THIS_MODULE, +}; + +static const struct config_item_type vhost_type = { + .ct_owner = THIS_MODULE, +}; + +static struct configfs_subsystem vhost_cfs_subsys = { + .su_group = { + .cg_item = { + .ci_namebuf = "vhost", + .ci_type = &vhost_type, + }, + }, + .su_mutex = __MUTEX_INITIALIZER(vhost_cfs_subsys.su_mutex), +}; + +static int __init vhost_cfs_init(void) +{ + int ret; + struct config_group *root = &vhost_cfs_subsys.su_group; + + config_group_init(root); + + ret = configfs_register_subsystem(&vhost_cfs_subsys); + if (ret) { + pr_err("Error %d while registering subsystem %s\n", + ret, root->cg_item.ci_namebuf); + goto err; + } + + vhost_driver_group = + configfs_register_default_group(root, "vhost-client", + &vhost_driver_type); + if (IS_ERR(vhost_driver_group)) { + ret = PTR_ERR(vhost_driver_group); + pr_err("Error %d while registering channel group\n", + ret); + goto err_vhost_driver_group; + } + + vhost_device_group 
= + configfs_register_default_group(root, "vhost-transport", + &vhost_device_type); + if (IS_ERR(vhost_device_group)) { + ret = PTR_ERR(vhost_device_group); + pr_err("Error %d while registering virtproc group\n", + ret); + goto err_vhost_device_group; + } + + return 0; + +err_vhost_device_group: + configfs_unregister_default_group(vhost_driver_group); + +err_vhost_driver_group: + configfs_unregister_subsystem(&vhost_cfs_subsys); + +err: + return ret; +} +module_init(vhost_cfs_init); + +static void __exit vhost_cfs_exit(void) +{ + configfs_unregister_default_group(vhost_device_group); + configfs_unregister_default_group(vhost_driver_group); + configfs_unregister_subsystem(&vhost_cfs_subsys); +} +module_exit(vhost_cfs_exit); + +MODULE_DESCRIPTION("PCI VHOST CONFIGFS"); +MODULE_AUTHOR("Kishon Vijay Abraham I "); +MODULE_LICENSE("GPL v2"); diff --git a/include/linux/vhost.h b/include/linux/vhost.h index 8efb9829c1b1..be9341ffd266 100644 --- a/include/linux/vhost.h +++ b/include/linux/vhost.h @@ -2,6 +2,7 @@ #ifndef _VHOST_H #define _VHOST_H +#include #include #include #include @@ -181,6 +182,7 @@ struct vhost_config_ops { struct vhost_driver { struct device_driver driver; struct vhost_device_id *id_table; + struct config_group *group; int (*probe)(struct vhost_dev *dev); int (*remove)(struct vhost_dev *dev); }; @@ -233,6 +235,15 @@ void vhost_unregister_driver(struct vhost_driver *driver); int vhost_register_device(struct vhost_dev *vdev); void vhost_unregister_device(struct vhost_dev *vdev); +struct config_group *vhost_cfs_add_driver_group(const char *name); +void vhost_cfs_remove_driver_group(struct config_group *group); +struct config_group * +vhost_cfs_add_driver_item(struct config_group *driver_group, u32 vendor, + u32 device); +void vhost_cfs_remove_driver_item(struct config_group *group); +struct config_group *vhost_cfs_add_device_item(struct vhost_dev *vdev); +void vhost_cfs_remove_device_item(struct config_group *group); + int vhost_create_vqs(struct vhost_dev 
*vdev, unsigned int nvqs,
		     unsigned int num_bufs, struct vhost_virtqueue *vqs[],
		     vhost_vq_callback_t *callbacks[],

From patchwork Thu Jul 2 08:21:28 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192191
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe,
    Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang,
    Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 07/22] virtio_pci: Use request_threaded_irq() instead of request_irq()
Date: Thu, 2 Jul 2020 13:51:28 +0530
Message-ID: <20200702082143.25259-8-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>

Some of the virtio drivers (like virtio_rpmsg_bus.c) use sleeping functions
such as mutex_*() in their virtqueue callbacks. Use request_threaded_irq()
instead of request_irq() so that the virtqueue callbacks are executed in
thread context instead of interrupt context.
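As a sketch of what the conversion buys: passing the callback as the thread-handler argument (with a 0/NULL hard handler) makes the genirq core run it from a kernel thread, where sleeping primitives such as mutex_lock() are legal. The names below (my_vq_dev, my_vq_thread_fn) are hypothetical and not part of the patch:

```c
#include <linux/interrupt.h>
#include <linux/mutex.h>

/* Hypothetical driver state, for illustration only */
struct my_vq_dev {
	struct mutex lock;
	int irq;
};

/* Runs in a kthread, not in hard-irq context, so it may sleep */
static irqreturn_t my_vq_thread_fn(int irq, void *data)
{
	struct my_vq_dev *mdev = data;

	mutex_lock(&mdev->lock);	/* would be a bug in a hard handler */
	/* ... process used buffers, as a virtqueue callback does ... */
	mutex_unlock(&mdev->lock);

	return IRQ_HANDLED;
}

static int my_vq_setup_irq(struct my_vq_dev *mdev)
{
	/*
	 * Hard handler is NULL: the core installs a stub that returns
	 * IRQ_WAKE_THREAD. The genirq core normally requires IRQF_ONESHOT
	 * here so the line stays masked until the thread handler finishes.
	 */
	return request_threaded_irq(mdev->irq, NULL, my_vq_thread_fn,
				    IRQF_ONESHOT, "my-vq", mdev);
}
```

Note that the posted patch passes 0 for the hard handler without adding IRQF_ONESHOT; the genirq core generally rejects a NULL hard handler without IRQF_ONESHOT unless the irqchip is marked one-shot safe, which is a common review point for conversions like this one.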
Signed-off-by: Kishon Vijay Abraham I
---
 drivers/virtio/virtio_pci_common.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

-- 
2.17.1

diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
index 222d630c41fc..60998b4f1f30 100644
--- a/drivers/virtio/virtio_pci_common.c
+++ b/drivers/virtio/virtio_pci_common.c
@@ -140,9 +140,9 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
 	v = vp_dev->msix_used_vectors;
 	snprintf(vp_dev->msix_names[v], sizeof *vp_dev->msix_names,
		 "%s-config", name);
-	err = request_irq(pci_irq_vector(vp_dev->pci_dev, v),
-			  vp_config_changed, 0, vp_dev->msix_names[v],
-			  vp_dev);
+	err = request_threaded_irq(pci_irq_vector(vp_dev->pci_dev, v), 0,
+				   vp_config_changed, 0, vp_dev->msix_names[v],
+				   vp_dev);
 	if (err)
		goto error;
 	++vp_dev->msix_used_vectors;
@@ -159,9 +159,9 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
 	v = vp_dev->msix_used_vectors;
 	snprintf(vp_dev->msix_names[v], sizeof *vp_dev->msix_names,
		 "%s-virtqueues", name);
-	err = request_irq(pci_irq_vector(vp_dev->pci_dev, v),
-			  vp_vring_interrupt, 0, vp_dev->msix_names[v],
-			  vp_dev);
+	err = request_threaded_irq(pci_irq_vector(vp_dev->pci_dev, v),
+				   0, vp_vring_interrupt, 0,
+				   vp_dev->msix_names[v], vp_dev);
 	if (err)
		goto error;
 	++vp_dev->msix_used_vectors;
@@ -336,10 +336,11 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, unsigned nvqs,
			 sizeof *vp_dev->msix_names,
			 "%s-%s",
			 dev_name(&vp_dev->vdev.dev), names[i]);
-		err = request_irq(pci_irq_vector(vp_dev->pci_dev, msix_vec),
-				  vring_interrupt, 0,
-				  vp_dev->msix_names[msix_vec],
-				  vqs[i]);
+		err = request_threaded_irq(pci_irq_vector(vp_dev->pci_dev,
+							   msix_vec),
+					   0, vring_interrupt, 0,
+					   vp_dev->msix_names[msix_vec],
+					   vqs[i]);
 		if (err)
			goto error_find;
 	}
@@ -361,8 +362,8 @@ static int vp_find_vqs_intx(struct virtio_device *vdev, unsigned nvqs,
 	if (!vp_dev->vqs)
		return -ENOMEM;
 
-	err =
request_irq(vp_dev->pci_dev->irq, vp_interrupt, IRQF_SHARED,
-			  dev_name(&vdev->dev), vp_dev);
+	err = request_threaded_irq(vp_dev->pci_dev->irq, 0, vp_interrupt,
+				   IRQF_SHARED, dev_name(&vdev->dev), vp_dev);
 	if (err)
		goto out_del_vqs;

From patchwork Thu Jul 2 08:21:29 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192206
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe,
    Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang,
    Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 08/22] rpmsg: virtio_rpmsg_bus: Disable receive virtqueue callback when reading messages
Date: Thu, 2 Jul 2020 13:51:29 +0530
Message-ID: <20200702082143.25259-9-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>

Since rpmsg_recv_done() reads messages in a while loop, disable callbacks
until the while loop exits. This helps to get rid of the annoying
"uhm, incoming signal, but no used buffer ?" message.
Signed-off-by: Kishon Vijay Abraham I
---
 drivers/rpmsg/virtio_rpmsg_bus.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

-- 
2.17.1

diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index 376ebbf880d6..2d0d42084ac0 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -777,6 +777,7 @@ static void rpmsg_recv_done(struct virtqueue *rvq)
 		return;
 	}
 
+	virtqueue_disable_cb(rvq);
 	while (msg) {
 		err = rpmsg_recv_single(vrp, dev, msg, len);
 		if (err)
@@ -786,6 +787,19 @@ static void rpmsg_recv_done(struct virtqueue *rvq)
 		msg = virtqueue_get_buf(rvq, &len);
 	}
 
+	virtqueue_enable_cb(rvq);
+
+	/*
+	 * Try to read message one more time in case a new message is submitted
+	 * after virtqueue_get_buf() inside the while loop but before enabling
+	 * callbacks
+	 */
+	msg = virtqueue_get_buf(rvq, &len);
+	if (msg) {
+		err = rpmsg_recv_single(vrp, dev, msg, len);
+		if (!err)
+			msgs_received++;
+	}
 	dev_dbg(dev, "Received %u messages\n", msgs_received);

From patchwork Thu Jul 2 08:21:30 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192192
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe,
    Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang,
    Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 09/22] rpmsg: Introduce configfs entry for configuring rpmsg
Date: Thu, 2 Jul 2020 13:51:30 +0530
Message-ID: <20200702082143.25259-10-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>

Create a configfs entry for each "struct rpmsg_device_id" populated in an
rpmsg client driver and a configfs entry for each rpmsg device. These are
used to bind an rpmsg client driver to an rpmsg device in order to create
a new rpmsg channel. This is used for creating channels for the VHOST-based
rpmsg bus (in the VIRTIO-based bus, channels are created during namespace
announcement).

Signed-off-by: Kishon Vijay Abraham I
---
 drivers/rpmsg/Makefile         |   2 +-
 drivers/rpmsg/rpmsg_cfs.c      | 394 +++++++++++++++++++++++++++++++++
 drivers/rpmsg/rpmsg_core.c     |   7 +
 drivers/rpmsg/rpmsg_internal.h |  16 ++
 include/linux/rpmsg.h          |   5 +
 5 files changed, 423 insertions(+), 1 deletion(-)
 create mode 100644 drivers/rpmsg/rpmsg_cfs.c

-- 
2.17.1

diff --git a/drivers/rpmsg/Makefile b/drivers/rpmsg/Makefile
index ae92a7fb08f6..047acfda518a 100644
--- a/drivers/rpmsg/Makefile
+++ b/drivers/rpmsg/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_RPMSG) += rpmsg_core.o
+obj-$(CONFIG_RPMSG) += rpmsg_core.o rpmsg_cfs.o
 obj-$(CONFIG_RPMSG_CHAR) += rpmsg_char.o
 obj-$(CONFIG_RPMSG_MTK_SCP) += mtk_rpmsg.o
 obj-$(CONFIG_RPMSG_QCOM_GLINK_RPM) += qcom_glink_rpm.o

diff --git a/drivers/rpmsg/rpmsg_cfs.c b/drivers/rpmsg/rpmsg_cfs.c
new file mode 100644
index 000000000000..a5c77aba00ee
--- /dev/null
+++ b/drivers/rpmsg/rpmsg_cfs.c
@@ -0,0 +1,394 @@
+// SPDX-License-Identifier: GPL-2.0
+/**
+ * configfs to configure RPMSG
+ *
+ * Copyright (C) 2020 Texas Instruments
+ * Author: Kishon Vijay Abraham I
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "rpmsg_internal.h"
+
+static struct config_group *channel_group;
+static struct config_group *virtproc_group;
+
+enum rpmsg_channel_status {
+	STATUS_FREE,
+	STATUS_BUSY,
+};
+
+struct rpmsg_channel {
+	struct config_item item;
+	struct device *dev;
+	enum rpmsg_channel_status status;
+};
+
+struct rpmsg_channel_group {
+	struct config_group group;
+};
+
+struct rpmsg_virtproc_group {
+	struct config_group group;
+	struct device *dev;
+	const struct rpmsg_virtproc_ops *ops;
+};
+
+static inline
+struct rpmsg_channel *to_rpmsg_channel(struct config_item *channel_item)
+{
+	return container_of(channel_item, struct rpmsg_channel, item);
+}
+
+static inline struct rpmsg_channel_group
+*to_rpmsg_channel_group(struct config_group *channel_group)
+{
+	return container_of(channel_group, struct rpmsg_channel_group, group);
+}
+
+static inline
+struct rpmsg_virtproc_group *to_rpmsg_virtproc_group(struct config_item *item)
+{
+	return container_of(to_config_group(item), struct rpmsg_virtproc_group,
+			    group);
+}
+
+/**
+ * rpmsg_virtproc_channel_link() - Create softlink of rpmsg client device
+ *   directory to virtproc configfs directory
+ * @virtproc_item: Config item representing configfs entry of virtual remote
+ *   processor
+ * @channel_item: Config item representing configfs entry of rpmsg client
+ *   driver
+ *
+ * Bind rpmsg client device to virtual remote processor by creating softlink
+ * between rpmsg client device directory to virtproc configfs directory
+ * in order to create a new rpmsg channel.
+ */
+static int rpmsg_virtproc_channel_link(struct config_item *virtproc_item,
+				       struct config_item *channel_item)
+{
+	struct rpmsg_virtproc_group *vgroup;
+	struct rpmsg_channel *channel;
+	struct config_group *cgroup;
+	struct device *dev;
+
+	vgroup = to_rpmsg_virtproc_group(virtproc_item);
+	channel = to_rpmsg_channel(channel_item);
+
+	if (channel->status == STATUS_BUSY)
+		return -EBUSY;
+
+	cgroup = channel_item->ci_group;
+
+	if (vgroup->ops && vgroup->ops->create_channel) {
+		dev = vgroup->ops->create_channel(vgroup->dev,
+						  cgroup->cg_item.ci_name);
+		if (IS_ERR_OR_NULL(dev))
+			return PTR_ERR(dev);
+	}
+
+	channel->dev = dev;
+	channel->status = STATUS_BUSY;
+
+	return 0;
+}
+
+/**
+ * rpmsg_virtproc_channel_unlink() - Remove softlink of rpmsg client device
+ *   directory from virtproc configfs directory
+ * @virtproc_item: Config item representing configfs entry of virtual remote
+ *   processor
+ * @channel_item: Config item representing configfs entry of rpmsg client
+ *   driver
+ *
+ * Unbind rpmsg client device from virtual remote processor by removing the
+ * softlink of the rpmsg client device directory from the virtproc configfs
+ * directory, which deletes the rpmsg channel.
+ */
+static void rpmsg_virtproc_channel_unlink(struct config_item *virtproc_item,
+					  struct config_item *channel_item)
+{
+	struct rpmsg_virtproc_group *vgroup;
+	struct rpmsg_channel *channel;
+
+	channel = to_rpmsg_channel(channel_item);
+	vgroup = to_rpmsg_virtproc_group(virtproc_item);
+
+	if (vgroup->ops && vgroup->ops->delete_channel)
+		vgroup->ops->delete_channel(channel->dev);
+
+	channel->status = STATUS_FREE;
+}
+
+static struct configfs_item_operations rpmsg_virtproc_item_ops = {
+	.allow_link = rpmsg_virtproc_channel_link,
+	.drop_link = rpmsg_virtproc_channel_unlink,
+};
+
+static const struct config_item_type rpmsg_virtproc_item_type = {
+	.ct_item_ops = &rpmsg_virtproc_item_ops,
+	.ct_owner = THIS_MODULE,
+};
+
+/**
+ * rpmsg_cfs_add_virtproc_group() - Add new configfs directory for virtproc
+ *   device
+ * @dev: Device representing the virtual remote processor
+ * @ops: rpmsg_virtproc_ops to create or delete rpmsg channel
+ *
+ * Add new configfs directory for virtproc device. The rpmsg client driver's
+ * configfs entry can be linked with this directory for creating a new
+ * rpmsg channel and the link can be removed for deleting the rpmsg channel.
+ */
+struct config_group *
+rpmsg_cfs_add_virtproc_group(struct device *dev,
+			     const struct rpmsg_virtproc_ops *ops)
+{
+	struct rpmsg_virtproc_group *vgroup;
+	struct config_group *group;
+	struct device *vdev;
+	int ret;
+
+	vgroup = kzalloc(sizeof(*vgroup), GFP_KERNEL);
+	if (!vgroup)
+		return ERR_PTR(-ENOMEM);
+
+	group = &vgroup->group;
+	config_group_init_type_name(group, dev_name(dev),
+				    &rpmsg_virtproc_item_type);
+	ret = configfs_register_group(virtproc_group, group);
+	if (ret)
+		goto err_register_group;
+
+	if (!try_module_get(ops->owner)) {
+		ret = -EPROBE_DEFER;
+		goto err_module_get;
+	}
+
+	vdev = get_device(dev);
+	vgroup->dev = vdev;
+	vgroup->ops = ops;
+
+	return group;
+
+err_module_get:
+	configfs_unregister_group(group);
+
+err_register_group:
+	kfree(vgroup);
+
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(rpmsg_cfs_add_virtproc_group);
+
+/**
+ * rpmsg_cfs_remove_virtproc_group() - Remove the configfs directory for
+ *   virtproc device
+ * @group: config_group of the virtproc device
+ *
+ * Remove the configfs directory for virtproc device.
+ */
+void rpmsg_cfs_remove_virtproc_group(struct config_group *group)
+{
+	struct rpmsg_virtproc_group *vgroup;
+
+	if (!group)
+		return;
+
+	vgroup = container_of(group, struct rpmsg_virtproc_group, group);
+	put_device(vgroup->dev);
+	module_put(vgroup->ops->owner);
+	configfs_unregister_group(&vgroup->group);
+	kfree(vgroup);
+}
+EXPORT_SYMBOL(rpmsg_cfs_remove_virtproc_group);
+
+static const struct config_item_type rpmsg_channel_item_type = {
+	.ct_owner = THIS_MODULE,
+};
+
+/**
+ * rpmsg_channel_make() - Allow user to create sub-directory of rpmsg client
+ *   driver
+ * @name: Name of the sub-directory created by the user.
+ *
+ * Invoked when user creates a sub-directory to the configfs directory
+ * representing the rpmsg client driver. This can be linked with the virtproc
+ * directory for creating a new rpmsg channel.
+ */
+static struct config_item *
+rpmsg_channel_make(struct config_group *group, const char *name)
+{
+	struct rpmsg_channel *channel;
+
+	channel = kzalloc(sizeof(*channel), GFP_KERNEL);
+	if (!channel)
+		return ERR_PTR(-ENOMEM);
+
+	channel->status = STATUS_FREE;
+
+	config_item_init_type_name(&channel->item, name,
+				   &rpmsg_channel_item_type);
+	return &channel->item;
+}
+
+/**
+ * rpmsg_channel_drop() - Allow user to delete sub-directory of rpmsg client
+ *   driver
+ * @item: Config item representing the sub-directory the user created,
+ *   returned by rpmsg_channel_make()
+ *
+ * Invoked when user removes a sub-directory from the configfs directory
+ * representing the rpmsg client driver, freeing the channel created for it.
+ */
+static void rpmsg_channel_drop(struct config_group *group,
+			       struct config_item *item)
+{
+	struct rpmsg_channel *channel;
+
+	channel = to_rpmsg_channel(item);
+	kfree(channel);
+}
+
+static struct configfs_group_operations rpmsg_channel_group_ops = {
+	.make_item = &rpmsg_channel_make,
+	.drop_item = &rpmsg_channel_drop,
+};
+
+static const struct config_item_type rpmsg_channel_group_type = {
+	.ct_group_ops = &rpmsg_channel_group_ops,
+	.ct_owner = THIS_MODULE,
+};
+
+/**
+ * rpmsg_cfs_add_channel_group() - Create a configfs directory for each
+ *   registered rpmsg client driver
+ * @name: The name of the rpmsg client driver
+ *
+ * Create a configfs directory for each registered rpmsg client driver. The
+ * user can create sub-directories within this directory for creating
+ * rpmsg channels to be used by the rpmsg client driver.
+ */
+struct config_group *rpmsg_cfs_add_channel_group(const char *name)
+{
+	struct rpmsg_channel_group *cgroup;
+	struct config_group *group;
+	int ret;
+
+	cgroup = kzalloc(sizeof(*cgroup), GFP_KERNEL);
+	if (!cgroup)
+		return ERR_PTR(-ENOMEM);
+
+	group = &cgroup->group;
+	config_group_init_type_name(group, name, &rpmsg_channel_group_type);
+	ret = configfs_register_group(channel_group, group);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return group;
+}
+EXPORT_SYMBOL(rpmsg_cfs_add_channel_group);
+
+/**
+ * rpmsg_cfs_remove_channel_group() - Remove the configfs directory associated
+ *   with the rpmsg client driver
+ * @group: Config group representing the rpmsg client driver
+ *
+ * Remove the configfs directory associated with the rpmsg client driver.
+ */
+void rpmsg_cfs_remove_channel_group(struct config_group *group)
+{
+	struct rpmsg_channel_group *cgroup;
+
+	if (IS_ERR_OR_NULL(group))
+		return;
+
+	cgroup = to_rpmsg_channel_group(group);
+	configfs_unregister_default_group(group);
+	kfree(cgroup);
+}
+EXPORT_SYMBOL(rpmsg_cfs_remove_channel_group);
+
+static const struct config_item_type rpmsg_channel_type = {
+	.ct_owner = THIS_MODULE,
+};
+
+static const struct config_item_type rpmsg_virtproc_type = {
+	.ct_owner = THIS_MODULE,
+};
+
+static const struct config_item_type rpmsg_type = {
+	.ct_owner = THIS_MODULE,
+};
+
+static struct configfs_subsystem rpmsg_cfs_subsys = {
+	.su_group = {
+		.cg_item = {
+			.ci_namebuf = "rpmsg",
+			.ci_type = &rpmsg_type,
+		},
+	},
+	.su_mutex = __MUTEX_INITIALIZER(rpmsg_cfs_subsys.su_mutex),
+};
+
+static int __init rpmsg_cfs_init(void)
+{
+	int ret;
+	struct config_group *root = &rpmsg_cfs_subsys.su_group;
+
+	config_group_init(root);
+
+	ret = configfs_register_subsystem(&rpmsg_cfs_subsys);
+	if (ret) {
+		pr_err("Error %d while registering subsystem %s\n",
+		       ret, root->cg_item.ci_namebuf);
+		goto err;
+	}
+
+	channel_group = configfs_register_default_group(root, "channel",
+							&rpmsg_channel_type);
+	if
(IS_ERR(channel_group)) {
+		ret = PTR_ERR(channel_group);
+		pr_err("Error %d while registering channel group\n", ret);
+		goto err_channel_group;
+	}
+
+	virtproc_group =
+		configfs_register_default_group(root, "virtproc",
+						&rpmsg_virtproc_type);
+	if (IS_ERR(virtproc_group)) {
+		ret = PTR_ERR(virtproc_group);
+		pr_err("Error %d while registering virtproc group\n", ret);
+		goto err_virtproc_group;
+	}
+
+	return 0;
+
+err_virtproc_group:
+	configfs_unregister_default_group(channel_group);
+
+err_channel_group:
+	configfs_unregister_subsystem(&rpmsg_cfs_subsys);
+
+err:
+	return ret;
+}
+module_init(rpmsg_cfs_init);
+
+static void __exit rpmsg_cfs_exit(void)
+{
+	configfs_unregister_default_group(virtproc_group);
+	configfs_unregister_default_group(channel_group);
+	configfs_unregister_subsystem(&rpmsg_cfs_subsys);
+}
+module_exit(rpmsg_cfs_exit);
+
+MODULE_DESCRIPTION("PCI RPMSG CONFIGFS");
+MODULE_AUTHOR("Kishon Vijay Abraham I ");
+MODULE_LICENSE("GPL v2");

diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
index e330ec4dfc33..68569fec03e2 100644
--- a/drivers/rpmsg/rpmsg_core.c
+++ b/drivers/rpmsg/rpmsg_core.c
@@ -563,8 +563,15 @@ EXPORT_SYMBOL(rpmsg_unregister_device);
  */
 int __register_rpmsg_driver(struct rpmsg_driver *rpdrv, struct module *owner)
 {
+	const struct rpmsg_device_id *ids = rpdrv->id_table;
 	rpdrv->drv.bus = &rpmsg_bus;
 	rpdrv->drv.owner = owner;
+
+	while (ids && ids->name[0]) {
+		rpmsg_cfs_add_channel_group(ids->name);
+		ids++;
+	}
+
 	return driver_register(&rpdrv->drv);
 }
 EXPORT_SYMBOL(__register_rpmsg_driver);

diff --git a/drivers/rpmsg/rpmsg_internal.h b/drivers/rpmsg/rpmsg_internal.h
index 3fc83cd50e98..39b3a5caf242 100644
--- a/drivers/rpmsg/rpmsg_internal.h
+++ b/drivers/rpmsg/rpmsg_internal.h
@@ -68,6 +68,18 @@ struct rpmsg_endpoint_ops {
			poll_table *wait);
 };
 
+/**
+ * struct rpmsg_virtproc_ops - indirection table for rpmsg_virtproc operations
+ * @create_channel: Create a new rpdev channel
+ * @delete_channel: Delete
the rpdev channel
+ * @owner: Owner of the module holding the ops
+ */
+struct rpmsg_virtproc_ops {
+	struct device *(*create_channel)(struct device *dev, const char *name);
+	void (*delete_channel)(struct device *dev);
+	struct module *owner;
+};
+
 int rpmsg_register_device(struct rpmsg_device *rpdev);
 int rpmsg_unregister_device(struct device *parent,
			    struct rpmsg_channel_info *chinfo);
@@ -75,6 +87,10 @@ int rpmsg_unregister_device(struct device *parent,
 struct device *rpmsg_find_device(struct device *parent,
				 struct rpmsg_channel_info *chinfo);
 
+struct config_group *
+rpmsg_cfs_add_virtproc_group(struct device *dev,
+			     const struct rpmsg_virtproc_ops *ops);
+
 /**
  * rpmsg_chrdev_register_device() - register chrdev device based on rpdev
  * @rpdev: prepared rpdev to be used for creating endpoints
diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
index 9fe156d1c018..b9d9283b46ac 100644
--- a/include/linux/rpmsg.h
+++ b/include/linux/rpmsg.h
@@ -135,6 +135,7 @@ int rpmsg_trysend_offchannel(struct rpmsg_endpoint *ept, u32 src, u32 dst,
 __poll_t rpmsg_poll(struct rpmsg_endpoint *ept, struct file *filp,
		    poll_table *wait);
+struct config_group *rpmsg_cfs_add_channel_group(const char *name);
 
 #else
 
 static inline int register_rpmsg_device(struct rpmsg_device *dev)
@@ -242,6 +243,10 @@ static inline __poll_t rpmsg_poll(struct rpmsg_endpoint *ept,
 	return 0;
 }
 
+static inline struct config_group *rpmsg_cfs_add_channel_group(const char *name)
+{
+	return NULL;
+}
 #endif /* IS_ENABLED(CONFIG_RPMSG) */
 
 /* use a macro to avoid include chaining to get THIS_MODULE */

From patchwork Thu Jul 2 08:21:31 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192205
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe,
	Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang,
	Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 10/22] rpmsg: virtio_rpmsg_bus: Add Address Service
	Notification support
Date: Thu, 2 Jul 2020 13:51:31 +0530
Message-ID: <20200702082143.25259-11-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>

Add support for sending an address service notification message to the
backend rpmsg device. This informs the backend rpmsg device of the
address allocated to the channel that was created in response to its
name service announcement message. This is in preparation for adding a
backend rpmsg device that uses the VHOST framework in Linux.
Signed-off-by: Kishon Vijay Abraham I
---
 drivers/rpmsg/virtio_rpmsg_bus.c | 92 +++++++++++++++++++++++++++++---
 1 file changed, 85 insertions(+), 7 deletions(-)

--
2.17.1

diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index 2d0d42084ac0..19d930c9fc2c 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -71,6 +71,7 @@ struct virtproc_info {
 
 /* The feature bitmap for virtio rpmsg */
 #define VIRTIO_RPMSG_F_NS	0 /* RP supports name service notifications */
+#define VIRTIO_RPMSG_F_AS	1 /* RP supports address service notifications */
 
 /**
  * struct rpmsg_hdr - common header for all rpmsg messages
@@ -110,6 +111,26 @@ struct rpmsg_ns_msg {
 	u32 flags;
 } __packed;
 
+/**
+ * struct rpmsg_as_msg - dynamic address service announcement message
+ * @name: name of the created channel
+ * @dst: destination address to be used by the backend rpdev
+ * @src: source address of the backend rpdev (the one that sent name service
+ *       announcement message)
+ * @flags: indicates whether service is created or destroyed
+ *
+ * This message is sent (by virtio_rpmsg_bus) when a new channel is created
+ * in response to a name service announcement message by the backend rpdev to
+ * create a new channel. This sends the allocated source address for the
+ * channel (destination address for the backend rpdev) to the backend rpdev.
+ */
+struct rpmsg_as_msg {
+	char name[RPMSG_NAME_SIZE];
+	u32 dst;
+	u32 src;
+	u32 flags;
+} __packed;
+
 /**
  * enum rpmsg_ns_flags - dynamic name service announcement flags
  *
@@ -119,6 +140,19 @@ struct rpmsg_ns_msg {
 enum rpmsg_ns_flags {
 	RPMSG_NS_CREATE		= 0,
 	RPMSG_NS_DESTROY	= 1,
+	RPMSG_AS_ANNOUNCE	= 2,
+};
+
+/**
+ * enum rpmsg_as_flags - dynamic address service announcement flags
+ *
+ * @RPMSG_AS_ASSIGN: address has been assigned to the newly created channel
+ * @RPMSG_AS_FREE: assigned address is freed from the channel and can no
+ *                 longer be used
+ */
+enum rpmsg_as_flags {
+	RPMSG_AS_ASSIGN		= 1,
+	RPMSG_AS_FREE		= 2,
 };
 
 /**
@@ -164,6 +198,9 @@ struct virtio_rpmsg_channel {
 /* Address 53 is reserved for advertising remote services */
 #define RPMSG_NS_ADDR		(53)
 
+/* Address 54 is reserved for advertising address services */
+#define RPMSG_AS_ADDR		(54)
+
 static void virtio_rpmsg_destroy_ept(struct rpmsg_endpoint *ept);
 static int virtio_rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len);
 static int virtio_rpmsg_sendto(struct rpmsg_endpoint *ept, void *data, int len,
@@ -329,9 +366,11 @@ static int virtio_rpmsg_announce_create(struct rpmsg_device *rpdev)
 	struct device *dev = &rpdev->dev;
 	int err = 0;
 
+	if (!rpdev->ept || !rpdev->announce)
+		return err;
+
 	/* need to tell remote processor's name service about this channel ? */
-	if (rpdev->announce && rpdev->ept &&
-	    virtio_has_feature(vrp->vdev, VIRTIO_RPMSG_F_NS)) {
+	if (virtio_has_feature(vrp->vdev, VIRTIO_RPMSG_F_NS)) {
 		struct rpmsg_ns_msg nsm;
 
 		strncpy(nsm.name, rpdev->id.name, RPMSG_NAME_SIZE);
@@ -343,6 +382,23 @@ static int virtio_rpmsg_announce_create(struct rpmsg_device *rpdev)
 			dev_err(dev, "failed to announce service %d\n", err);
 	}
 
+	/*
+	 * need to tell remote processor's address service about the address
+	 * allocated to this channel
+	 */
+	if (virtio_has_feature(vrp->vdev, VIRTIO_RPMSG_F_AS)) {
+		struct rpmsg_as_msg asmsg;
+
+		strncpy(asmsg.name, rpdev->id.name, RPMSG_NAME_SIZE);
+		asmsg.dst = rpdev->src;
+		asmsg.src = rpdev->dst;
+		asmsg.flags = RPMSG_AS_ASSIGN;
+
+		err = rpmsg_sendto(rpdev->ept, &asmsg, sizeof(asmsg), RPMSG_AS_ADDR);
+		if (err)
+			dev_err(dev, "failed to announce service %d\n", err);
+	}
+
 	return err;
 }
 
@@ -353,9 +409,28 @@ static int virtio_rpmsg_announce_destroy(struct rpmsg_device *rpdev)
 	struct device *dev = &rpdev->dev;
 	int err = 0;
 
+	if (!rpdev->ept || !rpdev->announce)
+		return err;
+
+	/*
+	 * need to tell remote processor's address service that we're freeing
+	 * the address allocated to this channel
+	 */
+	if (virtio_has_feature(vrp->vdev, VIRTIO_RPMSG_F_AS)) {
+		struct rpmsg_as_msg asmsg;
+
+		strncpy(asmsg.name, rpdev->id.name, RPMSG_NAME_SIZE);
+		asmsg.dst = rpdev->src;
+		asmsg.src = rpdev->dst;
+		asmsg.flags = RPMSG_AS_FREE;
+
+		err = rpmsg_sendto(rpdev->ept, &asmsg, sizeof(asmsg), RPMSG_AS_ADDR);
+		if (err)
+			dev_err(dev, "failed to announce service %d\n", err);
+	}
+
 	/* tell remote processor's name service we're removing this channel */
-	if (rpdev->announce && rpdev->ept &&
-	    virtio_has_feature(vrp->vdev, VIRTIO_RPMSG_F_NS)) {
+	if (virtio_has_feature(vrp->vdev, VIRTIO_RPMSG_F_NS)) {
 		struct rpmsg_ns_msg nsm;
 
 		strncpy(nsm.name, rpdev->id.name, RPMSG_NAME_SIZE);
@@ -390,7 +465,8 @@ static void virtio_rpmsg_release_device(struct device *dev)
  * channels.
  */
 static struct rpmsg_device *rpmsg_create_channel(struct virtproc_info *vrp,
-						 struct rpmsg_channel_info *chinfo)
+						 struct rpmsg_channel_info *chinfo,
+						 bool announce)
 {
 	struct virtio_rpmsg_channel *vch;
 	struct rpmsg_device *rpdev;
@@ -424,7 +500,8 @@ static struct rpmsg_device *rpmsg_create_channel(struct virtproc_info *vrp,
 	 * rpmsg server channels has predefined local address (for now),
 	 * and their existence needs to be announced remotely
 	 */
-	rpdev->announce = rpdev->src != RPMSG_ADDR_ANY;
+	if (rpdev->src != RPMSG_ADDR_ANY || announce)
+		rpdev->announce = true;
 
 	strncpy(rpdev->id.name, chinfo->name, RPMSG_NAME_SIZE);
 
@@ -873,7 +950,7 @@ static int rpmsg_ns_cb(struct rpmsg_device *rpdev, void *data, int len,
 		if (ret)
 			dev_err(dev, "rpmsg_destroy_channel failed: %d\n", ret);
 	} else {
-		newch = rpmsg_create_channel(vrp, &chinfo);
+		newch = rpmsg_create_channel(vrp, &chinfo, msg->flags & RPMSG_AS_ANNOUNCE);
 		if (!newch)
 			dev_err(dev, "rpmsg_create_channel failed\n");
 	}
@@ -1042,6 +1119,7 @@ static struct virtio_device_id id_table[] = {
 
 static unsigned int features[] = {
 	VIRTIO_RPMSG_F_NS,
+	VIRTIO_RPMSG_F_AS,
 };
 
 static struct virtio_driver virtio_ipc_driver = {

From patchwork Thu Jul 2 08:21:32 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192193
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe,
	Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang,
	Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 11/22] rpmsg: virtio_rpmsg_bus: Move generic rpmsg
	structure to rpmsg_internal.h
Date: Thu, 2 Jul 2020 13:51:32 +0530
Message-ID: <20200702082143.25259-12-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>

No functional change intended. Move generic rpmsg structures such as
"struct rpmsg_hdr", "struct rpmsg_ns_msg", and "struct rpmsg_as_msg",
their associated flags, and generic macros to rpmsg_internal.h. This is
in preparation for adding the VHOST-based vhost_rpmsg_bus.c, which will
use the same structures and macros.
Signed-off-by: Kishon Vijay Abraham I
---
 drivers/rpmsg/rpmsg_internal.h   | 120 +++++++++++++++++++++++++++++++
 drivers/rpmsg/virtio_rpmsg_bus.c | 120 -------------------------------
 2 files changed, 120 insertions(+), 120 deletions(-)

--
2.17.1

diff --git a/drivers/rpmsg/rpmsg_internal.h b/drivers/rpmsg/rpmsg_internal.h
index 39b3a5caf242..69d9c9579b50 100644
--- a/drivers/rpmsg/rpmsg_internal.h
+++ b/drivers/rpmsg/rpmsg_internal.h
@@ -15,6 +15,126 @@
 #include 
 #include 
 
+/* The feature bitmap for virtio rpmsg */
+#define VIRTIO_RPMSG_F_NS	0 /* RP supports name service notifications */
+#define VIRTIO_RPMSG_F_AS	1 /* RP supports address service notifications */
+
+/**
+ * struct rpmsg_hdr - common header for all rpmsg messages
+ * @src: source address
+ * @dst: destination address
+ * @reserved: reserved for future use
+ * @len: length of payload (in bytes)
+ * @flags: message flags
+ * @data: @len bytes of message payload data
+ *
+ * Every message sent(/received) on the rpmsg bus begins with this header.
+ */
+struct rpmsg_hdr {
+	u32 src;
+	u32 dst;
+	u32 reserved;
+	u16 len;
+	u16 flags;
+	u8 data[0];
+} __packed;
+
+/**
+ * struct rpmsg_ns_msg - dynamic name service announcement message
+ * @name: name of remote service that is published
+ * @addr: address of remote service that is published
+ * @flags: indicates whether service is created or destroyed
+ *
+ * This message is sent across to publish a new service, or announce
+ * about its removal. When we receive these messages, an appropriate
+ * rpmsg channel (i.e device) is created/destroyed. In turn, the ->probe()
+ * or ->remove() handler of the appropriate rpmsg driver will be invoked
+ * (if/as-soon-as one is registered).
+ */
+struct rpmsg_ns_msg {
+	char name[RPMSG_NAME_SIZE];
+	u32 addr;
+	u32 flags;
+} __packed;
+
+/**
+ * struct rpmsg_as_msg - dynamic address service announcement message
+ * @name: name of the created channel
+ * @dst: destination address to be used by the backend rpdev
+ * @src: source address of the backend rpdev (the one that sent name service
+ *       announcement message)
+ * @flags: indicates whether service is created or destroyed
+ *
+ * This message is sent (by virtio_rpmsg_bus) when a new channel is created
+ * in response to a name service announcement message by the backend rpdev to
+ * create a new channel. This sends the allocated source address for the
+ * channel (destination address for the backend rpdev) to the backend rpdev.
+ */
+struct rpmsg_as_msg {
+	char name[RPMSG_NAME_SIZE];
+	u32 dst;
+	u32 src;
+	u32 flags;
+} __packed;
+
+/**
+ * enum rpmsg_ns_flags - dynamic name service announcement flags
+ *
+ * @RPMSG_NS_CREATE: a new remote service was just created
+ * @RPMSG_NS_DESTROY: a known remote service was just destroyed
+ */
+enum rpmsg_ns_flags {
+	RPMSG_NS_CREATE		= 0,
+	RPMSG_NS_DESTROY	= 1,
+	RPMSG_AS_ANNOUNCE	= 2,
+};
+
+/**
+ * enum rpmsg_as_flags - dynamic address service announcement flags
+ *
+ * @RPMSG_AS_ASSIGN: address has been assigned to the newly created channel
+ * @RPMSG_AS_FREE: assigned address is freed from the channel and can no
+ *                 longer be used
+ */
+enum rpmsg_as_flags {
+	RPMSG_AS_ASSIGN		= 1,
+	RPMSG_AS_FREE		= 2,
+};
+
+/*
+ * We're allocating buffers of 512 bytes each for communications. The
+ * number of buffers will be computed from the number of buffers supported
+ * by the vring, upto a maximum of 512 buffers (256 in each direction).
+ *
+ * Each buffer will have 16 bytes for the msg header and 496 bytes for
+ * the payload.
+ *
+ * This will utilize a maximum total space of 256KB for the buffers.
+ *
+ * We might also want to add support for user-provided buffers in time.
+ * This will allow bigger buffer size flexibility, and can also be used
+ * to achieve zero-copy messaging.
+ *
+ * Note that these numbers are purely a decision of this driver - we
+ * can change this without changing anything in the firmware of the remote
+ * processor.
+ */
+#define MAX_RPMSG_NUM_BUFS	(512)
+#define MAX_RPMSG_BUF_SIZE	(512)
+
+/*
+ * Local addresses are dynamically allocated on-demand.
+ * We do not dynamically assign addresses from the low 1024 range,
+ * in order to reserve that address range for predefined services.
+ */
+#define RPMSG_RESERVED_ADDRESSES	(1024)
+
+/* Address 53 is reserved for advertising remote services */
+#define RPMSG_NS_ADDR		(53)
+
+/* Address 54 is reserved for advertising address services */
+#define RPMSG_AS_ADDR		(54)
+
 #define to_rpmsg_device(d) container_of(d, struct rpmsg_device, dev)
 #define to_rpmsg_driver(d) container_of(d, struct rpmsg_driver, drv)
 
diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index 19d930c9fc2c..f91143b25af7 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -69,92 +69,6 @@ struct virtproc_info {
 	struct rpmsg_endpoint *ns_ept;
 };
 
-/* The feature bitmap for virtio rpmsg */
-#define VIRTIO_RPMSG_F_NS	0 /* RP supports name service notifications */
-#define VIRTIO_RPMSG_F_AS	1 /* RP supports address service notifications */
-
-/**
- * struct rpmsg_hdr - common header for all rpmsg messages
- * @src: source address
- * @dst: destination address
- * @reserved: reserved for future use
- * @len: length of payload (in bytes)
- * @flags: message flags
- * @data: @len bytes of message payload data
- *
- * Every message sent(/received) on the rpmsg bus begins with this header.
- */
-struct rpmsg_hdr {
-	u32 src;
-	u32 dst;
-	u32 reserved;
-	u16 len;
-	u16 flags;
-	u8 data[0];
-} __packed;
-
-/**
- * struct rpmsg_ns_msg - dynamic name service announcement message
- * @name: name of remote service that is published
- * @addr: address of remote service that is published
- * @flags: indicates whether service is created or destroyed
- *
- * This message is sent across to publish a new service, or announce
- * about its removal. When we receive these messages, an appropriate
- * rpmsg channel (i.e device) is created/destroyed. In turn, the ->probe()
- * or ->remove() handler of the appropriate rpmsg driver will be invoked
- * (if/as-soon-as one is registered).
- */
-struct rpmsg_ns_msg {
-	char name[RPMSG_NAME_SIZE];
-	u32 addr;
-	u32 flags;
-} __packed;
-
-/**
- * struct rpmsg_as_msg - dynamic address service announcement message
- * @name: name of the created channel
- * @dst: destination address to be used by the backend rpdev
- * @src: source address of the backend rpdev (the one that sent name service
- *       announcement message)
- * @flags: indicates whether service is created or destroyed
- *
- * This message is sent (by virtio_rpmsg_bus) when a new channel is created
- * in response to a name service announcement message by the backend rpdev to
- * create a new channel. This sends the allocated source address for the
- * channel (destination address for the backend rpdev) to the backend rpdev.
- */
-struct rpmsg_as_msg {
-	char name[RPMSG_NAME_SIZE];
-	u32 dst;
-	u32 src;
-	u32 flags;
-} __packed;
-
-/**
- * enum rpmsg_ns_flags - dynamic name service announcement flags
- *
- * @RPMSG_NS_CREATE: a new remote service was just created
- * @RPMSG_NS_DESTROY: a known remote service was just destroyed
- */
-enum rpmsg_ns_flags {
-	RPMSG_NS_CREATE		= 0,
-	RPMSG_NS_DESTROY	= 1,
-	RPMSG_AS_ANNOUNCE	= 2,
-};
-
-/**
- * enum rpmsg_as_flags - dynamic address service announcement flags
- *
- * @RPMSG_AS_ASSIGN: address has been assigned to the newly created channel
- * @RPMSG_AS_FREE: assigned address is freed from the channel and can no
- *                 longer be used
- */
-enum rpmsg_as_flags {
-	RPMSG_AS_ASSIGN		= 1,
-	RPMSG_AS_FREE		= 2,
-};
-
 /**
  * @vrp: the remote processor this channel belongs to
  */
@@ -167,40 +81,6 @@ struct virtio_rpmsg_channel {
 
 #define to_virtio_rpmsg_channel(_rpdev) \
	container_of(_rpdev, struct virtio_rpmsg_channel, rpdev)
 
-/*
- * We're allocating buffers of 512 bytes each for communications. The
- * number of buffers will be computed from the number of buffers supported
- * by the vring, upto a maximum of 512 buffers (256 in each direction).
- *
- * Each buffer will have 16 bytes for the msg header and 496 bytes for
- * the payload.
- *
- * This will utilize a maximum total space of 256KB for the buffers.
- *
- * We might also want to add support for user-provided buffers in time.
- * This will allow bigger buffer size flexibility, and can also be used
- * to achieve zero-copy messaging.
- *
- * Note that these numbers are purely a decision of this driver - we
- * can change this without changing anything in the firmware of the remote
- * processor.
- */
-#define MAX_RPMSG_NUM_BUFS	(512)
-#define MAX_RPMSG_BUF_SIZE	(512)
-
-/*
- * Local addresses are dynamically allocated on-demand.
- * We do not dynamically assign addresses from the low 1024 range,
- * in order to reserve that address range for predefined services.
- */
-#define RPMSG_RESERVED_ADDRESSES	(1024)
-
-/* Address 53 is reserved for advertising remote services */
-#define RPMSG_NS_ADDR		(53)
-
-/* Address 54 is reserved for advertising address services */
-#define RPMSG_AS_ADDR		(54)
-
 static void virtio_rpmsg_destroy_ept(struct rpmsg_endpoint *ept);
 static int virtio_rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len);
 static int virtio_rpmsg_sendto(struct rpmsg_endpoint *ept, void *data, int len,

From patchwork Thu Jul 2 08:21:33 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192194
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe,
	Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang,
	Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 12/22] virtio: Add ops to allocate and free buffer
Date: Thu, 2 Jul 2020 13:51:33 +0530
Message-ID: <20200702082143.25259-13-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>

Add ops to allocate and free buffers to struct virtio_config_ops.
Certain vhost devices have restrictions on the range of memory they can
access on the virtio side. The virtio drivers attached to such vhost
devices reserve memory that the vhost device can access, and these ops
allocate buffers from such reserved regions.
For instance, when vhost is used between two hosts connected over NTB,
vhost can access only memory exposed through memory windows, and the
size of a memory window can be limited. Here the NTB virtio driver can
reserve a small region (a few MBs) and provide buffer addresses from
this pool whenever the virtio client driver requests one.

Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
---
 include/linux/virtio_config.h | 42 +++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

-- 
2.17.1

diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index bb4cc4910750..419f733017c2 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -65,6 +65,9 @@ struct irq_affinity;
  *	the caller can then copy.
  * @set_vq_affinity: set the affinity for a virtqueue (optional).
  * @get_vq_affinity: get the affinity for a virtqueue (optional).
+ * @alloc_buffer: Allocate and provide buffer addresses that can be
+ *	accessed by both virtio and vhost
+ * @free_buffer: Free the allocated buffer address
  */
 typedef void vq_callback_t(struct virtqueue *);
 struct virtio_config_ops {
@@ -88,6 +91,9 @@ struct virtio_config_ops {
 			       const struct cpumask *cpu_mask);
 	const struct cpumask *(*get_vq_affinity)(struct virtio_device *vdev,
 			int index);
+	void * (*alloc_buffer)(struct virtio_device *vdev, size_t size);
+	void (*free_buffer)(struct virtio_device *vdev, void *addr,
+			    size_t size);
 };
 
 /* If driver didn't advertise the feature, it will never appear. */
@@ -232,6 +238,42 @@ const char *virtio_bus_name(struct virtio_device *vdev)
 	return vdev->config->bus_name(vdev);
 }
 
+/**
+ * virtio_alloc_buffer - Allocate buffer from the reserved memory
+ * @vdev: Virtio device which manages the reserved memory
+ * @size: Size of the buffer to be allocated
+ *
+ * Certain vhost devices can have restrictions on the range of memory
+ * they can access on the virtio side. The virtio drivers attached to
+ * such vhost devices reserve memory that can be accessed by vhost.
+ * This function allocates a buffer from such a reserved region.
+ */
+static inline void *
+virtio_alloc_buffer(struct virtio_device *vdev, size_t size)
+{
+	if (!vdev->config->alloc_buffer)
+		return NULL;
+
+	return vdev->config->alloc_buffer(vdev, size);
+}
+
+/**
+ * virtio_free_buffer - Free the allocated buffer
+ * @vdev: Virtio device which manages the reserved memory
+ * @addr: Address returned by virtio_alloc_buffer()
+ * @size: Size of the buffer that has to be freed
+ *
+ * Free the buffer allocated by virtio_alloc_buffer().
+ */
+static inline void
+virtio_free_buffer(struct virtio_device *vdev, void *addr, size_t size)
+{
+	if (!vdev->config->free_buffer)
+		return;
+
+	vdev->config->free_buffer(vdev, addr, size);
+}
+
 /**
  * virtqueue_set_affinity - setting affinity for a virtqueue
  * @vq: the virtqueue

From patchwork Thu Jul 2 08:21:34 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192195
From: Kishon Vijay Abraham I <kishon@ti.com>
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe,
    Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang,
    Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 13/22] rpmsg: virtio_rpmsg_bus: Use virtio_alloc_buffer()
 and virtio_free_buffer()
Date: Thu, 2 Jul 2020 13:51:34 +0530
Message-ID: <20200702082143.25259-14-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>
X-Mailing-List: netdev@vger.kernel.org

Use virtio_alloc_buffer() and virtio_free_buffer() to allocate and free
memory buffers respectively. Only if buffer allocation using
virtio_alloc_buffer() fails, fall back to dma_alloc_coherent(). This is
required for devices like NTB to use rpmsg for communicating with the
other host.

Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
---
 drivers/rpmsg/virtio_rpmsg_bus.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

-- 
2.17.1

diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index f91143b25af7..2b25a8ae1539 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -882,13 +882,16 @@ static int rpmsg_probe(struct virtio_device *vdev)
 
 	total_buf_space = vrp->num_bufs * vrp->buf_size;
 
-	/* allocate coherent memory for the buffers */
-	bufs_va = dma_alloc_coherent(vdev->dev.parent,
-				     total_buf_space, &vrp->bufs_dma,
-				     GFP_KERNEL);
+	bufs_va = virtio_alloc_buffer(vdev, total_buf_space);
 	if (!bufs_va) {
-		err = -ENOMEM;
-		goto vqs_del;
+		/* allocate coherent memory for the buffers */
+		bufs_va = dma_alloc_coherent(vdev->dev.parent,
+					     total_buf_space, &vrp->bufs_dma,
+					     GFP_KERNEL);
+		if (!bufs_va) {
+			err = -ENOMEM;
+			goto vqs_del;
+		}
 	}
 
 	dev_dbg(&vdev->dev, "buffers: va %pK, dma %pad\n",
@@ -951,8 +954,13 @@ static int rpmsg_probe(struct virtio_device *vdev)
 	return 0;
 
 free_coherent:
-	dma_free_coherent(vdev->dev.parent, total_buf_space,
-			  bufs_va, vrp->bufs_dma);
+	if (!vrp->bufs_dma) {
+		virtio_free_buffer(vdev, bufs_va, total_buf_space);
+	} else {
+		dma_free_coherent(vdev->dev.parent, total_buf_space,
+				  bufs_va, vrp->bufs_dma);
+	}
+
 vqs_del:
 	vdev->config->del_vqs(vrp->vdev);
 free_vrp:
@@ -986,8 +994,12 @@ static void rpmsg_remove(struct virtio_device *vdev)
 
 	vdev->config->del_vqs(vrp->vdev);
 
-	dma_free_coherent(vdev->dev.parent, total_buf_space,
-			  vrp->rbufs, vrp->bufs_dma);
+	if (!vrp->bufs_dma) {
+		virtio_free_buffer(vdev, vrp->rbufs, total_buf_space);
+	} else {
+		dma_free_coherent(vdev->dev.parent, total_buf_space,
+				  vrp->rbufs, vrp->bufs_dma);
+	}
 
 	kfree(vrp);
 }

From patchwork Thu Jul 2 08:21:35 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192199
From: Kishon Vijay Abraham I <kishon@ti.com>
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe,
    Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang,
    Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 14/22] rpmsg: Add VHOST based remote processor messaging bus
Date: Thu, 2 Jul 2020 13:51:35 +0530
Message-ID: <20200702082143.25259-15-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>
X-Mailing-List: netdev@vger.kernel.org

Add a VHOST-based inter-processor communication bus, which enables
kernel drivers to communicate with the VIRTIO-based messaging bus
running on remote processors, over shared memory, using a simple
messaging protocol.

Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
---
 drivers/rpmsg/Kconfig           |   10 +
 drivers/rpmsg/Makefile          |    1 +
 drivers/rpmsg/vhost_rpmsg_bus.c | 1151 +++++++++++++++++++++++++++++++
 include/linux/rpmsg.h           |    1 +
 4 files changed, 1163 insertions(+)
 create mode 100644 drivers/rpmsg/vhost_rpmsg_bus.c

-- 
2.17.1

diff --git a/drivers/rpmsg/Kconfig b/drivers/rpmsg/Kconfig
index a9108ff563dc..881712f424d3 100644
--- a/drivers/rpmsg/Kconfig
+++ b/drivers/rpmsg/Kconfig
@@ -64,4 +64,14 @@ config RPMSG_VIRTIO
 	select RPMSG
 	select VIRTIO
 
+config RPMSG_VHOST
+	tristate "Vhost RPMSG bus driver"
+	depends on HAS_DMA
+	select RPMSG
+	select VHOST
+	help
+	  Say y here to enable support for the RPMSG VHOST driver
+	  providing communication channels to remote processors running
+	  the RPMSG VIRTIO driver.
+
 endmenu

diff --git a/drivers/rpmsg/Makefile b/drivers/rpmsg/Makefile
index 047acfda518a..44023b0abe9e 100644
--- a/drivers/rpmsg/Makefile
+++ b/drivers/rpmsg/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_RPMSG_QCOM_GLINK_NATIVE) += qcom_glink_native.o
 obj-$(CONFIG_RPMSG_QCOM_GLINK_SMEM) += qcom_glink_smem.o
 obj-$(CONFIG_RPMSG_QCOM_SMD) += qcom_smd.o
 obj-$(CONFIG_RPMSG_VIRTIO) += virtio_rpmsg_bus.o
+obj-$(CONFIG_RPMSG_VHOST) += vhost_rpmsg_bus.o
diff --git a/drivers/rpmsg/vhost_rpmsg_bus.c b/drivers/rpmsg/vhost_rpmsg_bus.c
new file mode 100644
index 000000000000..6bfb5c64c95a
--- /dev/null
+++ b/drivers/rpmsg/vhost_rpmsg_bus.c
@@ -0,0 +1,1151 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Vhost-based remote processor messaging bus
+ *
+ * Based on virtio_rpmsg_bus.c
+ *
+ * Copyright (C) 2020 Texas Instruments
+ * Author: Kishon Vijay Abraham I <kishon@ti.com>
+ *
+ */
+
+#define pr_fmt(fmt) "%s: " fmt, __func__
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "rpmsg_internal.h"
+
+/**
+ * struct virtproc_info - virtual remote processor state
+ * @vdev:	the vhost device
+ * @rvq:	rx vhost_virtqueue
+ * @svq:	tx vhost_virtqueue
+ * @buf_size:	size of one rx or tx buffer
+ * @tx_lock:	protects svq, sbufs and sleepers, to allow concurrent senders.
+ *		sending a message might require waking up a dozing remote
+ *		processor, which involves sleeping, hence the mutex.
+ * @endpoints:	idr of local endpoints, allows fast retrieval
+ * @endpoints_lock: lock of the endpoints set
+ * @sendq:	wait queue of sending contexts waiting for a tx buffer
+ * @sleepers:	number of senders that are waiting for a tx buffer
+ * @as_ept:	the bus's address service endpoint
+ * @nb:		notifier block for receiving notifications from the vhost
+ *		device driver
+ * @list:	list of client drivers bound to this rpmsg vhost device
+ * @list_lock:	mutex to protect updating the list
+ *
+ * This structure stores the rpmsg state of a given vhost remote processor
+ * device (there might be several virtio proc devices for each physical
+ * remote processor).
+ */
+struct virtproc_info {
+	struct vhost_dev *vdev;
+	struct vhost_virtqueue *rvq, *svq;
+	unsigned int buf_size;
+	/* mutex to protect sending messages */
+	struct mutex tx_lock;
+	/* mutex to protect receiving messages */
+	struct mutex rx_lock;
+	struct idr endpoints;
+	/* mutex to protect accessing the endpoints idr */
+	struct mutex endpoints_lock;
+	wait_queue_head_t sendq;
+	atomic_t sleepers;
+	struct rpmsg_endpoint *as_ept;
+	struct notifier_block nb;
+	struct list_head list;
+	/* mutex to protect updating pending rpdev in vrp */
+	struct mutex list_lock;
+};
+
+/**
+ * struct vhost_rpmsg_channel - vhost rpmsg channel representation
+ * @rpdev: the rpmsg device
+ * @vrp: the remote processor this channel belongs to
+ */
+struct vhost_rpmsg_channel {
+	struct rpmsg_device rpdev;
+
+	struct virtproc_info *vrp;
+};
+
+#define to_vhost_rpmsg_channel(_rpdev) \
+	container_of(_rpdev, struct vhost_rpmsg_channel, rpdev)
+
+static void vhost_rpmsg_destroy_ept(struct rpmsg_endpoint *ept);
+static int vhost_rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len);
+static int vhost_rpmsg_sendto(struct rpmsg_endpoint *ept, void *data, int len,
+			      u32 dst);
+static int vhost_rpmsg_send_offchannel(struct rpmsg_endpoint *ept, u32 src,
+				       u32 dst, void *data, int len);
+static int vhost_rpmsg_trysend(struct rpmsg_endpoint *ept, void *data, int len);
+static int vhost_rpmsg_trysendto(struct rpmsg_endpoint *ept,
+				 void *data,
+				 int len, u32 dst);
+static int vhost_rpmsg_trysend_offchannel(struct rpmsg_endpoint *ept, u32 src,
+					  u32 dst, void *data, int len);
+
+static const struct rpmsg_endpoint_ops vhost_endpoint_ops = {
+	.destroy_ept = vhost_rpmsg_destroy_ept,
+	.send = vhost_rpmsg_send,
+	.sendto = vhost_rpmsg_sendto,
+	.send_offchannel = vhost_rpmsg_send_offchannel,
+	.trysend = vhost_rpmsg_trysend,
+	.trysendto = vhost_rpmsg_trysendto,
+	.trysend_offchannel = vhost_rpmsg_trysend_offchannel,
+};
+
+/**
+ * __ept_release() - deallocate an rpmsg endpoint
+ * @kref: the ept's reference count
+ *
+ * This function deallocates an ept, and is invoked when its @kref refcount
+ * drops to zero.
+ *
+ * Never invoke this function directly!
+ */
+static void __ept_release(struct kref *kref)
+{
+	struct rpmsg_endpoint *ept = container_of(kref, struct rpmsg_endpoint,
+						  refcount);
+	/*
+	 * At this point no one holds a reference to ept anymore,
+	 * so we can directly free it
+	 */
+	kfree(ept);
+}
+
+/**
+ * __rpmsg_create_ept() - create an rpmsg endpoint
+ * @vrp: virtual remote processor of the vhost device on which the endpoint
+ *	has to be created
+ * @rpdev: rpmsg device on which the endpoint has to be created
+ * @cb: callback associated with the endpoint
+ * @priv: private data for the driver's use
+ * @addr: channel_info with the local rpmsg address to bind with @cb
+ *
+ * Allows drivers to create an endpoint, and bind a callback with some
+ * private data, to an rpmsg address.
+ */
+static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
+						 struct rpmsg_device *rpdev,
+						 rpmsg_rx_cb_t cb,
+						 void *priv, u32 addr)
+{
+	int id_min, id_max, id;
+	struct rpmsg_endpoint *ept;
+	struct device *dev = rpdev ?
+			     &rpdev->dev : &vrp->vdev->dev;
+
+	ept = kzalloc(sizeof(*ept), GFP_KERNEL);
+	if (!ept)
+		return NULL;
+
+	kref_init(&ept->refcount);
+	mutex_init(&ept->cb_lock);
+
+	ept->rpdev = rpdev;
+	ept->cb = cb;
+	ept->priv = priv;
+	ept->ops = &vhost_endpoint_ops;
+
+	/* do we need to allocate a local address ? */
+	if (addr == RPMSG_ADDR_ANY) {
+		id_min = RPMSG_RESERVED_ADDRESSES;
+		id_max = 0;
+	} else {
+		id_min = addr;
+		id_max = addr + 1;
+	}
+
+	mutex_lock(&vrp->endpoints_lock);
+
+	/* bind the endpoint to an rpmsg address (and allocate one if needed) */
+	id = idr_alloc(&vrp->endpoints, ept, id_min, id_max, GFP_KERNEL);
+	if (id < 0) {
+		dev_err(dev, "idr_alloc failed: %d\n", id);
+		goto free_ept;
+	}
+	ept->addr = id;
+
+	mutex_unlock(&vrp->endpoints_lock);
+
+	return ept;
+
+free_ept:
+	mutex_unlock(&vrp->endpoints_lock);
+	kref_put(&ept->refcount, __ept_release);
+	return NULL;
+}
+
+/**
+ * vhost_rpmsg_create_ept() - create an rpmsg endpoint
+ * @rpdev: rpmsg device on which the endpoint has to be created
+ * @cb: callback associated with the endpoint
+ * @priv: private data for the driver's use
+ * @chinfo: channel_info with the local rpmsg address to bind with @cb
+ *
+ * Wrapper around __rpmsg_create_ept() to create an rpmsg endpoint.
+ */
+static struct rpmsg_endpoint *
+vhost_rpmsg_create_ept(struct rpmsg_device *rpdev, rpmsg_rx_cb_t cb, void *priv,
+		       struct rpmsg_channel_info chinfo)
+{
+	struct vhost_rpmsg_channel *vch = to_vhost_rpmsg_channel(rpdev);
+
+	return __rpmsg_create_ept(vch->vrp, rpdev, cb, priv, chinfo.src);
+}
+
+/**
+ * __rpmsg_destroy_ept() - destroy an existing rpmsg endpoint
+ * @vrp: virtproc which owns this ept
+ * @ept: endpoint to destroy
+ *
+ * An internal function which destroys an ept without assuming it is
+ * bound to an rpmsg channel. This is needed for handling the internal
+ * name service endpoint, which isn't bound to an rpmsg channel.
+ * See also __rpmsg_create_ept().
+ */
+static void
+__rpmsg_destroy_ept(struct virtproc_info *vrp, struct rpmsg_endpoint *ept)
+{
+	/* make sure new inbound messages can't find this ept anymore */
+	mutex_lock(&vrp->endpoints_lock);
+	idr_remove(&vrp->endpoints, ept->addr);
+	mutex_unlock(&vrp->endpoints_lock);
+
+	/* make sure in-flight inbound messages won't invoke cb anymore */
+	mutex_lock(&ept->cb_lock);
+	ept->cb = NULL;
+	mutex_unlock(&ept->cb_lock);
+
+	kref_put(&ept->refcount, __ept_release);
+}
+
+/**
+ * vhost_rpmsg_destroy_ept() - destroy an existing rpmsg endpoint
+ * @ept: endpoint to destroy
+ *
+ * Wrapper around __rpmsg_destroy_ept() to destroy an rpmsg endpoint.
+ */
+static void vhost_rpmsg_destroy_ept(struct rpmsg_endpoint *ept)
+{
+	struct vhost_rpmsg_channel *vch = to_vhost_rpmsg_channel(ept->rpdev);
+
+	__rpmsg_destroy_ept(vch->vrp, ept);
+}
+
+/**
+ * vhost_rpmsg_announce_create() - announce the creation of a new channel
+ * @rpdev: rpmsg device on which the new endpoint channel is created
+ *
+ * Send a message to the remote processor's name service about the
+ * creation of this channel.
+ */
+static int vhost_rpmsg_announce_create(struct rpmsg_device *rpdev)
+{
+	struct vhost_rpmsg_channel *vch = to_vhost_rpmsg_channel(rpdev);
+	struct virtproc_info *vrp = vch->vrp;
+	struct device *dev = &rpdev->dev;
+	int err = 0;
+
+	/* need to tell remote processor's name service about this channel ?
+	 */
+	if (rpdev->ept && vhost_has_feature(vrp->vdev, VIRTIO_RPMSG_F_NS)) {
+		struct rpmsg_ns_msg nsm;
+
+		strncpy(nsm.name, rpdev->id.name, RPMSG_NAME_SIZE);
+		nsm.addr = rpdev->ept->addr;
+		nsm.flags = RPMSG_NS_CREATE | RPMSG_AS_ANNOUNCE;
+
+		err = rpmsg_sendto(rpdev->ept, &nsm, sizeof(nsm), RPMSG_NS_ADDR);
+		if (err)
+			dev_err(dev, "failed to announce service %d\n", err);
+	}
+
+	return err;
+}
+
+/**
+ * vhost_rpmsg_announce_destroy() - announce the deletion of a channel
+ * @rpdev: rpmsg device on which this endpoint channel was created
+ *
+ * Send a message to the remote processor's name service about the
+ * deletion of this channel.
+ */
+static int vhost_rpmsg_announce_destroy(struct rpmsg_device *rpdev)
+{
+	struct vhost_rpmsg_channel *vch = to_vhost_rpmsg_channel(rpdev);
+	struct virtproc_info *vrp = vch->vrp;
+	struct device *dev = &rpdev->dev;
+	int err = 0;
+
+	/* tell remote processor's name service we're removing this channel */
+	if (rpdev->announce && rpdev->ept &&
+	    vhost_has_feature(vrp->vdev, VIRTIO_RPMSG_F_NS)) {
+		struct rpmsg_ns_msg nsm;
+
+		strncpy(nsm.name, rpdev->id.name, RPMSG_NAME_SIZE);
+		nsm.addr = rpdev->ept->addr;
+		nsm.flags = RPMSG_NS_DESTROY;
+
+		err = rpmsg_sendto(rpdev->ept, &nsm, sizeof(nsm), RPMSG_NS_ADDR);
+		if (err)
+			dev_err(dev, "failed to announce service %d\n", err);
+	}
+
+	return err;
+}
+
+static const struct rpmsg_device_ops vhost_rpmsg_ops = {
+	.create_ept = vhost_rpmsg_create_ept,
+	.announce_create = vhost_rpmsg_announce_create,
+	.announce_destroy = vhost_rpmsg_announce_destroy,
+};
+
+/**
+ * vhost_rpmsg_release_device() - callback to free a vhost_rpmsg_channel
+ * @dev: struct device of the rpmsg_device
+ *
+ * Invoked from the device core after all references to "dev" are dropped,
+ * to free the wrapping vhost_rpmsg_channel.
+ */
+static void vhost_rpmsg_release_device(struct device *dev)
+{
+	struct rpmsg_device *rpdev = to_rpmsg_device(dev);
+	struct vhost_rpmsg_channel *vch = to_vhost_rpmsg_channel(rpdev);
+
+	kfree(vch);
+}
+
+/**
+ * vhost_rpmsg_create_channel() - create an rpmsg channel
+ * @dev: struct device of the vhost_dev
+ * @name: name of the rpmsg channel to be created
+ *
+ * Create an rpmsg channel using its name. Invokes rpmsg_register_device()
+ * only if the status is VIRTIO_CONFIG_S_DRIVER_OK, or else just adds it to
+ * the list of pending rpmsg devices. This is because if the rpmsg client
+ * driver is already loaded when rpmsg is being registered, it'll try
+ * to start accessing the virtqueue, which will be ready only after VIRTIO
+ * sets the status to VIRTIO_CONFIG_S_DRIVER_OK.
+ */
+struct device *vhost_rpmsg_create_channel(struct device *dev, const char *name)
+{
+	struct vhost_rpmsg_channel *vch;
+	struct rpmsg_device *rpdev;
+	struct virtproc_info *vrp;
+	struct vhost_dev *vdev;
+	u8 status;
+	int ret;
+
+	vdev = to_vhost_dev(dev);
+	status = vhost_get_status(vdev);
+	vrp = vhost_get_drvdata(vdev);
+
+	vch = kzalloc(sizeof(*vch), GFP_KERNEL);
+	if (!vch)
+		return ERR_PTR(-ENOMEM);
+
+	/* Link the channel to our vrp */
+	vch->vrp = vrp;
+
+	/* Assign public information to the rpmsg_device */
+	rpdev = &vch->rpdev;
+	rpdev->src = RPMSG_ADDR_ANY;
+	rpdev->dst = RPMSG_ADDR_ANY;
+	rpdev->ops = &vhost_rpmsg_ops;
+
+	rpdev->announce = true;
+
+	strncpy(rpdev->id.name, name, RPMSG_NAME_SIZE);
+
+	rpdev->dev.parent = &vrp->vdev->dev;
+	rpdev->dev.release = vhost_rpmsg_release_device;
+	if (!(status & VIRTIO_CONFIG_S_DRIVER_OK)) {
+		mutex_lock(&vrp->list_lock);
+		list_add_tail(&rpdev->list, &vrp->list);
+		mutex_unlock(&vrp->list_lock);
+	} else {
+		ret = rpmsg_register_device(rpdev);
+		if (ret)
+			return ERR_PTR(-EINVAL);
+	}
+
+	return &rpdev->dev;
+}
+EXPORT_SYMBOL_GPL(vhost_rpmsg_create_channel);
+
+/**
+ * vhost_rpmsg_delete_channel() - delete an rpmsg channel
+ * @dev: struct device of the
+ *	rpmsg_device
+ *
+ * Delete a channel created using vhost_rpmsg_create_channel().
+ */
+void vhost_rpmsg_delete_channel(struct device *dev)
+{
+	struct rpmsg_device *rpdev = to_rpmsg_device(dev);
+	struct vhost_rpmsg_channel *vch;
+	struct virtproc_info *vrp;
+	struct vhost_dev *vdev;
+	u8 status;
+
+	vch = to_vhost_rpmsg_channel(rpdev);
+	vrp = vch->vrp;
+	vdev = vrp->vdev;
+	status = vhost_get_status(vdev);
+
+	if (!(status & VIRTIO_CONFIG_S_DRIVER_OK)) {
+		mutex_lock(&vrp->list_lock);
+		list_del(&rpdev->list);
+		mutex_unlock(&vrp->list_lock);
+		kfree(vch);
+	} else {
+		device_unregister(dev);
+	}
+}
+EXPORT_SYMBOL_GPL(vhost_rpmsg_delete_channel);
+
+static const struct rpmsg_virtproc_ops vhost_rpmsg_virtproc_ops = {
+	.create_channel = vhost_rpmsg_create_channel,
+	.delete_channel = vhost_rpmsg_delete_channel,
+};
+
+/**
+ * rpmsg_upref_sleepers() - enable "tx-complete" interrupts, if needed
+ * @vrp: virtual remote processor state
+ *
+ * This function is called before a sender is blocked, waiting for
+ * a tx buffer to become available.
+ *
+ * If we already have blocking senders, this function merely increases
+ * the "sleepers" reference count, and exits.
+ *
+ * Otherwise, if this is the first sender to block, we also enable
+ * virtio's tx callbacks, so we'd be immediately notified when a tx
+ * buffer is consumed (we rely on virtio's tx callback in order
+ * to wake up sleeping senders as soon as a tx buffer is used by the
+ * remote processor).
+ */
+static void rpmsg_upref_sleepers(struct virtproc_info *vrp)
+{
+	/* support multiple concurrent senders */
+	mutex_lock(&vrp->tx_lock);
+
+	/* are we the first sleeping context waiting for tx buffers ?
+	 */
+	if (atomic_inc_return(&vrp->sleepers) == 1)
+		/* enable "tx-complete" interrupts before dozing off */
+		vhost_virtqueue_enable_cb(vrp->svq);
+
+	mutex_unlock(&vrp->tx_lock);
+}
+
+/**
+ * rpmsg_downref_sleepers() - disable "tx-complete" interrupts, if needed
+ * @vrp: virtual remote processor state
+ *
+ * This function is called after a sender, that waited for a tx buffer
+ * to become available, is unblocked.
+ *
+ * If we still have blocking senders, this function merely decreases
+ * the "sleepers" reference count, and exits.
+ *
+ * Otherwise, if there are no more blocking senders, we also disable
+ * virtio's tx callbacks, to avoid the overhead incurred with handling
+ * those (now redundant) interrupts.
+ */
+static void rpmsg_downref_sleepers(struct virtproc_info *vrp)
+{
+	/* support multiple concurrent senders */
+	mutex_lock(&vrp->tx_lock);
+
+	/* are we the last sleeping context waiting for tx buffers ? */
+	if (atomic_dec_and_test(&vrp->sleepers))
+		/* disable "tx-complete" interrupts */
+		vhost_virtqueue_disable_cb(vrp->svq);
+
+	mutex_unlock(&vrp->tx_lock);
+}
+
+/**
+ * rpmsg_send_offchannel_raw() - send a message across to the remote processor
+ * @rpdev: the rpmsg channel
+ * @src: source address
+ * @dst: destination address
+ * @data: payload of message
+ * @len: length of payload
+ * @wait: indicates whether caller should block in case no TX buffers available
+ *
+ * This function is the base implementation for all of the rpmsg sending API.
+ *
+ * It will send @data of length @len to @dst, and say it's from @src. The
+ * message will be sent to the remote processor which the @rpdev channel
+ * belongs to.
+ *
+ * The message is sent using one of the TX buffers that are available for
+ * communication with this remote processor.
+ *
+ * If @wait is true, the caller will be blocked until either a TX buffer is
+ * available, or 15 seconds elapses (we don't want callers to
+ * sleep indefinitely due to misbehaving remote processors), and in that
+ * case -ERESTARTSYS is returned. The number '15' itself was picked
+ * arbitrarily; there's little point in asking drivers to provide a timeout
+ * value themselves.
+ *
+ * Otherwise, if @wait is false, and there are no TX buffers available,
+ * the function will immediately fail, and -ENOMEM will be returned.
+ *
+ * Normally drivers shouldn't use this function directly; instead, drivers
+ * should use the appropriate rpmsg_{try}send{to, _offchannel} API
+ * (see include/linux/rpmsg.h).
+ *
+ * Returns 0 on success and an appropriate error value on failure.
+ */
+static int rpmsg_send_offchannel_raw(struct rpmsg_device *rpdev,
+				     u32 src, u32 dst,
+				     void *data, int len, bool wait)
+{
+	struct vhost_rpmsg_channel *vch = to_vhost_rpmsg_channel(rpdev);
+	struct virtproc_info *vrp = vch->vrp;
+	struct vhost_virtqueue *svq = vrp->svq;
+	struct vhost_dev *vdev = svq->dev;
+	struct device *dev = &rpdev->dev;
+	struct rpmsg_hdr msg;
+	int length;
+	u16 head;
+	u64 base;
+	int err;
+
+	/*
+	 * We currently use fixed-sized buffers, and therefore the payload
+	 * length is limited.
+	 *
+	 * One of the possible improvements here is either to support
+	 * user-provided buffers (and then we can also support zero-copy
+	 * messaging), or to improve the buffer allocator, to support
+	 * variable-length buffer sizes.
+	 */
+	if (len > vrp->buf_size - sizeof(struct rpmsg_hdr)) {
+		dev_err(dev, "message is too big (%d)\n", len);
+		return -EMSGSIZE;
+	}
+
+	mutex_lock(&vrp->tx_lock);
+	/* grab a buffer */
+	base = vhost_virtqueue_get_outbuf(svq, &head, &length);
+	if (!base && !wait) {
+		dev_err(dev, "Failed to get buffer for OUT transfers\n");
+		err = -ENOMEM;
+		goto out;
+	}
+
+	/* no free buffer ?
wait for one (but bail after 15 seconds) */ + while (!base) { + /* enable "tx-complete" interrupts, if not already enabled */ + rpmsg_upref_sleepers(vrp); + + /* + * sleep until a free buffer is available or 15 secs elapse. + * the timeout period is not configurable because there's + * little point in asking drivers to specify that. + * if later this happens to be required, it'd be easy to add. + */ + err = wait_event_interruptible_timeout + (vrp->sendq, (base = + vhost_virtqueue_get_outbuf(svq, &head, + &length)), + msecs_to_jiffies(15000)); + + /* disable "tx-complete" interrupts if we're the last sleeper */ + rpmsg_downref_sleepers(vrp); + + /* timeout ? */ + if (!err) { + dev_err(dev, "timeout waiting for a tx buffer\n"); + err = -ERESTARTSYS; + goto out; + } + } + + msg.len = len; + msg.flags = 0; + msg.src = src; + msg.dst = dst; + msg.reserved = 0; + /* + * Perform two writes, one for rpmsg header and other for actual buffer + * data, instead of squashing the data into one buffer and then send + * them to the vhost layer. 
+ */ + err = vhost_write(vdev, base, &msg, sizeof(struct rpmsg_hdr)); + if (err) { + dev_err(dev, "Failed to write rpmsg header to remote buffer\n"); + goto out; + } + + err = vhost_write(vdev, base + sizeof(struct rpmsg_hdr), data, len); + if (err) { + dev_err(dev, "Failed to write buffer data to remote buffer\n"); + goto out; + } + + dev_dbg(dev, "TX From 0x%x, To 0x%x, Len %d, Flags %d, Reserved %d\n", + msg.src, msg.dst, msg.len, msg.flags, msg.reserved); +#if defined(CONFIG_DYNAMIC_DEBUG) + dynamic_hex_dump("rpmsg_virtio TX: ", DUMP_PREFIX_NONE, 16, 1, + &msg, sizeof(msg) + msg.len, true); +#endif + + vhost_virtqueue_put_buf(svq, head, len + sizeof(struct rpmsg_hdr)); + + /* tell the remote processor it has a pending message to read */ + vhost_virtqueue_kick(vrp->svq); + +out: + mutex_unlock(&vrp->tx_lock); + + return err; +} + +static int vhost_rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len) +{ + struct rpmsg_device *rpdev = ept->rpdev; + u32 src = ept->addr, dst = rpdev->dst; + + return rpmsg_send_offchannel_raw(rpdev, src, dst, data, len, true); +} + +static int vhost_rpmsg_sendto(struct rpmsg_endpoint *ept, void *data, int len, + u32 dst) +{ + struct rpmsg_device *rpdev = ept->rpdev; + u32 src = ept->addr; + + return rpmsg_send_offchannel_raw(rpdev, src, dst, data, len, true); +} + +static int vhost_rpmsg_send_offchannel(struct rpmsg_endpoint *ept, u32 src, + u32 dst, void *data, int len) +{ + struct rpmsg_device *rpdev = ept->rpdev; + + return rpmsg_send_offchannel_raw(rpdev, src, dst, data, len, true); +} + +static int vhost_rpmsg_trysend(struct rpmsg_endpoint *ept, void *data, int len) +{ + struct rpmsg_device *rpdev = ept->rpdev; + u32 src = ept->addr, dst = rpdev->dst; + + return rpmsg_send_offchannel_raw(rpdev, src, dst, data, len, false); +} + +static int vhost_rpmsg_trysendto(struct rpmsg_endpoint *ept, void *data, + int len, u32 dst) +{ + struct rpmsg_device *rpdev = ept->rpdev; + u32 src = ept->addr; + + return 
rpmsg_send_offchannel_raw(rpdev, src, dst, data, len, false); +} + +static int vhost_rpmsg_trysend_offchannel(struct rpmsg_endpoint *ept, u32 src, + u32 dst, void *data, int len) +{ + struct rpmsg_device *rpdev = ept->rpdev; + + return rpmsg_send_offchannel_raw(rpdev, src, dst, data, len, false); +} + +/** + * rpmsg_recv_single - Invoked when a buffer is received from remote VIRTIO dev + * @vrp: virtual remote processor of the vhost device which has received a msg + * @dev: struct device of vhost_dev + * @msg: pointer to the rpmsg_hdr + * @len: length of the received buffer + * + * Invoked when a buffer is received from remote VIRTIO device. It gets the + * destination address from rpmsg_hdr and invokes the callback of the endpoint + * corresponding to the address + */ +static int rpmsg_recv_single(struct virtproc_info *vrp, struct device *dev, + struct rpmsg_hdr *msg, unsigned int len) +{ + struct rpmsg_endpoint *ept; + + dev_dbg(dev, "From: 0x%x, To: 0x%x, Len: %d, Flags: %d, Reserved: %d\n", + msg->src, msg->dst, msg->len, msg->flags, msg->reserved); +#if defined(CONFIG_DYNAMIC_DEBUG) + dynamic_hex_dump("rpmsg_virtio RX: ", DUMP_PREFIX_NONE, 16, 1, + msg, sizeof(*msg) + msg->len, true); +#endif + + /* + * We currently use fixed-sized buffers, so trivially sanitize + * the reported payload length. 
+ */ + if (len > vrp->buf_size || + msg->len > (len - sizeof(struct rpmsg_hdr))) { + dev_warn(dev, "inbound msg too big: (%d, %d)\n", len, msg->len); + return -EINVAL; + } + + /* use the dst addr to fetch the callback of the appropriate user */ + mutex_lock(&vrp->endpoints_lock); + + ept = idr_find(&vrp->endpoints, msg->dst); + + /* let's make sure no one deallocates ept while we use it */ + if (ept) + kref_get(&ept->refcount); + + mutex_unlock(&vrp->endpoints_lock); + + if (ept) { + /* make sure ept->cb doesn't go away while we use it */ + mutex_lock(&ept->cb_lock); + + if (ept->cb) + ept->cb(ept->rpdev, msg->data, msg->len, ept->priv, + msg->src); + + mutex_unlock(&ept->cb_lock); + + /* farewell, ept, we don't need you anymore */ + kref_put(&ept->refcount, __ept_release); + } else { + dev_warn(dev, "msg received with no recipient\n"); + } + + return 0; +} + +/** + * vhost_rpmsg_recv_done - Callback of the receive virtqueue + * @rvq: Receive virtqueue + * + * Invoked when the remote VIRTIO device sends a notification on the receive + * virtqueue. It gets base address of the input buffer and repeatedly calls + * rpmsg_recv_single() until no more buffers are left to be read. 
+ */
+static void vhost_rpmsg_recv_done(struct vhost_virtqueue *rvq)
+{
+	struct vhost_dev *vdev = rvq->dev;
+	struct virtproc_info *vrp = vhost_get_drvdata(vdev);
+	unsigned int len, msgs_received = 0;
+	struct device *dev = &vdev->dev;
+	struct rpmsg_hdr *msg;
+	u64 base;
+	u16 head;
+	int err;
+
+	base = vhost_virtqueue_get_inbuf(rvq, &head, &len);
+	if (!base) {
+		dev_err(dev, "uhm, incoming signal, but no used buffer ?\n");
+		return;
+	}
+
+	vhost_virtqueue_disable_cb(rvq);
+	while (base) {
+		msg = kzalloc(len, GFP_KERNEL);
+		if (!msg)
+			break;	/* still re-enable callbacks below */
+
+		vhost_read(rvq->dev, msg, base, len);
+
+		err = rpmsg_recv_single(vrp, dev, msg, len);
+		kfree(msg);	/* free in all cases, not only on success */
+		if (err)
+			break;
+
+		vhost_virtqueue_put_buf(rvq, head, len);
+		msgs_received++;
+
+		base = vhost_virtqueue_get_inbuf(rvq, &head, &len);
+	}
+	vhost_virtqueue_enable_cb(rvq);
+
+	dev_dbg(dev, "Received %u messages\n", msgs_received);
+
+	/* tell the remote processor we added another available rx buffer */
+	if (msgs_received)
+		vhost_virtqueue_kick(vrp->rvq);
+}
+
+/**
+ * vhost_rpmsg_xmit_done - Callback of the send virtqueue
+ * @svq: Send virtqueue
+
+ * This is invoked whenever the remote processor has completed processing
+ * a TX msg we just sent it, and the buffer is put back to the used ring.
+ *
+ * Normally, though, we suppress this "tx complete" interrupt in order to
+ * avoid the incurred overhead.
+ */
+static void vhost_rpmsg_xmit_done(struct vhost_virtqueue *svq)
+{
+	struct vhost_dev *vdev = svq->dev;
+	struct virtproc_info *vrp = vhost_get_drvdata(vdev);
+	struct device *dev = &vdev->dev;
+
+	dev_dbg(dev, "%s\n", __func__);
+
+	/* wake up potential senders that are waiting for a tx buffer */
+	wake_up_interruptible(&vrp->sendq);
+}
+
+/**
+ * vhost_rpmsg_as_cb - Callback of address service announcement
+ * @rpdev: rpmsg channel device; the AS endpoint is handled by the bus
+ *         itself, so this is expected to be NULL
+ * @data: rpmsg_as_msg sent by remote VIRTIO device
+ * @len: length of the received message
+ * @priv: private data for the driver's use
+ * @hdr_src: source address of the remote VIRTIO device that sent the AS
+ *           announcement
+ *
+ * Invoked when an address service announcement arrives to assign the
+ * destination address of the rpmsg device.
+ */
+static int vhost_rpmsg_as_cb(struct rpmsg_device *rpdev, void *data, int len,
+			     void *priv, u32 hdr_src)
+{
+	struct virtproc_info *vrp = priv;
+	struct device *dev = &vrp->vdev->dev;
+	struct rpmsg_channel_info chinfo;
+	struct rpmsg_as_msg *msg = data;
+	struct rpmsg_device *rpmsg_dev;
+	struct device *rdev;
+	int ret = 0;
+	u32 flags;
+	u32 src;
+	u32 dst;
+
+#if defined(CONFIG_DYNAMIC_DEBUG)
+	dynamic_hex_dump("AS announcement: ", DUMP_PREFIX_NONE, 16, 1,
+			 data, len, true);
+#endif
+
+	if (len == sizeof(*msg)) {
+		src = msg->src;
+		dst = msg->dst;
+		flags = msg->flags;
+	} else {
+		dev_err(dev, "malformed AS msg (%d)\n", len);
+		return -EINVAL;
+	}
+
+	/*
+	 * the address service ept does _not_ belong to a real rpmsg channel,
+	 * and is handled by the rpmsg bus itself.
+	 * for sanity reasons, make sure a valid rpdev has _not_ sneaked
+	 * in somehow.
+	 */
+	if (rpdev) {
+		dev_err(dev, "anomaly: as ept has an rpdev handle\n");
+		return -EINVAL;
+	}
+
+	/* don't trust the remote processor for null terminating the name */
+	msg->name[RPMSG_NAME_SIZE - 1] = '\0';
+
+	dev_info(dev, "%sing dst addr 0x%x to channel %s src 0x%x\n",
+		 flags & RPMSG_AS_ASSIGN ?
"Assign" : "Free",
+		 dst, msg->name, src);
+
+	strncpy(chinfo.name, msg->name, sizeof(chinfo.name));
+	chinfo.src = src;
+	chinfo.dst = RPMSG_ADDR_ANY;
+
+	/* Find a similar channel */
+	rdev = rpmsg_find_device(dev, &chinfo);
+	if (!rdev) {
+		ret = -ENODEV;
+		goto err_find_device;
+	}
+
+	rpmsg_dev = to_rpmsg_device(rdev);
+	if (flags & RPMSG_AS_ASSIGN) {
+		if (rpmsg_dev->dst != RPMSG_ADDR_ANY) {
+			dev_err(dev, "Address bound to channel %s src 0x%x\n",
+				msg->name, src);
+			ret = -EBUSY;
+			goto err_find_device;
+		}
+		rpmsg_dev->dst = dst;
+	} else {
+		rpmsg_dev->dst = RPMSG_ADDR_ANY;
+	}
+
+err_find_device:
+	put_device(rdev);
+
+	return ret;
+}
+
+/**
+ * vhost_rpmsg_finalize_feature - Perform initializations for negotiated
+ * features
+ * @vrp: virtual remote processor of the vhost device where the feature has been
+ * negotiated
+ *
+ * Invoked when features negotiation between VHOST and VIRTIO device is
+ * completed.
+ */
+static int vhost_rpmsg_finalize_feature(struct virtproc_info *vrp)
+{
+	struct vhost_dev *vdev = vrp->vdev;
+
+	/* if supported by the remote processor, enable the address service */
+	if (vhost_has_feature(vdev, VIRTIO_RPMSG_F_AS)) {
+		/* a dedicated endpoint handles the address service msgs */
+		vrp->as_ept = __rpmsg_create_ept(vrp, NULL, vhost_rpmsg_as_cb,
+						 vrp, RPMSG_AS_ADDR);
+		if (!vrp->as_ept) {
+			dev_err(&vdev->dev, "failed to create the as ept\n");
+			return -ENOMEM;
+		}
+	} else {
+		dev_err(&vdev->dev, "Address Service not supported\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/**
+ * vhost_rpmsg_set_status - Perform initialization when remote VIRTIO device
+ * updates status
+ * @vrp: virtual remote processor of the vhost device whose status has been
+ * updated
+ *
+ * Invoked when the remote VIRTIO device updates status. If status is set
+ * to VIRTIO_CONFIG_S_DRIVER_OK, invoke rpmsg_register_device() for every
+ * unregistered rpmsg device.
+ */
+static int vhost_rpmsg_set_status(struct virtproc_info *vrp)
+{
+	struct vhost_dev *vdev = vrp->vdev;
+	struct rpmsg_device *rpdev, *tmp;
+	u8 status;
+	int ret;
+
+	status = vhost_get_status(vdev);
+
+	if (status & VIRTIO_CONFIG_S_DRIVER_OK) {
+		mutex_lock(&vrp->list_lock);
+		list_for_each_entry_safe(rpdev, tmp, &vrp->list, list) {
+			ret = rpmsg_register_device(rpdev);
+			if (ret) {
+				mutex_unlock(&vrp->list_lock);
+				return -EINVAL;
+			}
+			/* drop each registered device from the pending list;
+			 * deleting the list head itself would corrupt it */
+			list_del(&rpdev->list);
+		}
+		mutex_unlock(&vrp->list_lock);
+	}
+
+	return 0;
+}
+
+/**
+ * vhost_rpmsg_notifier - Notifier to notify updates from remote VIRTIO device
+ * @nb: notifier block associated with this virtual remote processor
+ * @notify_reason: indicates the update (finalize features or set status) from
+ * the remote host
+ * @data: unused here
+ *
+ * Invoked when the remote VIRTIO device updates status or finalizes features.
+ */
+static int vhost_rpmsg_notifier(struct notifier_block *nb, unsigned long notify_reason,
+				void *data)
+{
+	struct virtproc_info *vrp = container_of(nb, struct virtproc_info, nb);
+	struct vhost_dev *vdev = vrp->vdev;
+	int ret;
+
+	switch (notify_reason) {
+	case NOTIFY_FINALIZE_FEATURES:
+		ret = vhost_rpmsg_finalize_feature(vrp);
+		if (ret)
+			dev_err(&vdev->dev, "failed to finalize features\n");
+		break;
+	case NOTIFY_SET_STATUS:
+		ret = vhost_rpmsg_set_status(vrp);
+		if (ret)
+			dev_err(&vdev->dev, "failed to set status\n");
+		break;
+	default:
+		dev_err(&vdev->dev, "Unsupported notification\n");
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static unsigned int vhost_rpmsg_features[] = {
+	VIRTIO_RPMSG_F_AS,
+	VIRTIO_RPMSG_F_NS,
+};
+
+/**
+ * vhost_rpmsg_set_features - Sets supported features on the VHOST device
+ * @vdev: VHOST device whose supported features are to be set
+ *
+ * Build supported features from the feature table and invoke
+ * vhost_set_features() to set the supported features on the VHOST device.
+ */
+static int vhost_rpmsg_set_features(struct vhost_dev *vdev)
+{
+	unsigned int feature_table_size;
+	unsigned int feature;
+	u64 device_features;
+	int 
ret, i;
+
+	/* start from zero: device_features is OR-ed into below */
+	device_features = 0;
+	feature_table_size = ARRAY_SIZE(vhost_rpmsg_features);
+	for (i = 0; i < feature_table_size; i++) {
+		feature = vhost_rpmsg_features[i];
+		WARN_ON(feature >= 64);
+		device_features |= (1ULL << feature);
+	}
+
+	ret = vhost_set_features(vdev, device_features);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+/**
+ * vhost_rpmsg_probe - Create virtual remote processor for the VHOST device
+ * @vdev: VHOST device with vendor ID and device ID supported by this driver
+ *
+ * Invoked when a VHOST device is registered with a vendor ID and device ID
+ * supported by this driver. Creates and initializes the virtual remote
+ * processor for the VHOST device.
+ */
+static int vhost_rpmsg_probe(struct vhost_dev *vdev)
+{
+	vhost_vq_callback_t *vq_cbs[] = { vhost_rpmsg_xmit_done, vhost_rpmsg_recv_done };
+	static const char * const names[] = { "output", "input" };
+	struct device *dev = &vdev->dev;
+	struct vhost_virtqueue *vqs[2];
+	struct config_group *group;
+	struct virtproc_info *vrp;
+	int err;
+
+	vrp = devm_kzalloc(dev, sizeof(*vrp), GFP_KERNEL);
+	if (!vrp)
+		return -ENOMEM;
+
+	vrp->vdev = vdev;
+
+	idr_init(&vrp->endpoints);
+	mutex_init(&vrp->endpoints_lock);
+	mutex_init(&vrp->tx_lock);
+	mutex_init(&vrp->rx_lock);
+	mutex_init(&vrp->list_lock);
+	init_waitqueue_head(&vrp->sendq);
+
+	err = vhost_rpmsg_set_features(vdev);
+	if (err) {
+		dev_err(dev, "Failed to set features\n");
+		return err;
+	}
+
+	/* We expect two vhost_virtqueues, tx and rx (and in this order) */
+	err = vhost_create_vqs(vdev, 2, MAX_RPMSG_NUM_BUFS / 2, vqs, vq_cbs,
+			       names);
+	if (err) {
+		dev_err(dev, "Failed to create virtqueues\n");
+		return err;
+	}
+
+	vrp->svq = vqs[0];
+	vrp->rvq = vqs[1];
+
+	vrp->buf_size = MAX_RPMSG_BUF_SIZE;
+
+	vhost_set_drvdata(vdev, vrp);
+
+	vrp->nb.notifier_call = vhost_rpmsg_notifier;
+	vhost_register_notifier(vdev, &vrp->nb);
+	INIT_LIST_HEAD(&vrp->list);
+
+	group = rpmsg_cfs_add_virtproc_group(dev,
+					     &vhost_rpmsg_virtproc_ops);
+	if (IS_ERR(group)) {
+
err = PTR_ERR(group);
+		goto err;
+	}
+
+	dev_info(&vdev->dev, "vhost rpmsg host is online\n");
+
+	return 0;
+
+err:
+	vhost_del_vqs(vdev);
+
+	return err;
+}
+
+static int vhost_rpmsg_remove_device(struct device *dev, void *data)
+{
+	device_unregister(dev);
+
+	return 0;
+}
+
+static int vhost_rpmsg_remove(struct vhost_dev *vdev)
+{
+	struct virtproc_info *vrp = vhost_get_drvdata(vdev);
+	int ret;
+
+	ret = device_for_each_child(&vdev->dev, NULL, vhost_rpmsg_remove_device);
+	if (ret)
+		dev_warn(&vdev->dev, "can't remove rpmsg device: %d\n", ret);
+
+	if (vrp->as_ept)
+		__rpmsg_destroy_ept(vrp, vrp->as_ept);
+
+	idr_destroy(&vrp->endpoints);
+
+	vhost_del_vqs(vdev);
+
+	/* vrp was devm_kzalloc()'d against &vdev->dev, so it must not be
+	 * kfree()'d here; it is released automatically on driver detach. */
+	return 0;
+}
+
+static struct vhost_device_id vhost_rpmsg_id_table[] = {
+	{ VIRTIO_ID_RPMSG, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct vhost_driver vhost_rpmsg_driver = {
+	.driver.name = KBUILD_MODNAME,
+	.driver.owner = THIS_MODULE,
+	.id_table = vhost_rpmsg_id_table,
+	.probe = vhost_rpmsg_probe,
+	.remove = vhost_rpmsg_remove,
+};
+
+static int __init vhost_rpmsg_init(void)
+{
+	int ret;
+
+	ret = vhost_register_driver(&vhost_rpmsg_driver);
+	if (ret)
+		pr_err("Failed to register vhost rpmsg driver: %d\n", ret);
+
+	return ret;
+}
+module_init(vhost_rpmsg_init);
+
+static void __exit vhost_rpmsg_exit(void)
+{
+	vhost_unregister_driver(&vhost_rpmsg_driver);
+}
+module_exit(vhost_rpmsg_exit);
+
+MODULE_DEVICE_TABLE(vhost, vhost_rpmsg_id_table);
+MODULE_DESCRIPTION("Vhost-based remote processor messaging bus");
+MODULE_LICENSE("GPL v2");

diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
index b9d9283b46ac..5d60d13efaf4 100644
--- a/include/linux/rpmsg.h
+++ b/include/linux/rpmsg.h
@@ -55,6 +55,7 @@ struct rpmsg_device {
 	u32 dst;
 	struct rpmsg_endpoint *ept;
 	bool announce;
+	struct list_head list;
 	const struct rpmsg_device_ops *ops;
 };

From patchwork Thu Jul 2 08:21:36 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192204
Delivered-To: patch@linaro.org
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe, Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang, Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 15/22] samples/rpmsg: Setup delayed work to send message
Date: Thu, 2 Jul 2020 13:51:36 +0530
Message-ID: <20200702082143.25259-16-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>
X-Mailing-List: netdev@vger.kernel.org

Let us not send any message to the remote processor before the
announce_create() callback has been invoked. Since announce_create() is
only invoked after ->probe() is completed, set up delayed work to start
sending messages to the remote processor.
Signed-off-by: Kishon Vijay Abraham I --- samples/rpmsg/rpmsg_client_sample.c | 28 +++++++++++++++++++++------- 1 file changed, 21 insertions(+), 7 deletions(-) -- 2.17.1 diff --git a/samples/rpmsg/rpmsg_client_sample.c b/samples/rpmsg/rpmsg_client_sample.c index ae5081662283..514a51945d69 100644 --- a/samples/rpmsg/rpmsg_client_sample.c +++ b/samples/rpmsg/rpmsg_client_sample.c @@ -20,6 +20,8 @@ module_param(count, int, 0644); struct instance_data { int rx_count; + struct delayed_work send_msg_work; + struct rpmsg_device *rpdev; }; static int rpmsg_sample_cb(struct rpmsg_device *rpdev, void *data, int len, @@ -48,9 +50,21 @@ static int rpmsg_sample_cb(struct rpmsg_device *rpdev, void *data, int len, return 0; } -static int rpmsg_sample_probe(struct rpmsg_device *rpdev) +static void rpmsg_sample_send_msg_work(struct work_struct *work) { + struct instance_data *idata = container_of(work, struct instance_data, + send_msg_work.work); + struct rpmsg_device *rpdev = idata->rpdev; int ret; + + /* send a message to our remote processor */ + ret = rpmsg_send(rpdev->ept, MSG, strlen(MSG)); + if (ret) + dev_err(&rpdev->dev, "rpmsg_send failed: %d\n", ret); +} + +static int rpmsg_sample_probe(struct rpmsg_device *rpdev) +{ struct instance_data *idata; dev_info(&rpdev->dev, "new channel: 0x%x -> 0x%x!\n", @@ -62,18 +76,18 @@ static int rpmsg_sample_probe(struct rpmsg_device *rpdev) dev_set_drvdata(&rpdev->dev, idata); - /* send a message to our remote processor */ - ret = rpmsg_send(rpdev->ept, MSG, strlen(MSG)); - if (ret) { - dev_err(&rpdev->dev, "rpmsg_send failed: %d\n", ret); - return ret; - } + idata->rpdev = rpdev; + INIT_DELAYED_WORK(&idata->send_msg_work, rpmsg_sample_send_msg_work); + schedule_delayed_work(&idata->send_msg_work, msecs_to_jiffies(500)); return 0; } static void rpmsg_sample_remove(struct rpmsg_device *rpdev) { + struct instance_data *idata = dev_get_drvdata(&rpdev->dev); + + cancel_delayed_work_sync(&idata->send_msg_work); dev_info(&rpdev->dev, "rpmsg 
sample client driver is removed\n");
}

From patchwork Thu Jul 2 08:21:37 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192198
Delivered-To: patch@linaro.org
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe, Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang, Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 16/22] samples/rpmsg: Wait for address to be bound to rpdev for sending message
Date: Thu, 2 Jul 2020 13:51:37 +0530
Message-ID: <20200702082143.25259-17-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>
X-Mailing-List: netdev@vger.kernel.org

rpmsg_client_sample can use either virtio_rpmsg_bus or vhost_rpmsg_bus to
send messages. In the case of vhost_rpmsg_bus, the destination address of
rpdev will be assigned only after receiving an address service
notification from the remote virtio_rpmsg_bus. Wait for an address to be
bound to rpdev (for the rpmsg_client_sample running on the vhost system)
before sending messages to the remote virtio_rpmsg_bus.
Signed-off-by: Kishon Vijay Abraham I --- samples/rpmsg/rpmsg_client_sample.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) -- 2.17.1 diff --git a/samples/rpmsg/rpmsg_client_sample.c b/samples/rpmsg/rpmsg_client_sample.c index 514a51945d69..07149d7fbd0c 100644 --- a/samples/rpmsg/rpmsg_client_sample.c +++ b/samples/rpmsg/rpmsg_client_sample.c @@ -57,10 +57,14 @@ static void rpmsg_sample_send_msg_work(struct work_struct *work) struct rpmsg_device *rpdev = idata->rpdev; int ret; - /* send a message to our remote processor */ - ret = rpmsg_send(rpdev->ept, MSG, strlen(MSG)); - if (ret) - dev_err(&rpdev->dev, "rpmsg_send failed: %d\n", ret); + if (rpdev->dst != RPMSG_ADDR_ANY) { + /* send a message to our remote processor */ + ret = rpmsg_send(rpdev->ept, MSG, strlen(MSG)); + if (ret) + dev_err(&rpdev->dev, "rpmsg_send failed: %d\n", ret); + } else { + schedule_delayed_work(&idata->send_msg_work, msecs_to_jiffies(50)); + } } static int rpmsg_sample_probe(struct rpmsg_device *rpdev) From patchwork Thu Jul 2 08:21:38 2020 X-Patchwork-Submitter: Kishon Vijay Abraham I X-Patchwork-Id: 192196 Delivered-To: patch@linaro.org
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen , Bjorn Andersson , Jon Mason , Dave Jiang , Allen Hubbe , Lorenzo Pieralisi , Bjorn Helgaas , "Michael S. Tsirkin" , Jason Wang , Paolo Bonzini , Stefan Hajnoczi , Stefano Garzarella
Subject: [RFC PATCH 17/22] rpmsg.txt: Add Documentation to configure rpmsg using configfs
Date: Thu, 2 Jul 2020 13:51:38 +0530
Message-ID: <20200702082143.25259-18-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>
X-Mailing-List: netdev@vger.kernel.org

Add documentation on how an rpmsg device can be created using configfs,
as required by vhost_rpmsg_bus.c.

Signed-off-by: Kishon Vijay Abraham I
---
 Documentation/rpmsg.txt | 56 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

-- 
2.17.1

diff --git a/Documentation/rpmsg.txt b/Documentation/rpmsg.txt
index 24b7a9e1a5f9..0e0a32b2cb66 100644
--- a/Documentation/rpmsg.txt
+++ b/Documentation/rpmsg.txt
@@ -339,3 +339,59 @@ by the bus, and can then start sending messages to the remote service.
 
 The plan is also to add static creation of rpmsg channels via the virtio
 config space, but it's not implemented yet.
+
+Configuring rpmsg using configfs
+================================
+
+Usually an rpmsg_device is created when the virtproc driver
+(virtio_rpmsg_bus.c) receives a name service notification from the remote
+core. However, there are also cases where the user must be able to create
+the rpmsg_device (as with vhost_rpmsg_bus.c, where vhost_rpmsg_bus itself
+is responsible for sending the name service notification). For such cases,
+configfs lets the user bind an rpmsg_client_driver to a virtproc device in
+order to create an rpmsg_device.
+
+Two configfs directories are added for configuring rpmsg
+::
+
+  # ls /sys/kernel/config/rpmsg/
+  channel  virtproc
+
+channel: Whenever a new rpmsg_driver is registered with the rpmsg core, a
+new sub-directory is created for each entry provided in the
+rpmsg_device_id table of the rpmsg_driver.
+
+For instance, when rpmsg_sample_client is installed, it creates the
+following entry in the mounted configfs directory
+::
+
+  # ls /sys/kernel/config/rpmsg/channel/
+  rpmsg-client-sample
+
+virtproc: A virtproc device can choose to add an entry in this directory.
+A virtproc device adds an entry if it has to allow the user to control the
+creation of rpmsg devices (e.g. vhost_rpmsg_bus.c)
+::
+
+  # ls /sys/kernel/config/rpmsg/virtproc/
+  vhost0
+
+The first step in letting the user create an rpmsg device is to create a
+sub-directory under rpmsg-client-sample. A separate sub-directory has to
+be created for each rpmsg_device the user would like to create.
+::
+
+  # mkdir /sys/kernel/config/rpmsg/channel/rpmsg-client-sample/c1
+
+The next step is to link the created sub-directory with the virtproc
+device to create the rpmsg device.
+::
+
+  # ln -s /sys/kernel/config/rpmsg/channel/rpmsg-client-sample/c1 \
+        /sys/kernel/config/rpmsg/virtproc/vhost0
+
+This creates the rpmsg_device. However, the driver will not register the
+rpmsg device until it receives VIRTIO_CONFIG_S_DRIVER_OK (in the case of
+vhost_rpmsg_bus.c), as it can access virtio buffers only after
+VIRTIO_CONFIG_S_DRIVER_OK is set.
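The mkdir/ln -s sequence described above can be sketched as a single script. This is a dry run against a scratch directory that mimics the configfs layout: the real paths live under /sys/kernel/config and require the rpmsg-client-sample and vhost_rpmsg_bus drivers to be loaded, so the scratch root here is purely an illustration of the mechanics.

```shell
#!/bin/sh
# Dry-run sketch of the configfs flow documented above. A scratch directory
# stands in for /sys/kernel/config/rpmsg; on real hardware the directories
# under channel/ and virtproc/ are created by the rpmsg core and
# vhost_rpmsg_bus when the respective drivers load.
set -e
CFG="$(mktemp -d)/rpmsg"

# What the rpmsg core / vhost_rpmsg_bus would have created on driver load.
mkdir -p "$CFG/channel/rpmsg-client-sample" "$CFG/virtproc/vhost0"

# Step 1: one sub-directory per rpmsg_device the user wants to create.
mkdir "$CFG/channel/rpmsg-client-sample/c1"

# Step 2: link the channel sub-directory to the virtproc device.
ln -s "$CFG/channel/rpmsg-client-sample/c1" "$CFG/virtproc/vhost0/c1"

ls "$CFG/virtproc/vhost0"
```

On a real target the symlink is what triggers rpmsg_device creation; as noted above, registration still waits until VIRTIO_CONFIG_S_DRIVER_OK is set.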
From patchwork Thu Jul 2 08:21:39 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192197
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen , Bjorn Andersson , Jon Mason , Dave Jiang , Allen Hubbe , Lorenzo Pieralisi , Bjorn Helgaas , "Michael S. Tsirkin" , Jason Wang , Paolo Bonzini , Stefan Hajnoczi , Stefano Garzarella
Subject: [RFC PATCH 18/22] virtio_pci: Add VIRTIO driver for VHOST on Configurable PCIe Endpoint device
Date: Thu, 2 Jul 2020 13:51:39 +0530
Message-ID: <20200702082143.25259-19-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>
X-Mailing-List: netdev@vger.kernel.org

Add a VIRTIO driver to support Linux VHOST on a Configurable PCIe
Endpoint device in the backend.

Signed-off-by: Kishon Vijay Abraham I
---
 drivers/virtio/Kconfig          |   9 +
 drivers/virtio/Makefile         |   1 +
 drivers/virtio/virtio_pci_epf.c | 670 ++++++++++++++++++++++++++++++++
 include/linux/virtio.h          |   3 +
 4 files changed, 683 insertions(+)
 create mode 100644 drivers/virtio/virtio_pci_epf.c

-- 
2.17.1

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index 69a32dfc318a..4b251f8482ae 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -43,6 +43,15 @@ config VIRTIO_PCI_LEGACY
 
 	  If unsure, say Y.
 
+config VIRTIO_PCI_EPF
+	bool "Support for virtio device on configurable PCIe Endpoint"
+	depends on VIRTIO_PCI
+	help
+	  Select this configuration option to enable the VIRTIO driver
+	  for a configurable PCIe Endpoint running Linux in the backend.
+
+	  If in doubt, say "N" to disable the VIRTIO PCI EPF driver.
+
 config VIRTIO_VDPA
 	tristate "vDPA driver for virtio devices"
 	depends on VDPA
diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
index 29a1386ecc03..08a158365d5e 100644
--- a/drivers/virtio/Makefile
+++ b/drivers/virtio/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o
 obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o
 virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
 virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
+virtio_pci-$(CONFIG_VIRTIO_PCI_EPF) += virtio_pci_epf.o
 obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
 obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
 obj-$(CONFIG_VIRTIO_VDPA) += virtio_vdpa.o
diff --git a/drivers/virtio/virtio_pci_epf.c b/drivers/virtio/virtio_pci_epf.c
new file mode 100644
index 000000000000..20e53869f179
--- /dev/null
+++ b/drivers/virtio/virtio_pci_epf.c
@@ -0,0 +1,670 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Virtio PCI driver - Configurable PCIe Endpoint device support
+ *
+ * The Configurable PCIe Endpoint device is present on another system running
+ * Linux and configured using drivers/pci/endpoint/functions/pci-epf-vhost.c
+ *
+ * Copyright (C) 2020 Texas Instruments
+ * Author: Kishon Vijay Abraham I
+ */
+
+#include
+#include "virtio_pci_common.h"
+
+#define DRV_MODULE_NAME		"virtio-pci-epf"
+
+#define HOST_FEATURES_LOWER	0x00
+#define HOST_FEATURES_UPPER	0x04
+#define GUEST_FEATURES_LOWER	0x08
+#define GUEST_FEATURES_UPPER	0x0C
+#define MSIX_CONFIG		0x10
+#define NUM_QUEUES		0x12
+#define DEVICE_STATUS		0x14
+#define CONFIG_GENERATION	0x15
+#define ISR			0x16
+#define HOST_CMD		0x1A
+enum host_cmd {
+	HOST_CMD_NONE,
+	HOST_CMD_SET_STATUS,
+	HOST_CMD_FINALIZE_FEATURES,
+	HOST_CMD_RESET,
+};
+
+#define HOST_CMD_STATUS		0x1B
+enum host_cmd_status {
+	HOST_CMD_STATUS_NONE,
+	HOST_CMD_STATUS_OKAY,
+	HOST_CMD_STATUS_ERROR,
+};
+
+#define QUEUE_BASE		0x1C
+#define CMD			0x00
+enum queue_cmd {
+	QUEUE_CMD_NONE,
+	QUEUE_CMD_ACTIVATE,
+	QUEUE_CMD_DEACTIVATE,
+	QUEUE_CMD_NOTIFY,
+};
+
+#define CMD_STATUS		0x01
+enum queue_cmd_status {
+	QUEUE_CMD_STATUS_NONE,
+	QUEUE_CMD_STATUS_OKAY,
+	QUEUE_CMD_STATUS_ERROR,
+};
+
+#define STATUS			0x2
+#define STATUS_ACTIVE		BIT(0)
+
+#define NUM_BUFFERS		0x04
+#define MSIX_VECTOR		0x06
+#define ADDR_LOWER		0x08
+#define ADDR_UPPER		0x0C
+#define QUEUE_CMD(n)		(QUEUE_BASE + CMD + (n) * 16)
+#define QUEUE_CMD_STATUS(n)	(QUEUE_BASE + CMD_STATUS + (n) * 16)
+#define QUEUE_STATUS(n)		(QUEUE_BASE + STATUS + (n) * 16)
+#define QUEUE_NUM_BUFFERS(n)	(QUEUE_BASE + NUM_BUFFERS + (n) * 16)
+#define QUEUE_MSIX_VECTOR(n)	(QUEUE_BASE + MSIX_VECTOR + (n) * 16)
+#define QUEUE_ADDR_LOWER(n)	(QUEUE_BASE + ADDR_LOWER + (n) * 16)
+#define QUEUE_ADDR_UPPER(n)	(QUEUE_BASE + ADDR_UPPER + (n) * 16)
+
+#define DEVICE_CFG_SPACE	0x110
+
+#define COMMAND_TIMEOUT		1000 /* 1 Sec */
+
+struct virtio_pci_epf {
+	/* mutex to protect sending commands to EPF vhost */
+	struct mutex lock;
+	struct virtio_pci_device vp_dev;
+};
+
+#define to_virtio_pci_epf(dev) container_of((dev), struct virtio_pci_epf, vp_dev)
+
+/* virtio_pci_epf_send_command - Send commands to the remote EPF device running
+ *   vhost driver
+ * @vp_dev: Virtio PCIe device that communicates with the endpoint device
+ * @command: The command that has to be sent to the remote endpoint device
+ *
+ * Helper function to send commands to the remote endpoint function device
+ * running vhost driver.
+ */
+static int virtio_pci_epf_send_command(struct virtio_pci_device *vp_dev,
+				       u32 command)
+{
+	struct virtio_pci_epf *pci_epf;
+	void __iomem *ioaddr;
+	ktime_t timeout;
+	bool timedout;
+	int ret = 0;
+	u8 status;
+
+	pci_epf = to_virtio_pci_epf(vp_dev);
+	ioaddr = vp_dev->ioaddr;
+
+	mutex_lock(&pci_epf->lock);
+	writeb(command, ioaddr + HOST_CMD);
+	timeout = ktime_add_ms(ktime_get(), COMMAND_TIMEOUT);
+	while (1) {
+		timedout = ktime_after(ktime_get(), timeout);
+		status = readb(ioaddr + HOST_CMD_STATUS);
+
+		if (status == HOST_CMD_STATUS_ERROR) {
+			ret = -EINVAL;
+			break;
+		}
+
+		if (status == HOST_CMD_STATUS_OKAY)
+			break;
+
+		if (WARN_ON(timedout)) {
+			ret = -ETIMEDOUT;
+			break;
+		}
+
+		usleep_range(5, 10);
+	}
+
+	writeb(HOST_CMD_STATUS_NONE, ioaddr + HOST_CMD_STATUS);
+	mutex_unlock(&pci_epf->lock);
+
+	return ret;
+}
+
+/* virtio_pci_epf_send_queue_command - Send commands to the remote EPF device
+ *   for configuring virtqueue
+ * @vp_dev: Virtio PCIe device that communicates with the endpoint device
+ * @vq: The virtqueue that has to be configured on the remote endpoint device
+ * @command: The command that has to be sent to the remote endpoint device
+ *
+ * Helper function to send commands to the remote endpoint function device for
+ * configuring the virtqueue.
+ */
+static int virtio_pci_epf_send_queue_command(struct virtio_pci_device *vp_dev,
+					     struct virtqueue *vq, u8 command)
+{
+	void __iomem *ioaddr;
+	ktime_t timeout;
+	bool timedout;
+	int ret = 0;
+	u8 status;
+
+	ioaddr = vp_dev->ioaddr;
+
+	mutex_lock(&vq->lock);
+	writeb(command, ioaddr + QUEUE_CMD(vq->index));
+	timeout = ktime_add_ms(ktime_get(), COMMAND_TIMEOUT);
+	while (1) {
+		timedout = ktime_after(ktime_get(), timeout);
+		status = readb(ioaddr + QUEUE_CMD_STATUS(vq->index));
+
+		if (status == QUEUE_CMD_STATUS_ERROR) {
+			ret = -EINVAL;
+			break;
+		}
+
+		if (status == QUEUE_CMD_STATUS_OKAY)
+			break;
+
+		if (WARN_ON(timedout)) {
+			ret = -ETIMEDOUT;
+			break;
+		}
+
+		usleep_range(5, 10);
+	}
+
+	writeb(QUEUE_CMD_STATUS_NONE, ioaddr + QUEUE_CMD_STATUS(vq->index));
+	mutex_unlock(&vq->lock);
+
+	return ret;
+}
+
+/* virtio_pci_epf_get_features - virtio_config_ops to get EPF vhost device
+ *   features
+ * @vdev: Virtio device that communicates with the remote EPF vhost device
+ *
+ * virtio_config_ops to get EPF vhost device features. The device features
+ * are accessed using BAR mapped registers.
+ */
+static u64 virtio_pci_epf_get_features(struct virtio_device *vdev)
+{
+	struct virtio_pci_device *vp_dev;
+	void __iomem *ioaddr;
+	u64 features;
+
+	vp_dev = to_vp_device(vdev);
+	ioaddr = vp_dev->ioaddr;
+
+	features = readl(ioaddr + HOST_FEATURES_UPPER);
+	features <<= 32;
+	features |= readl(ioaddr + HOST_FEATURES_LOWER);
+
+	return features;
+}
+
+/* virtio_pci_epf_finalize_features - virtio_config_ops to finalize features
+ *   with remote EPF vhost device
+ * @vdev: Virtio device that communicates with the remote vhost device
+ *
+ * Indicate the negotiated features to the remote EPF vhost device by sending
+ * HOST_CMD_FINALIZE_FEATURES command.
+ */
+static int virtio_pci_epf_finalize_features(struct virtio_device *vdev)
+{
+	struct virtio_pci_device *vp_dev;
+	void __iomem *ioaddr;
+	struct device *dev;
+	int ret;
+
+	vp_dev = to_vp_device(vdev);
+	dev = &vp_dev->pci_dev->dev;
+	ioaddr = vp_dev->ioaddr;
+
+	/* Give virtio_ring a chance to accept features. */
+	vring_transport_features(vdev);
+
+	writel(lower_32_bits(vdev->features), ioaddr + GUEST_FEATURES_LOWER);
+	writel(upper_32_bits(vdev->features), ioaddr + GUEST_FEATURES_UPPER);
+
+	ret = virtio_pci_epf_send_command(vp_dev, HOST_CMD_FINALIZE_FEATURES);
+	if (ret) {
+		dev_err(dev, "Failed to finalize features\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+/* virtio_pci_epf_get - Copy the device configuration space data from
+ *   EPF device to buffer provided by virtio driver
+ * @vdev: Virtio device that communicates with the remote vhost device
+ * @offset: Offset in the device configuration space
+ * @buf: Buffer address from virtio driver where configuration space
+ *   data has to be copied
+ * @len: Length of the data from device configuration space to be copied
+ *
+ * Copy the device configuration space data to buffer provided by virtio
+ * driver.
+ */
+static void virtio_pci_epf_get(struct virtio_device *vdev, unsigned int offset,
+			       void *buf, unsigned int len)
+{
+	struct virtio_pci_device *vp_dev;
+	void __iomem *ioaddr;
+
+	vp_dev = to_vp_device(vdev);
+	ioaddr = vp_dev->ioaddr;
+
+	memcpy_fromio(buf, ioaddr + DEVICE_CFG_SPACE + offset, len);
+}
+
+/* virtio_pci_epf_set - Copy the device configuration space data from buffer
+ *   provided by virtio driver to EPF device
+ * @vdev: Virtio device that communicates with the remote vhost device
+ * @offset: Offset in the device configuration space
+ * @buf: Buffer address provided by virtio driver which has the configuration
+ *   space data to be copied
+ * @len: Length of the data from device configuration space to be copied
+ *
+ * Copy the device configuration space data from buffer provided by virtio
+ * driver to the EPF device.
+ */
+static void virtio_pci_epf_set(struct virtio_device *vdev, unsigned int offset,
+			       const void *buf, unsigned int len)
+{
+	struct virtio_pci_device *vp_dev;
+	void __iomem *ioaddr;
+
+	vp_dev = to_vp_device(vdev);
+	ioaddr = vp_dev->ioaddr;
+
+	memcpy_toio(ioaddr + DEVICE_CFG_SPACE + offset, buf, len);
+}
+
+/* virtio_pci_epf_get_status - EPF virtio_config_ops to get device status
+ * @vdev: Virtio device that communicates with the remote vhost device
+ *
+ * EPF virtio_config_ops to get device status. The remote EPF vhost device
+ * populates the vhost device status in BAR mapped region.
+ */
+static u8 virtio_pci_epf_get_status(struct virtio_device *vdev)
+{
+	struct virtio_pci_device *vp_dev;
+	void __iomem *ioaddr;
+
+	vp_dev = to_vp_device(vdev);
+	ioaddr = vp_dev->ioaddr;
+
+	return readb(ioaddr + DEVICE_STATUS);
+}
+
+/* virtio_pci_epf_set_status - EPF virtio_config_ops to set device status
+ * @vdev: Virtio device that communicates with the remote vhost device
+ *
+ * EPF virtio_config_ops to set device status.
This function updates the
+ * status in scratchpad register and sends a notification to the vhost
+ * device using HOST_CMD_SET_STATUS command.
+ */
+static void virtio_pci_epf_set_status(struct virtio_device *vdev, u8 status)
+{
+	struct virtio_pci_device *vp_dev;
+	void __iomem *ioaddr;
+	struct device *dev;
+	int ret;
+
+	vp_dev = to_vp_device(vdev);
+	dev = &vp_dev->pci_dev->dev;
+	ioaddr = vp_dev->ioaddr;
+
+	/* We should never be setting status to 0. */
+	if (WARN_ON(!status))
+		return;
+
+	writeb(status, ioaddr + DEVICE_STATUS);
+
+	ret = virtio_pci_epf_send_command(vp_dev, HOST_CMD_SET_STATUS);
+	if (ret)
+		dev_err(dev, "Failed to set device status\n");
+}
+
+/* virtio_pci_epf_reset - EPF virtio_config_ops to reset the device
+ * @vdev: Virtio device that communicates with the remote vhost device
+ *
+ * EPF virtio_config_ops to reset the device. This sends HOST_CMD_RESET
+ * command to reset the device.
+ */
+static void virtio_pci_epf_reset(struct virtio_device *vdev)
+{
+	struct virtio_pci_device *vp_dev;
+	void __iomem *ioaddr;
+	struct device *dev;
+	int ret;
+
+	vp_dev = to_vp_device(vdev);
+	dev = &vp_dev->pci_dev->dev;
+	ioaddr = vp_dev->ioaddr;
+
+	ret = virtio_pci_epf_send_command(vp_dev, HOST_CMD_RESET);
+	if (ret)
+		dev_err(dev, "Failed to reset device\n");
+}
+
+/* virtio_pci_epf_config_vector - virtio_pci_device ops to set config vector
+ * @vp_dev: Virtio PCI device managed by virtio_pci_common.c
+ *
+ * virtio_pci_device ops to set config vector. This writes the config MSI-X
+ * vector to the BAR mapped region.
+ */
+static u16 virtio_pci_epf_config_vector(struct virtio_pci_device *vp_dev,
+					u16 vector)
+{
+	void __iomem *ioaddr = vp_dev->ioaddr;
+
+	writew(vector, ioaddr + MSIX_CONFIG);
+
+	return readw(ioaddr + MSIX_CONFIG);
+}
+
+/* virtio_pci_epf_notify - Send notification to the remote vhost virtqueue
+ * @vq: The local virtio virtqueue corresponding to the remote vhost virtqueue
+ *   where the notification has to be sent
+ *
+ * Send notification to the remote vhost virtqueue by using QUEUE_CMD_NOTIFY
+ * command.
+ */
+static bool virtio_pci_epf_notify(struct virtqueue *vq)
+{
+	struct virtio_pci_device *vp_dev;
+	struct device *dev;
+	int ret;
+
+	vp_dev = vq->priv;
+	dev = &vp_dev->pci_dev->dev;
+
+	ret = virtio_pci_epf_send_queue_command(vp_dev, vq, QUEUE_CMD_NOTIFY);
+	if (ret) {
+		dev_err(dev, "Notifying virtqueue: %d Failed\n", vq->index);
+		return false;
+	}
+
+	return true;
+}
+
+/* virtio_pci_epf_setup_vq - Configure virtqueue
+ * @vp_dev: Virtio PCI device managed by virtio_pci_common.c
+ * @info: Wrapper to virtqueue to maintain list of all queues
+ * @callback: Callback function that has to be associated with virtqueue
+ * @vq: The local virtio virtqueue corresponding to the remote vhost virtqueue
+ *   where the notification has to be sent
+ *
+ * Configure virtqueue with the number of buffers provided by EPF vhost device
+ * and associate a callback function for each of these virtqueues.
+ */
+static struct virtqueue *
+virtio_pci_epf_setup_vq(struct virtio_pci_device *vp_dev,
+			struct virtio_pci_vq_info *info, unsigned int index,
+			void (*callback)(struct virtqueue *vq),
+			const char *name, bool ctx, u16 msix_vec)
+{
+	u16 queue_num_buffers;
+	void __iomem *ioaddr;
+	struct virtqueue *vq;
+	struct device *dev;
+	dma_addr_t vq_addr;
+	u16 status;
+	int err;
+
+	dev = &vp_dev->pci_dev->dev;
+	ioaddr = vp_dev->ioaddr;
+
+	status = readw(ioaddr + QUEUE_STATUS(index));
+	if (status & STATUS_ACTIVE) {
+		dev_err(dev, "Virtqueue %d is already active\n", index);
+		return ERR_PTR(-ENOENT);
+	}
+
+	queue_num_buffers = readw(ioaddr + QUEUE_NUM_BUFFERS(index));
+	if (!queue_num_buffers) {
+		dev_err(dev, "Virtqueue %d is not available\n", index);
+		return ERR_PTR(-ENOENT);
+	}
+
+	info->msix_vector = msix_vec;
+
+	vq = vring_create_virtqueue(index, queue_num_buffers,
+				    VIRTIO_PCI_VRING_ALIGN, &vp_dev->vdev, true,
+				    false, ctx, virtio_pci_epf_notify, callback,
+				    name);
+	if (!vq) {
+		dev_err(dev, "Failed to create Virtqueue %d\n", index);
+		return ERR_PTR(-ENOMEM);
+	}
+	mutex_init(&vq->lock);
+
+	vq_addr = virtqueue_get_desc_addr(vq);
+	writel(lower_32_bits(vq_addr), ioaddr + QUEUE_ADDR_LOWER(index));
+	writel(upper_32_bits(vq_addr), ioaddr + QUEUE_ADDR_UPPER(index));
+
+	vq->priv = vp_dev;
+	writew(QUEUE_CMD_ACTIVATE, ioaddr + QUEUE_CMD(index));
+
+	err = virtio_pci_epf_send_queue_command(vp_dev, vq, QUEUE_CMD_ACTIVATE);
+	if (err) {
+		dev_err(dev, "Failed to activate Virtqueue %d\n", index);
+		goto out_del_vq;
+	}
+
+	if (msix_vec != VIRTIO_MSI_NO_VECTOR)
+		writew(msix_vec, ioaddr + QUEUE_MSIX_VECTOR(index));
+
+	return vq;
+
+out_del_vq:
+	vring_del_virtqueue(vq);
+
+	return ERR_PTR(err);
+}
+
+/* virtio_pci_epf_del_vq - Free memory allocated for virtio virtqueues
+ * @info: Wrapper to virtqueue to maintain list of all queues
+ *
+ * Free memory allocated for a virtqueue represented by @info
+ */
+static void virtio_pci_epf_del_vq(struct virtio_pci_vq_info *info)
+{
+	struct virtio_pci_device *vp_dev;
+	struct virtqueue *vq;
+	void __iomem *ioaddr;
+	unsigned int index;
+
+	vq = info->vq;
+	vp_dev = to_vp_device(vq->vdev);
+	ioaddr = vp_dev->ioaddr;
+	index = vq->index;
+
+	if (vp_dev->msix_enabled)
+		writew(VIRTIO_MSI_NO_VECTOR, ioaddr + QUEUE_MSIX_VECTOR(index));
+
+	writew(QUEUE_CMD_DEACTIVATE, ioaddr + QUEUE_CMD(index));
+	vring_del_virtqueue(vq);
+}
+
+static const struct virtio_config_ops virtio_pci_epf_config_ops = {
+	.get		= virtio_pci_epf_get,
+	.set		= virtio_pci_epf_set,
+	.get_status	= virtio_pci_epf_get_status,
+	.set_status	= virtio_pci_epf_set_status,
+	.reset		= virtio_pci_epf_reset,
+	.find_vqs	= vp_find_vqs,
+	.del_vqs	= vp_del_vqs,
+	.get_features	= virtio_pci_epf_get_features,
+	.finalize_features = virtio_pci_epf_finalize_features,
+	.bus_name	= vp_bus_name,
+	.set_vq_affinity = vp_set_vq_affinity,
+	.get_vq_affinity = vp_get_vq_affinity,
+};
+
+/* virtio_pci_epf_release_dev - Callback function to free device
+ * @dev: Device in virtio_device that has to be freed
+ *
+ * Callback function from device core invoked to free the device after
+ * all references have been removed. This frees the allocated memory for
+ * struct virtio_pci_epf.
+ */
+static void virtio_pci_epf_release_dev(struct device *dev)
+{
+	struct virtio_pci_device *vp_dev;
+	struct virtio_pci_epf *pci_epf;
+	struct virtio_device *vdev;
+
+	vdev = dev_to_virtio(dev);
+	vp_dev = to_vp_device(vdev);
+	pci_epf = to_virtio_pci_epf(vp_dev);
+
+	kfree(pci_epf);
+}
+
+/* virtio_pci_epf_probe - Initialize struct virtio_pci_epf when a new PCIe
+ *   device is created
+ * @pdev: The pci_dev that is created by the PCIe core during enumeration
+ * @id: pci_device_id of the @pdev
+ *
+ * Probe function to initialize struct virtio_pci_epf when a new PCIe device is
+ * created.
+ */
+static int virtio_pci_epf_probe(struct pci_dev *pdev,
+				const struct pci_device_id *id)
+{
+	struct virtio_pci_device *vp_dev, *reg_dev = NULL;
+	struct virtio_pci_epf *pci_epf;
+	struct device *dev;
+	int err;
+
+	if (pci_is_bridge(pdev))
+		return -ENODEV;
+
+	pci_epf = kzalloc(sizeof(*pci_epf), GFP_KERNEL);
+	if (!pci_epf)
+		return -ENOMEM;
+
+	dev = &pdev->dev;
+	vp_dev = &pci_epf->vp_dev;
+	vp_dev->vdev.dev.parent = dev;
+	vp_dev->vdev.dev.release = virtio_pci_epf_release_dev;
+	vp_dev->pci_dev = pdev;
+	INIT_LIST_HEAD(&vp_dev->virtqueues);
+	spin_lock_init(&vp_dev->lock);
+	mutex_init(&pci_epf->lock);
+
+	err = pci_enable_device(pdev);
+	if (err) {
+		dev_err(dev, "Cannot enable PCI device\n");
+		goto err_enable_device;
+	}
+
+	err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+	if (err) {
+		dev_err(dev, "Cannot set DMA mask\n");
+		goto err_dma_set_mask;
+	}
+
+	err = pci_request_regions(pdev, DRV_MODULE_NAME);
+	if (err) {
+		dev_err(dev, "Cannot obtain PCI resources\n");
+		goto err_dma_set_mask;
+	}
+
+	pci_set_master(pdev);
+
+	vp_dev->ioaddr = pci_ioremap_bar(pdev, 0);
+	if (!vp_dev->ioaddr) {
+		dev_err(dev, "Failed to map BAR0\n");
+		err = -ENOMEM;
+		goto err_ioremap;
+	}
+
+	pci_set_drvdata(pdev, vp_dev);
+	vp_dev->isr = vp_dev->ioaddr + ISR;
+
+	/*
+	 * we use the subsystem vendor/device id as the virtio vendor/device
+	 * id.
+	 * this allows us to use the same PCI vendor/device id for all
+	 * virtio devices and to identify the particular virtio driver by
+	 * the subsystem ids
+	 */
+	vp_dev->vdev.id.vendor = pdev->subsystem_vendor;
+	vp_dev->vdev.id.device = pdev->subsystem_device;
+
+	vp_dev->vdev.config = &virtio_pci_epf_config_ops;
+
+	vp_dev->config_vector = virtio_pci_epf_config_vector;
+	vp_dev->setup_vq = virtio_pci_epf_setup_vq;
+	vp_dev->del_vq = virtio_pci_epf_del_vq;
+
+	err = register_virtio_device(&vp_dev->vdev);
+	reg_dev = vp_dev;
+	if (err) {
+		dev_err(dev, "Failed to register VIRTIO device\n");
+		goto err_register_virtio;
+	}
+
+	return 0;
+
+err_register_virtio:
+	pci_iounmap(pdev, vp_dev->ioaddr);
+
+err_ioremap:
+	pci_release_regions(pdev);
+
+err_dma_set_mask:
+	pci_disable_device(pdev);
+
+err_enable_device:
+	if (reg_dev)
+		put_device(&vp_dev->vdev.dev);
+	else
+		kfree(pci_epf);
+
+	return err;
+}
+
+/* virtio_pci_epf_remove - Free the initializations performed by virtio_pci_epf_probe()
+ * @pdev: The pci_dev that is created by the PCIe core during enumeration
+ *
+ * Free the initializations performed by virtio_pci_epf_probe().
+ */
+void virtio_pci_epf_remove(struct pci_dev *pdev)
+{
+	struct virtio_pci_device *vp_dev;
+
+	vp_dev = pci_get_drvdata(pdev);
+
+	unregister_virtio_device(&vp_dev->vdev);
+	pci_iounmap(pdev, vp_dev->ioaddr);
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+}
+
+static const struct pci_device_id virtio_pci_epf_table[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x),
+	},
+	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x),
+	},
+	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
+	},
+	{ }
+};
+MODULE_DEVICE_TABLE(pci, virtio_pci_epf_table);
+
+static struct pci_driver virtio_pci_epf_driver = {
+	.name		= DRV_MODULE_NAME,
+	.id_table	= virtio_pci_epf_table,
+	.probe		= virtio_pci_epf_probe,
+	.remove		= virtio_pci_epf_remove,
+	.sriov_configure = pci_sriov_configure_simple,
+};
+module_pci_driver(virtio_pci_epf_driver);
+
+MODULE_DESCRIPTION("VIRTIO PCI EPF DRIVER");
+MODULE_AUTHOR("Kishon Vijay Abraham I ");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index a493eac08393..0eb51b31545d 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -19,6 +19,7 @@
  * @priv: a pointer for the virtqueue implementation to use.
  * @index: the zero-based ordinal number for this queue.
  * @num_free: number of elements we expect to be able to fit.
+ * @lock: mutex to protect concurrent access to the same queue
  *
  * A note on @num_free: with indirect buffers, each buffer needs one
  * element in the queue, otherwise a buffer will need one element per
@@ -31,6 +32,8 @@ struct virtqueue {
 	struct virtio_device *vdev;
 	unsigned int index;
 	unsigned int num_free;
+	/* mutex to protect concurrent access to the queue while configuring */
+	struct mutex lock;
 	void *priv;
 };

From patchwork Thu Jul 2 08:21:40 2020
X-Patchwork-Submitter: Kishon Vijay Abraham I
X-Patchwork-Id: 192201
l4+Ll8lpsAmdG6F0Z/u2Nxv5J7muX+bklaVtHIVKBZ/B/d5TCRL9p+eXlSnHejqEAR30 +fJA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@ti.com header.s=ti-com-17Q1 header.b=CCPOpSxe; spf=pass (google.com: domain of netdev-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=netdev-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=NONE dis=NONE) header.from=ti.com Return-Path: Received: from vger.kernel.org (vger.kernel.org. [23.128.96.18]) by mx.google.com with ESMTP id q19si4453614eji.315.2020.07.02.01.24.18; Thu, 02 Jul 2020 01:24:18 -0700 (PDT) Received-SPF: pass (google.com: domain of netdev-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) client-ip=23.128.96.18; Authentication-Results: mx.google.com; dkim=pass header.i=@ti.com header.s=ti-com-17Q1 header.b=CCPOpSxe; spf=pass (google.com: domain of netdev-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=netdev-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=NONE dis=NONE) header.from=ti.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728823AbgGBIYO (ORCPT + 9 others); Thu, 2 Jul 2020 04:24:14 -0400 Received: from lelv0142.ext.ti.com ([198.47.23.249]:59786 "EHLO lelv0142.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728802AbgGBIYJ (ORCPT ); Thu, 2 Jul 2020 04:24:09 -0400 Received: from fllv0034.itg.ti.com ([10.64.40.246]) by lelv0142.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0628NcaD082119; Thu, 2 Jul 2020 03:23:38 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1593678218; bh=zm7vsB22cjejSvvY0tRsRTmZ5Goqgh+xoXKUJZr3zYs=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=CCPOpSxeBAuME1xDmOTn/JsRonmd8GITXRmGxtdVhklINpBtKg/tofEOdGb8PlIu5 l6v+0G7IlmXLuoB6lytDGCZQ/pSLvNgrRf6NPtcQq6V0+r6yOaWxhk9Kg9a1Hw4fuq 6imQ0jP4hR/OYKA/bSfI0wrjSrY8XrE1eNRCcN5Y= Received: from DLEE111.ent.ti.com (dlee111.ent.ti.com [157.170.170.22]) by 
fllv0034.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 0628Nc1r032193 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Thu, 2 Jul 2020 03:23:38 -0500 Received: from DLEE115.ent.ti.com (157.170.170.26) by DLEE111.ent.ti.com (157.170.170.22) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Thu, 2 Jul 2020 03:23:38 -0500 Received: from lelv0327.itg.ti.com (10.180.67.183) by DLEE115.ent.ti.com (157.170.170.26) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Thu, 2 Jul 2020 03:23:37 -0500 Received: from a0393678ub.india.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0628LiYV006145; Thu, 2 Jul 2020 03:23:32 -0500 From: Kishon Vijay Abraham I To: Ohad Ben-Cohen , Bjorn Andersson , Jon Mason , Dave Jiang , Allen Hubbe , Lorenzo Pieralisi , Bjorn Helgaas , "Michael S. Tsirkin" , Jason Wang , Paolo Bonzini , Stefan Hajnoczi , Stefano Garzarella CC: , , , , , , , Subject: [RFC PATCH 19/22] PCI: endpoint: Add EP function driver to provide VHOST interface Date: Thu, 2 Jul 2020 13:51:40 +0530 Message-ID: <20200702082143.25259-20-kishon@ti.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200702082143.25259-1-kishon@ti.com> References: <20200702082143.25259-1-kishon@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Add a new endpoint function driver to register VHOST device and provide interface for the VHOST driver to access virtqueues created by the remote host (using VIRTIO). 
Signed-off-by: Kishon Vijay Abraham I --- drivers/pci/endpoint/functions/Kconfig | 11 + drivers/pci/endpoint/functions/Makefile | 1 + .../pci/endpoint/functions/pci-epf-vhost.c | 1144 +++++++++++++++++ drivers/vhost/vhost_cfs.c | 13 - include/linux/vhost.h | 14 + 5 files changed, 1170 insertions(+), 13 deletions(-) create mode 100644 drivers/pci/endpoint/functions/pci-epf-vhost.c -- 2.17.1 diff --git a/drivers/pci/endpoint/functions/Kconfig b/drivers/pci/endpoint/functions/Kconfig index 55ac7bb2d469..21830576e1f4 100644 --- a/drivers/pci/endpoint/functions/Kconfig +++ b/drivers/pci/endpoint/functions/Kconfig @@ -24,3 +24,14 @@ config PCI_EPF_NTB device tree. If in doubt, say "N" to disable Endpoint NTB driver. + +config PCI_EPF_VHOST + tristate "PCI Endpoint VHOST driver" + depends on PCI_ENDPOINT + help + Select this configuration option to enable the VHOST driver + for PCI Endpoint. EPF VHOST driver implements VIRTIO backend + for EPF and uses the VHOST framework to bind any VHOST driver + to the VHOST device created by this driver. + + If in doubt, say "N" to disable Endpoint VHOST driver. 
diff --git a/drivers/pci/endpoint/functions/Makefile b/drivers/pci/endpoint/functions/Makefile index 96ab932a537a..39d4f9daf63a 100644 --- a/drivers/pci/endpoint/functions/Makefile +++ b/drivers/pci/endpoint/functions/Makefile @@ -5,3 +5,4 @@ obj-$(CONFIG_PCI_EPF_TEST) += pci-epf-test.o obj-$(CONFIG_PCI_EPF_NTB) += pci-epf-ntb.o +obj-$(CONFIG_PCI_EPF_VHOST) += pci-epf-vhost.o diff --git a/drivers/pci/endpoint/functions/pci-epf-vhost.c b/drivers/pci/endpoint/functions/pci-epf-vhost.c new file mode 100644 index 000000000000..d090e5e88575 --- /dev/null +++ b/drivers/pci/endpoint/functions/pci-epf-vhost.c @@ -0,0 +1,1144 @@ +// SPDX-License-Identifier: GPL-2.0 +/** + * Endpoint Function Driver to implement VHOST functionality + * + * Copyright (C) 2020 Texas Instruments + * Author: Kishon Vijay Abraham I + */ + +#include +#include +#include +#include +#include +#include + +#include +#include + +#include + +#define MAX_VQS 8 + +#define VHOST_QUEUE_STATUS_ENABLE BIT(0) + +#define VHOST_DEVICE_CONFIG_SIZE 1024 +#define EPF_VHOST_MAX_INTERRUPTS (MAX_VQS + 1) + +static struct workqueue_struct *kpcivhost_workqueue; + +struct epf_vhost_queue { + struct delayed_work cmd_handler; + struct vhost_virtqueue *vq; + struct epf_vhost *vhost; + phys_addr_t phys_addr; + void __iomem *addr; + unsigned int size; +}; + +struct epf_vhost { + const struct pci_epc_features *epc_features; + struct epf_vhost_queue vqueue[MAX_VQS]; + struct delayed_work cmd_handler; + struct delayed_work cfs_work; + struct epf_vhost_reg *reg; + struct config_group group; + size_t msix_table_offset; + struct vhost_dev vdev; + struct pci_epf *epf; + struct vring vring; + int msix_bar; +}; + +static inline struct epf_vhost *to_epf_vhost_from_ci(struct config_item *item) +{ + return container_of(to_config_group(item), struct epf_vhost, group); +} + +#define to_epf_vhost(v) container_of((v), struct epf_vhost, vdev) + +struct epf_vhost_reg_queue { + u8 cmd; + u8 cmd_status; + u16 status; + u16 num_buffers; + u16 
msix_vector; + u64 queue_addr; +} __packed; + +enum queue_cmd { + QUEUE_CMD_NONE, + QUEUE_CMD_ACTIVATE, + QUEUE_CMD_DEACTIVATE, + QUEUE_CMD_NOTIFY, +}; + +enum queue_cmd_status { + QUEUE_CMD_STATUS_NONE, + QUEUE_CMD_STATUS_OKAY, + QUEUE_CMD_STATUS_ERROR, +}; + +struct epf_vhost_reg { + u64 host_features; + u64 guest_features; + u16 msix_config; + u16 num_queues; + u8 device_status; + u8 config_generation; + u32 isr; + u8 cmd; + u8 cmd_status; + struct epf_vhost_reg_queue vq[MAX_VQS]; +} __packed; + +enum host_cmd { + HOST_CMD_NONE, + HOST_CMD_SET_STATUS, + HOST_CMD_FINALIZE_FEATURES, + HOST_CMD_RESET, +}; + +enum host_cmd_status { + HOST_CMD_STATUS_NONE, + HOST_CMD_STATUS_OKAY, + HOST_CMD_STATUS_ERROR, +}; + +static struct pci_epf_header epf_vhost_header = { + .vendorid = PCI_ANY_ID, + .deviceid = PCI_ANY_ID, + .baseclass_code = PCI_CLASS_OTHERS, + .interrupt_pin = PCI_INTERRUPT_INTA, +}; + +/* pci_epf_vhost_cmd_handler - Handle commands from remote EPF virtio driver + * @work: The work_struct holding the pci_epf_vhost_cmd_handler() function that + * is scheduled + * + * Handle commands from the remote EPF virtio driver and sends notification to + * the vhost client driver. The remote EPF virtio driver sends commands when the + * virtio driver status is updated or when the feature negotiation is complete or + * if the virtio driver wants to reset the device. 
+ */ +static void pci_epf_vhost_cmd_handler(struct work_struct *work) +{ + struct epf_vhost_reg *reg; + struct epf_vhost *vhost; + struct vhost_dev *vdev; + struct device *dev; + u8 command; + + vhost = container_of(work, struct epf_vhost, cmd_handler.work); + vdev = &vhost->vdev; + dev = &vhost->epf->dev; + reg = vhost->reg; + + command = reg->cmd; + if (!command) + goto reset_handler; + + reg->cmd = 0; + + switch (command) { + case HOST_CMD_SET_STATUS: + blocking_notifier_call_chain(&vdev->notifier, NOTIFY_SET_STATUS, + NULL); + reg->cmd_status = HOST_CMD_STATUS_OKAY; + break; + case HOST_CMD_FINALIZE_FEATURES: + vdev->features = reg->guest_features; + blocking_notifier_call_chain(&vdev->notifier, + NOTIFY_FINALIZE_FEATURES, 0); + reg->cmd_status = HOST_CMD_STATUS_OKAY; + break; + case HOST_CMD_RESET: + blocking_notifier_call_chain(&vdev->notifier, NOTIFY_RESET, 0); + reg->cmd_status = HOST_CMD_STATUS_OKAY; + break; + default: + dev_err(dev, "UNKNOWN command: %d\n", command); + break; + } + +reset_handler: + queue_delayed_work(kpcivhost_workqueue, &vhost->cmd_handler, + msecs_to_jiffies(1)); +} + +/* pci_epf_vhost_queue_activate - Map virtqueue local address to remote + * virtqueue address provided by EPF virtio + * @vqueue: struct epf_vhost_queue holding the local virtqueue address + * + * In order for the local system to access the remote virtqueue, the address + * reserved in local system should be mapped to the remote virtqueue address. + * Map local virtqueue address to remote virtqueue address here. 
+ */
+static int pci_epf_vhost_queue_activate(struct epf_vhost_queue *vqueue)
+{
+	struct epf_vhost_reg_queue *reg_queue;
+	struct vhost_virtqueue *vq;
+	struct epf_vhost_reg *reg;
+	phys_addr_t vq_phys_addr;
+	struct epf_vhost *vhost;
+	struct pci_epf *epf;
+	struct pci_epc *epc;
+	struct device *dev;
+	u64 vq_remote_addr;
+	size_t vq_size;
+	u8 func_no;
+	int ret;
+
+	vhost = vqueue->vhost;
+	epf = vhost->epf;
+	dev = &epf->dev;
+	epc = epf->epc;
+	func_no = epf->func_no;
+
+	vq = vqueue->vq;
+	reg = vhost->reg;
+	reg_queue = &reg->vq[vq->index];
+	vq_phys_addr = vqueue->phys_addr;
+	vq_remote_addr = reg_queue->queue_addr;
+	vq_size = vqueue->size;
+
+	ret = pci_epc_map_addr(epc, func_no, vq_phys_addr, vq_remote_addr,
+			       vq_size);
+	if (ret) {
+		dev_err(dev, "Failed to map outbound address\n");
+		return ret;
+	}
+
+	reg_queue->status |= VHOST_QUEUE_STATUS_ENABLE;
+
+	return 0;
+}
+
+/* pci_epf_vhost_queue_deactivate - Unmap virtqueue local address from remote
+ * virtqueue address
+ * @vqueue: struct epf_vhost_queue holding the local virtqueue address
+ *
+ * Unmap virtqueue local address from remote virtqueue address.
+ */
+static void pci_epf_vhost_queue_deactivate(struct epf_vhost_queue *vqueue)
+{
+	struct epf_vhost_reg_queue *reg_queue;
+	struct vhost_virtqueue *vq;
+	struct epf_vhost_reg *reg;
+	phys_addr_t vq_phys_addr;
+	struct epf_vhost *vhost;
+	struct pci_epf *epf;
+	struct pci_epc *epc;
+	u8 func_no;
+
+	vhost = vqueue->vhost;
+
+	epf = vhost->epf;
+	epc = epf->epc;
+	func_no = epf->func_no;
+	vq_phys_addr = vqueue->phys_addr;
+
+	pci_epc_unmap_addr(epc, func_no, vq_phys_addr);
+
+	reg = vhost->reg;
+	vq = vqueue->vq;
+	reg_queue = &reg->vq[vq->index];
+	reg_queue->status &= ~VHOST_QUEUE_STATUS_ENABLE;
+}
+
+/* pci_epf_vhost_queue_cmd_handler - Handle commands from remote EPF virtio
+ * driver sent for a particular virtqueue
+ * @work: The work_struct holding the pci_epf_vhost_queue_cmd_handler()
+ * function that is scheduled
+ *
+ * Handle commands from the remote EPF virtio driver sent for a particular
+ * virtqueue to activate/de-activate a virtqueue or to send notification to
+ * the vhost client driver.
+ */
+static void pci_epf_vhost_queue_cmd_handler(struct work_struct *work)
+{
+	struct epf_vhost_reg_queue *reg_queue;
+	struct epf_vhost_queue *vqueue;
+	struct vhost_virtqueue *vq;
+	struct epf_vhost_reg *reg;
+	struct epf_vhost *vhost;
+	struct device *dev;
+	u8 command;
+	int ret;
+
+	vqueue = container_of(work, struct epf_vhost_queue, cmd_handler.work);
+	vhost = vqueue->vhost;
+	reg = vhost->reg;
+	vq = vqueue->vq;
+	reg_queue = &reg->vq[vq->index];
+	dev = &vhost->epf->dev;
+
+	command = reg_queue->cmd;
+	if (!command)
+		goto reset_handler;
+
+	reg_queue->cmd = 0;
+	vq = vqueue->vq;
+
+	switch (command) {
+	case QUEUE_CMD_ACTIVATE:
+		ret = pci_epf_vhost_queue_activate(vqueue);
+		if (ret)
+			reg_queue->cmd_status = QUEUE_CMD_STATUS_ERROR;
+		else
+			reg_queue->cmd_status = QUEUE_CMD_STATUS_OKAY;
+		break;
+	case QUEUE_CMD_DEACTIVATE:
+		pci_epf_vhost_queue_deactivate(vqueue);
+		reg_queue->cmd_status = QUEUE_CMD_STATUS_OKAY;
+		break;
+	case QUEUE_CMD_NOTIFY:
+		vhost_virtqueue_callback(vqueue->vq);
+		reg_queue->cmd_status = QUEUE_CMD_STATUS_OKAY;
+		break;
+	default:
+		dev_err(dev, "UNKNOWN QUEUE command: %d\n", command);
+		break;
+	}
+
+reset_handler:
+	queue_delayed_work(kpcivhost_workqueue, &vqueue->cmd_handler,
+			   msecs_to_jiffies(1));
+}
+
+/* pci_epf_vhost_write - Write data to buffer provided by remote virtio driver
+ * @vdev: Vhost device that communicates with the remote virtio device
+ * @dst: Buffer address present in the memory of the remote system to which
+ * data should be written
+ * @src: Buffer address in the local device provided by the vhost client driver
+ * @len: Length of the data to be copied from @src to @dst
+ *
+ * Write data to the buffer provided by the remote virtio driver from the
+ * buffer provided by the vhost client driver.
+ */
+static int pci_epf_vhost_write(struct vhost_dev *vdev, u64 dst, void *src, int len)
+{
+	const struct pci_epc_features *epc_features;
+	struct epf_vhost *vhost;
+	phys_addr_t phys_addr;
+	struct pci_epf *epf;
+	struct pci_epc *epc;
+	void __iomem *addr;
+	struct device *dev;
+	int offset, ret;
+	u64 dst_addr;
+	size_t align;
+	u8 func_no;
+
+	vhost = to_epf_vhost(vdev);
+	epf = vhost->epf;
+	dev = &epf->dev;
+	epc = epf->epc;
+	func_no = epf->func_no;
+	epc_features = vhost->epc_features;
+	align = epc_features->align;
+
+	offset = dst & (align - 1);
+	dst_addr = dst & ~(align - 1);
+
+	addr = pci_epc_mem_alloc_addr(epc, &phys_addr, len);
+	if (!addr) {
+		dev_err(dev, "Failed to allocate outbound address\n");
+		return -ENOMEM;
+	}
+
+	ret = pci_epc_map_addr(epc, func_no, phys_addr, dst_addr, len);
+	if (ret) {
+		dev_err(dev, "Failed to map outbound address\n");
+		goto ret;
+	}
+
+	memcpy_toio(addr + offset, src, len);
+
+	pci_epc_unmap_addr(epc, func_no, phys_addr);
+
+ret:
+	pci_epc_mem_free_addr(epc, phys_addr, addr, len);
+
+	return ret;
+}
+
+/* pci_epf_vhost_read - Read data from buffer provided by remote virtio driver
+ * @vdev: Vhost device that communicates with the remote virtio device
+ * @dst: Buffer address in the local device provided by the vhost client driver
+ * @src: Buffer address in the remote device provided by the remote virtio
+ * driver
+ * @len: Length of the data to be copied from @src to @dst
+ *
+ * Read data from the buffer provided by the remote virtio driver to the
+ * address provided by the vhost client driver.
+ */ +static int pci_epf_vhost_read(struct vhost_dev *vdev, void *dst, u64 src, int len) +{ + const struct pci_epc_features *epc_features; + struct epf_vhost *vhost; + phys_addr_t phys_addr; + struct pci_epf *epf; + struct pci_epc *epc; + void __iomem *addr; + struct device *dev; + int offset, ret; + u64 src_addr; + size_t align; + u8 func_no; + + vhost = to_epf_vhost(vdev); + epf = vhost->epf; + dev = &epf->dev; + epc = epf->epc; + func_no = epf->func_no; + epc_features = vhost->epc_features; + align = epc_features->align; + + offset = src & (align - 1); + src_addr = src & ~(align - 1); + + addr = pci_epc_mem_alloc_addr(epc, &phys_addr, len); + if (!addr) { + dev_err(dev, "Failed to allocate outbound address\n"); + return -ENOMEM; + } + + ret = pci_epc_map_addr(epc, func_no, phys_addr, src_addr, len); + if (ret) { + dev_err(dev, "Failed to map outbound address\n"); + goto ret; + } + + memcpy_fromio(dst, addr + offset, len); + + pci_epc_unmap_addr(epc, func_no, phys_addr); + +ret: + pci_epc_mem_free_addr(epc, phys_addr, addr, len); + + return ret; +} + +/* pci_epf_vhost_notify - Send notification to the remote virtqueue + * @vq: The local vhost virtqueue corresponding to the remote virtio virtqueue + * + * Use endpoint core framework to raise MSI-X interrupt to notify the remote + * virtqueue. 
+ */
+static void pci_epf_vhost_notify(struct vhost_virtqueue *vq)
+{
+	struct epf_vhost_reg_queue *reg_queue;
+	struct epf_vhost_reg *reg;
+	struct epf_vhost *vhost;
+	struct vhost_dev *vdev;
+	struct pci_epf *epf;
+	struct pci_epc *epc;
+	u8 func_no;
+
+	vdev = vq->dev;
+	vhost = to_epf_vhost(vdev);
+	epf = vhost->epf;
+	func_no = epf->func_no;
+	epc = epf->epc;
+	reg = vhost->reg;
+	reg_queue = &reg->vq[vq->index];
+
+	pci_epc_raise_irq(epc, func_no, PCI_EPC_IRQ_MSIX,
+			  reg_queue->msix_vector + 1);
+}
+
+/* pci_epf_vhost_del_vqs - Delete all the vqs associated with the vhost device
+ * @vdev: Vhost device that communicates with the remote virtio device
+ *
+ * Delete all the vqs associated with the vhost device and free the memory
+ * address reserved for accessing the remote virtqueue.
+ */
+static void pci_epf_vhost_del_vqs(struct vhost_dev *vdev)
+{
+	struct epf_vhost_queue *vqueue;
+	struct vhost_virtqueue *vq;
+	phys_addr_t vq_phys_addr;
+	struct epf_vhost *vhost;
+	void __iomem *vq_addr;
+	unsigned int vq_size;
+	struct pci_epf *epf;
+	struct pci_epc *epc;
+	int i;
+
+	vhost = to_epf_vhost(vdev);
+	epf = vhost->epf;
+	epc = epf->epc;
+
+	for (i = 0; i < vdev->nvqs; i++) {
+		vq = vdev->vqs[i];
+		if (IS_ERR_OR_NULL(vq))
+			continue;
+
+		vqueue = &vhost->vqueue[i];
+		vq_phys_addr = vqueue->phys_addr;
+		vq_addr = vqueue->addr;
+		vq_size = vqueue->size;
+		pci_epc_mem_free_addr(epc, vq_phys_addr, vq_addr, vq_size);
+		kfree(vq);
+	}
+}
+
+/* pci_epf_vhost_create_vq - Create a new vhost virtqueue
+ * @vdev: Vhost device that communicates with the remote virtio device
+ * @index: Index of the vhost virtqueue
+ * @num_bufs: The number of buffers that should be supported by the vhost
+ * virtqueue (number of descriptors in the vhost virtqueue)
+ * @callback: Callback function associated with the virtqueue
+ *
+ * Create a new vhost virtqueue which can be used by the vhost client driver
+ * to access the remote virtio.
This sets up the local address of the vhost + * virtqueue but shouldn't be accessed until the virtio sets the status to + * VIRTIO_CONFIG_S_DRIVER_OK. + */ +static struct vhost_virtqueue * +pci_epf_vhost_create_vq(struct vhost_dev *vdev, int index, + unsigned int num_bufs, + void (*callback)(struct vhost_virtqueue *)) +{ + struct epf_vhost_reg_queue *reg_queue; + struct epf_vhost_queue *vqueue; + struct epf_vhost_reg *reg; + struct vhost_virtqueue *vq; + phys_addr_t vq_phys_addr; + struct epf_vhost *vhost; + struct vringh *vringh; + void __iomem *vq_addr; + unsigned int vq_size; + struct vring *vring; + struct pci_epf *epf; + struct pci_epc *epc; + struct device *dev; + int ret; + + vhost = to_epf_vhost(vdev); + vqueue = &vhost->vqueue[index]; + reg = vhost->reg; + reg_queue = ®->vq[index]; + epf = vhost->epf; + epc = epf->epc; + dev = &epf->dev; + + vq = kzalloc(sizeof(*vq), GFP_KERNEL); + if (!vq) + return ERR_PTR(-ENOMEM); + + vq->dev = vdev; + vq->callback = callback; + vq->num = num_bufs; + vq->index = index; + vq->notify = pci_epf_vhost_notify; + vq->type = VHOST_TYPE_MMIO; + + vqueue->vq = vq; + vqueue->vhost = vhost; + + vringh = &vq->vringh; + vring = &vringh->vring; + reg_queue->num_buffers = num_bufs; + + vq_size = vring_size(num_bufs, VIRTIO_PCI_VRING_ALIGN); + vq_addr = pci_epc_mem_alloc_addr(epc, &vq_phys_addr, vq_size); + if (!vq_addr) { + dev_err(dev, "Failed to allocate virtqueue address\n"); + ret = -ENOMEM; + goto err_mem_alloc_addr; + } + + vring_init(vring, num_bufs, vq_addr, VIRTIO_PCI_VRING_ALIGN); + ret = vringh_init_mmio(vringh, 0, num_bufs, false, vring->desc, + vring->avail, vring->used); + if (ret) { + dev_err(dev, "Failed to init vringh\n"); + goto err_init_mmio; + } + + vqueue->phys_addr = vq_phys_addr; + vqueue->addr = vq_addr; + vqueue->size = vq_size; + + INIT_DELAYED_WORK(&vqueue->cmd_handler, pci_epf_vhost_queue_cmd_handler); + queue_work(kpcivhost_workqueue, &vqueue->cmd_handler.work); + + return vq; + +err_init_mmio: + 
pci_epc_mem_free_addr(epc, vq_phys_addr, vq_addr, vq_size); + +err_mem_alloc_addr: + kfree(vq); + + return ERR_PTR(ret); +} + +/* pci_epf_vhost_create_vqs - Create vhost virtqueues for vhost device + * @vdev: Vhost device that communicates with the remote virtio device + * @nvqs: Number of vhost virtqueues to be created + * @num_bufs: The number of buffers that should be supported by the vhost + * virtqueue (number of descriptors in the vhost virtqueue) + * @vqs: Pointers to all the created vhost virtqueues + * @callback: Callback function associated with the virtqueue + * @names: Names associated with each virtqueue + * + * Create vhost virtqueues for vhost device. This acts as a wrapper to + * pci_epf_vhost_create_vq() which creates individual vhost virtqueue. + */ +static int pci_epf_vhost_create_vqs(struct vhost_dev *vdev, unsigned int nvqs, + unsigned int num_bufs, + struct vhost_virtqueue *vqs[], + vhost_vq_callback_t *callbacks[], + const char * const names[]) +{ + struct epf_vhost *vhost; + struct pci_epf *epf; + struct device *dev; + int ret, i; + + vhost = to_epf_vhost(vdev); + epf = vhost->epf; + dev = &epf->dev; + + for (i = 0; i < nvqs; i++) { + vqs[i] = pci_epf_vhost_create_vq(vdev, i, num_bufs, + callbacks[i]); + if (IS_ERR_OR_NULL(vqs[i])) { + ret = PTR_ERR(vqs[i]); + dev_err(dev, "Failed to create virtqueue\n"); + goto err; + } + } + + vdev->nvqs = nvqs; + vdev->vqs = vqs; + + return 0; + +err: + pci_epf_vhost_del_vqs(vdev); + return ret; +} + +/* pci_epf_vhost_set_features - vhost_config_ops to set vhost device features + * @vdev: Vhost device that communicates with the remote virtio device + * @features: Features supported by the vhost client driver + * + * vhost_config_ops invoked by the vhost client driver to set vhost device + * features. 
+ */
+static int pci_epf_vhost_set_features(struct vhost_dev *vdev, u64 features)
+{
+	struct epf_vhost_reg *reg;
+	struct epf_vhost *vhost;
+
+	vhost = to_epf_vhost(vdev);
+	reg = vhost->reg;
+
+	reg->host_features = features;
+
+	return 0;
+}
+
+/* pci_epf_vhost_set_status - vhost_config_ops to set vhost device status
+ * @vdev: Vhost device that communicates with the remote virtio device
+ * @status: Vhost device status configured by vhost client driver
+ *
+ * vhost_config_ops invoked by the vhost client driver to set vhost device
+ * status.
+ */
+static int pci_epf_vhost_set_status(struct vhost_dev *vdev, u8 status)
+{
+	struct epf_vhost_reg *reg;
+	struct epf_vhost *vhost;
+
+	vhost = to_epf_vhost(vdev);
+	reg = vhost->reg;
+
+	reg->device_status = status;
+
+	return 0;
+}
+
+/* pci_epf_vhost_get_status - vhost_config_ops to get vhost device status
+ * @vdev: Vhost device that communicates with the remote virtio device
+ *
+ * vhost_config_ops invoked by the vhost client driver to get vhost device
+ * status set by the remote virtio driver.
+ */
+static u8 pci_epf_vhost_get_status(struct vhost_dev *vdev)
+{
+	struct epf_vhost_reg *reg;
+	struct epf_vhost *vhost;
+
+	vhost = to_epf_vhost(vdev);
+	reg = vhost->reg;
+
+	return reg->device_status;
+}
+
+static const struct vhost_config_ops pci_epf_vhost_ops = {
+	.create_vqs = pci_epf_vhost_create_vqs,
+	.del_vqs = pci_epf_vhost_del_vqs,
+	.write = pci_epf_vhost_write,
+	.read = pci_epf_vhost_read,
+	.set_features = pci_epf_vhost_set_features,
+	.set_status = pci_epf_vhost_set_status,
+	.get_status = pci_epf_vhost_get_status,
+};
+
+/* pci_epf_vhost_write_header - Write to PCIe standard configuration space
+ * header
+ * @vhost: EPF vhost containing the vhost device that communicates with the
+ * remote virtio device
+ *
+ * Invokes endpoint core framework's pci_epc_write_header() to write to the
+ * standard configuration space header.
+ */
+static int pci_epf_vhost_write_header(struct epf_vhost *vhost)
+{
+	struct pci_epf_header *header;
+	struct vhost_dev *vdev;
+	struct pci_epc *epc;
+	struct pci_epf *epf;
+	struct device *dev;
+	u8 func_no;
+	int ret;
+
+	vdev = &vhost->vdev;
+	epf = vhost->epf;
+	dev = &epf->dev;
+	epc = epf->epc;
+	func_no = epf->func_no;
+	header = epf->header;
+
+	ret = pci_epc_write_header(epc, func_no, header);
+	if (ret) {
+		dev_err(dev, "Configuration header write failed\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+/* pci_epf_vhost_release_dev - Callback function to free device
+ * @dev: Device in vhost_dev that has to be freed
+ *
+ * Callback function from device core invoked to free the device after
+ * all references have been removed. This frees the allocated memory for
+ * struct epf_vhost.
+ */
+static void pci_epf_vhost_release_dev(struct device *dev)
+{
+	struct epf_vhost *vhost;
+	struct vhost_dev *vdev;
+
+	vdev = to_vhost_dev(dev);
+	vhost = to_epf_vhost(vdev);
+
+	kfree(vhost);
+}
+
+/* pci_epf_vhost_register - Register a vhost device
+ * @vhost: EPF vhost containing the vhost device that communicates with the
+ * remote virtio device
+ *
+ * Invokes vhost_register_device() to register a vhost device after populating
+ * the deviceID and vendorID of the vhost device.
+ */ +static int pci_epf_vhost_register(struct epf_vhost *vhost) +{ + struct vhost_dev *vdev; + struct pci_epf *epf; + struct device *dev; + int ret; + + vdev = &vhost->vdev; + epf = vhost->epf; + dev = &epf->dev; + + vdev->dev.parent = dev; + vdev->dev.release = pci_epf_vhost_release_dev; + vdev->id.device = vhost->epf->header->subsys_id; + vdev->id.vendor = vhost->epf->header->subsys_vendor_id; + vdev->ops = &pci_epf_vhost_ops; + + ret = vhost_register_device(vdev); + if (ret) { + dev_err(dev, "Failed to register vhost device\n"); + return ret; + } + + return 0; +} + +/* pci_epf_vhost_configure_bar - Configure BAR of EPF device + * @vhost: EPF vhost containing the vhost device that communicates with the + * remote virtio device + * + * Allocate memory for the standard virtio configuration space and map it to + * the first free BAR. + */ +static int pci_epf_vhost_configure_bar(struct epf_vhost *vhost) +{ + size_t msix_table_size = 0, pba_size = 0, align, bar_size; + const struct pci_epc_features *epc_features; + struct pci_epf_bar *epf_bar; + struct vhost_dev *vdev; + struct pci_epf *epf; + struct pci_epc *epc; + struct device *dev; + bool msix_capable; + u32 config_size; + int barno, ret; + void *base; + u64 size; + + vdev = &vhost->vdev; + epf = vhost->epf; + dev = &epf->dev; + epc = epf->epc; + + epc_features = vhost->epc_features; + barno = pci_epc_get_first_free_bar(epc_features); + if (barno < 0) { + dev_err(dev, "Failed to get free BAR\n"); + return barno; + } + + size = epc_features->bar_fixed_size[barno]; + align = epc_features->align; + /* Check if epc_features is populated incorrectly */ + if ((!IS_ALIGNED(size, align))) + return -EINVAL; + + config_size = sizeof(struct epf_vhost_reg) + VHOST_DEVICE_CONFIG_SIZE; + config_size = ALIGN(config_size, 8); + + msix_capable = epc_features->msix_capable; + if (msix_capable) { + msix_table_size = PCI_MSIX_ENTRY_SIZE * epf->msix_interrupts; + vhost->msix_table_offset = config_size; + vhost->msix_bar = barno; + /* 
Align to QWORD or 8 Bytes */ + pba_size = ALIGN(DIV_ROUND_UP(epf->msix_interrupts, 8), 8); + } + + bar_size = config_size + msix_table_size + pba_size; + + if (!align) + bar_size = roundup_pow_of_two(bar_size); + else + bar_size = ALIGN(bar_size, align); + + if (!size) + size = bar_size; + else if (size < bar_size) + return -EINVAL; + + base = pci_epf_alloc_space(epf, size, barno, align, + PRIMARY_INTERFACE); + if (!base) { + dev_err(dev, "Failed to allocate configuration region\n"); + return -ENOMEM; + } + + epf_bar = &epf->bar[barno]; + ret = pci_epc_set_bar(epc, epf->func_no, epf_bar); + if (ret) { + dev_err(dev, "Failed to set BAR: %d\n", barno); + goto err_set_bar; + } + + vhost->reg = base; + + return 0; + +err_set_bar: + pci_epf_free_space(epf, base, barno, PRIMARY_INTERFACE); + + return ret; +} + +/* pci_epf_vhost_configure_interrupts - Configure MSI/MSI-X capability of EPF + * device + * @vhost: EPF vhost containing the vhost device that communicates with the + * remote virtio device + * + * Configure MSI/MSI-X capability of EPF device. This will be used to interrupt + * the vhost virtqueue. 
+ */
+static int pci_epf_vhost_configure_interrupts(struct epf_vhost *vhost)
+{
+	const struct pci_epc_features *epc_features;
+	struct pci_epf *epf;
+	struct pci_epc *epc;
+	struct device *dev;
+	int ret;
+
+	epc_features = vhost->epc_features;
+	epf = vhost->epf;
+	dev = &epf->dev;
+	epc = epf->epc;
+
+	if (epc_features->msi_capable) {
+		ret = pci_epc_set_msi(epc, epf->func_no,
+				      EPF_VHOST_MAX_INTERRUPTS);
+		if (ret) {
+			dev_err(dev, "MSI configuration failed\n");
+			return ret;
+		}
+	}
+
+	if (epc_features->msix_capable) {
+		ret = pci_epc_set_msix(epc, epf->func_no,
+				       EPF_VHOST_MAX_INTERRUPTS,
+				       vhost->msix_bar,
+				       vhost->msix_table_offset);
+		if (ret) {
+			dev_err(dev, "MSI-X configuration failed\n");
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/* pci_epf_vhost_cfs_link - Link vhost client driver with EPF vhost to get
+ * the deviceID and vendorID to be written to the PCIe config space
+ * @epf_vhost_item: Config item representing the EPF vhost created by this
+ * driver
+ * @epf: Endpoint function device that is bound to the endpoint controller
+ *
+ * This is invoked when the user creates a softlink between the vhost client
+ * and the EPF vhost. This gets the deviceID and vendorID data from the vhost
+ * client and copies it to the subsys_id and subsys_vendor_id of the EPF
+ * header. These will be used by the remote virtio to bind a virtio client
+ * driver.
+ */ +static int pci_epf_vhost_cfs_link(struct config_item *epf_vhost_item, + struct config_item *driver_item) +{ + struct vhost_driver_item *vdriver_item; + struct epf_vhost *vhost; + + vdriver_item = to_vhost_driver_item(driver_item); + vhost = to_epf_vhost_from_ci(epf_vhost_item); + + vhost->epf->header->subsys_id = vdriver_item->device; + vhost->epf->header->subsys_vendor_id = vdriver_item->vendor; + + return 0; +} + +static struct configfs_item_operations pci_epf_vhost_cfs_ops = { + .allow_link = pci_epf_vhost_cfs_link, +}; + +static const struct config_item_type pci_epf_vhost_cfs_type = { + .ct_item_ops = &pci_epf_vhost_cfs_ops, + .ct_owner = THIS_MODULE, +}; + +/* pci_epf_vhost_cfs_work - Delayed work function to create configfs directory + * to perform EPF vhost specific initializations + * @work: The work_struct holding the pci_epf_vhost_cfs_work() function that + * is scheduled + * + * This is a delayed work function to create configfs directory to perform EPF + * vhost specific initializations. This configfs directory will be a + * sub-directory to the directory created by the user to create pci_epf device. 
+ */ +static void pci_epf_vhost_cfs_work(struct work_struct *work) +{ + struct epf_vhost *vhost = container_of(work, struct epf_vhost, + cfs_work.work); + struct pci_epf *epf = vhost->epf; + struct device *dev = &epf->dev; + struct config_group *group; + struct vhost_dev *vdev; + int ret; + + if (!epf->group) { + queue_delayed_work(kpcivhost_workqueue, &vhost->cfs_work, + msecs_to_jiffies(50)); + return; + } + + vdev = &vhost->vdev; + group = &vhost->group; + config_group_init_type_name(group, dev_name(dev), &pci_epf_vhost_cfs_type); + ret = configfs_register_group(epf->group, group); + if (ret) { + dev_err(dev, "Failed to register configfs group %s\n", dev_name(dev)); + return; + } +} + +/* pci_epf_vhost_probe - Initialize struct epf_vhost when a new EPF device is + * created + * @epf: Endpoint function device that is bound to this driver + * + * Probe function to initialize struct epf_vhost when a new EPF device is + * created. + */ +static int pci_epf_vhost_probe(struct pci_epf *epf) +{ + struct epf_vhost *vhost; + + vhost = kzalloc(sizeof(*vhost), GFP_KERNEL); + if (!vhost) + return -ENOMEM; + + epf->header = &epf_vhost_header; + vhost->epf = epf; + + epf_set_drvdata(epf, vhost); + INIT_DELAYED_WORK(&vhost->cmd_handler, pci_epf_vhost_cmd_handler); + INIT_DELAYED_WORK(&vhost->cfs_work, pci_epf_vhost_cfs_work); + queue_delayed_work(kpcivhost_workqueue, &vhost->cfs_work, + msecs_to_jiffies(50)); + + return 0; +} + +/* pci_epf_vhost_remove - Free the initializations performed by + * pci_epf_vhost_probe() + * @epf: Endpoint function device that is bound to this driver + * + * Free the initializations performed by pci_epf_vhost_probe(). 
+ */
+static int pci_epf_vhost_remove(struct pci_epf *epf)
+{
+	struct epf_vhost *vhost;
+
+	vhost = epf_get_drvdata(epf);
+	cancel_delayed_work_sync(&vhost->cfs_work);
+
+	return 0;
+}
+
+/* pci_epf_vhost_bind - Bind callback to initialize the PCIe EP controller
+ * @epf: Endpoint function device that is bound to the endpoint controller
+ *
+ * pci_epf_vhost_bind() is invoked when an endpoint controller is bound to an
+ * endpoint function. This function initializes the endpoint controller with
+ * vhost endpoint function specific data.
+ */
+static int pci_epf_vhost_bind(struct pci_epf *epf)
+{
+	const struct pci_epc_features *epc_features;
+	struct epf_vhost *vhost;
+	struct pci_epc *epc;
+	struct device *dev;
+	int ret;
+
+	vhost = epf_get_drvdata(epf);
+	dev = &epf->dev;
+	epc = epf->epc;
+
+	epc_features = pci_epc_get_features(epc, epf->func_no);
+	if (!epc_features) {
+		dev_err(dev, "Failed to get EPC features\n");
+		return -EINVAL;
+	}
+	vhost->epc_features = epc_features;
+
+	ret = pci_epf_vhost_write_header(vhost);
+	if (ret) {
+		dev_err(dev, "Failed to write VHOST config header\n");
+		return ret;
+	}
+
+	ret = pci_epf_vhost_configure_bar(vhost);
+	if (ret) {
+		dev_err(dev, "Failed to configure BAR\n");
+		return ret;
+	}
+
+	ret = pci_epf_vhost_configure_interrupts(vhost);
+	if (ret) {
+		dev_err(dev, "Failed to configure interrupts\n");
+		return ret;
+	}
+
+	ret = pci_epf_vhost_register(vhost);
+	if (ret) {
+		dev_err(dev, "Failed to register VHOST device\n");
+		return ret;
+	}
+
+	queue_work(kpcivhost_workqueue, &vhost->cmd_handler.work);
+
+	return 0;
+}
+
+/* pci_epf_vhost_unbind - Unbind callback to clean up the PCIe EP controller
+ * @epf: Endpoint function device that is bound to the endpoint controller
+ *
+ * pci_epf_vhost_unbind() is invoked when the binding between the endpoint
+ * controller and the endpoint function is removed. This unregisters the vhost
+ * device and cancels any pending cmd_handler work.
+ */ +static void pci_epf_vhost_unbind(struct pci_epf *epf) +{ + struct epf_vhost *vhost; + struct vhost_dev *vdev; + + vhost = epf_get_drvdata(epf); + vdev = &vhost->vdev; + + cancel_delayed_work_sync(&vhost->cmd_handler); + if (device_is_registered(&vdev->dev)) + vhost_unregister_device(vdev); +} + +static struct pci_epf_ops epf_ops = { + .bind = pci_epf_vhost_bind, + .unbind = pci_epf_vhost_unbind, +}; + +static const struct pci_epf_device_id pci_epf_vhost_ids[] = { + { + .name = "pci-epf-vhost", + }, + { }, +}; + +static struct pci_epf_driver epf_vhost_driver = { + .driver.name = "pci_epf_vhost", + .probe = pci_epf_vhost_probe, + .remove = pci_epf_vhost_remove, + .id_table = pci_epf_vhost_ids, + .ops = &epf_ops, + .owner = THIS_MODULE, +}; + +static int __init pci_epf_vhost_init(void) +{ + int ret; + + kpcivhost_workqueue = alloc_workqueue("kpcivhost", WQ_MEM_RECLAIM | + WQ_HIGHPRI, 0); + ret = pci_epf_register_driver(&epf_vhost_driver); + if (ret) { + pr_err("Failed to register pci epf vhost driver --> %d\n", ret); + return ret; + } + + return 0; +} +module_init(pci_epf_vhost_init); + +static void __exit pci_epf_vhost_exit(void) +{ + pci_epf_unregister_driver(&epf_vhost_driver); +} +module_exit(pci_epf_vhost_exit); + +MODULE_DESCRIPTION("PCI EPF VHOST DRIVER"); +MODULE_AUTHOR("Kishon Vijay Abraham I "); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/vhost/vhost_cfs.c b/drivers/vhost/vhost_cfs.c index ae46e71968f1..ab0393289200 100644 --- a/drivers/vhost/vhost_cfs.c +++ b/drivers/vhost/vhost_cfs.c @@ -18,12 +18,6 @@ static struct config_group *vhost_driver_group; /* VHOST device like PCIe EP, NTB etc., */ static struct config_group *vhost_device_group; -struct vhost_driver_item { - struct config_group group; - u32 vendor; - u32 device; -}; - struct vhost_driver_group { struct config_group group; }; @@ -33,13 +27,6 @@ struct vhost_device_item { struct vhost_dev *vdev; }; -static inline -struct vhost_driver_item *to_vhost_driver_item(struct config_item *item) -{ 
- return container_of(to_config_group(item), struct vhost_driver_item, - group); -} - static inline struct vhost_device_item *to_vhost_device_item(struct config_item *item) { diff --git a/include/linux/vhost.h b/include/linux/vhost.h index be9341ffd266..640650311310 100644 --- a/include/linux/vhost.h +++ b/include/linux/vhost.h @@ -74,6 +74,7 @@ struct vhost_virtqueue { struct vhost_dev *dev; enum vhost_type type; struct vringh vringh; + int index; void (*callback)(struct vhost_virtqueue *vq); void (*notify)(struct vhost_virtqueue *vq); @@ -148,6 +149,12 @@ struct vhost_msg_node { struct list_head node; }; +struct vhost_driver_item { + struct config_group group; + u32 vendor; + u32 device; +}; + enum vhost_notify_event { NOTIFY_SET_STATUS, NOTIFY_FINALIZE_FEATURES, @@ -230,6 +237,13 @@ static inline void *vhost_get_drvdata(struct vhost_dev *vdev) return dev_get_drvdata(&vdev->dev); } +static inline +struct vhost_driver_item *to_vhost_driver_item(struct config_item *item) +{ + return container_of(to_config_group(item), struct vhost_driver_item, + group); +} + int vhost_register_driver(struct vhost_driver *driver); void vhost_unregister_driver(struct vhost_driver *driver); int vhost_register_device(struct vhost_dev *vdev); From patchwork Thu Jul 2 08:21:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kishon Vijay Abraham I X-Patchwork-Id: 192202
From: Kishon Vijay Abraham I
To: Ohad Ben-Cohen , Bjorn Andersson , Jon Mason , Dave Jiang , Allen Hubbe , Lorenzo Pieralisi , Bjorn Helgaas , "Michael S. Tsirkin" , Jason Wang , Paolo Bonzini , Stefan Hajnoczi , Stefano Garzarella
CC: , , , , , , ,
Subject: [RFC PATCH 20/22] NTB: Add a new NTB client driver to implement VIRTIO functionality
Date: Thu, 2 Jul 2020 13:51:41 +0530
Message-ID: <20200702082143.25259-21-kishon@ti.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>

Add a new NTB client driver to implement VIRTIO functionality. When two hosts are connected using NTB, one host should run the NTB client driver that implements VIRTIO functionality, and the other host should run the NTB client driver that implements VHOST functionality. This driver interfaces with the VIRTIO layer so that any virtio client driver can exchange data with the remote vhost client driver.

Since each NTB host can expose only a limited number of contiguous memory ranges to the remote NTB host (bounded by the number of memory windows supported), reserve a contiguous memory range using dma_alloc_coherent() and then manage this area with a gen_pool that provides buffers to the virtio client driver. The virtio client driver should provide only buffers from this region to the remote vhost driver.
Signed-off-by: Kishon Vijay Abraham I --- drivers/ntb/Kconfig | 9 + drivers/ntb/Makefile | 1 + drivers/ntb/ntb_virtio.c | 853 +++++++++++++++++++++++++++++++++++++++ drivers/ntb/ntb_virtio.h | 56 +++ 4 files changed, 919 insertions(+) create mode 100644 drivers/ntb/ntb_virtio.c create mode 100644 drivers/ntb/ntb_virtio.h -- 2.17.1 diff --git a/drivers/ntb/Kconfig b/drivers/ntb/Kconfig index df16c755b4da..e171b3256f68 100644 --- a/drivers/ntb/Kconfig +++ b/drivers/ntb/Kconfig @@ -37,4 +37,13 @@ config NTB_TRANSPORT If unsure, say N. +config NTB_VIRTIO + tristate "NTB VIRTIO" + help + The NTB virtio driver sits between the NTB HW driver and the virtio + client driver and lets the virtio client driver exchange data with + the remote vhost driver over the NTB hardware. + + If unsure, say N. + endif # NTB diff --git a/drivers/ntb/Makefile b/drivers/ntb/Makefile index 3a6fa181ff99..d37ab488bcbc 100644 --- a/drivers/ntb/Makefile +++ b/drivers/ntb/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0-only obj-$(CONFIG_NTB) += ntb.o hw/ test/ obj-$(CONFIG_NTB_TRANSPORT) += ntb_transport.o +obj-$(CONFIG_NTB_VIRTIO) += ntb_virtio.o ntb-y := core.o ntb-$(CONFIG_NTB_MSI) += msi.o diff --git a/drivers/ntb/ntb_virtio.c b/drivers/ntb/ntb_virtio.c new file mode 100644 index 000000000000..10fbe189ab8b --- /dev/null +++ b/drivers/ntb/ntb_virtio.c @@ -0,0 +1,853 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * NTB Client Driver to implement VIRTIO functionality + * + * Copyright (C) 2020 Texas Instruments + * Author: Kishon Vijay Abraham I + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ntb_virtio.h" + +#define BUFFER_OFFSET 0x20000 + +struct ntb_virtio_queue { + struct delayed_work db_handler; + struct virtqueue *vq; +}; + +struct ntb_virtio { + struct ntb_virtio_queue vqueue[MAX_VQS]; + struct work_struct link_cleanup; + struct delayed_work link_work; + struct virtio_device vdev; + 
struct gen_pool *gen_pool; + dma_addr_t mw_phys_addr; + struct virtqueue **vqs; + struct ntb_dev *ndev; + struct device *dev; + /* mutex to protect sending commands to ntb vhost */ + struct mutex lock; + void *mw_addr; + u64 mw_size; +}; + +#define to_ntb_virtio(v) container_of((v), struct ntb_virtio, vdev) + +/* ntb_virtio_send_command - Send commands to the remote NTB vhost device + * @ntb: NTB virtio device that communicates with the remote vhost device + * @command: The command that has to be sent to the remote vhost device + * + * Helper function to send commands to the remote NTB vhost device. + */ +static int ntb_virtio_send_command(struct ntb_virtio *ntb, u32 command) +{ + struct ntb_dev *ndev; + ktime_t timeout; + bool timedout; + int ret = 0; + u8 status; + + ndev = ntb->ndev; + + mutex_lock(&ntb->lock); + ntb_peer_spad_write(ndev, NTB_DEF_PEER_IDX, VHOST_COMMAND, command); + timeout = ktime_add_ms(ktime_get(), COMMAND_TIMEOUT); + while (1) { + timedout = ktime_after(ktime_get(), timeout); + status = ntb_peer_spad_read(ndev, NTB_DEF_PEER_IDX, + VHOST_COMMAND_STATUS); + if (status == HOST_CMD_STATUS_ERROR) { + ret = -EINVAL; + break; + } + + if (status == HOST_CMD_STATUS_OKAY) + break; + + if (WARN_ON(timedout)) { + ret = -ETIMEDOUT; + break; + } + + usleep_range(5, 10); + } + + ntb_peer_spad_write(ndev, NTB_DEF_PEER_IDX, VHOST_COMMAND_STATUS, + HOST_CMD_STATUS_NONE); + mutex_unlock(&ntb->lock); + + return ret; +} + +/* ntb_virtio_get_features - virtio_config_ops to get vhost device features + * @vdev: Virtio device that communicates with remove vhost device + * + * virtio_config_ops to get vhost device features. The remote vhost device + * populates the vhost device features in scratchpad register. 
+ */
+static u64 ntb_virtio_get_features(struct virtio_device *vdev)
+{
+	struct ntb_virtio *ntb;
+	struct ntb_dev *ndev;
+	u64 val;
+
+	ntb = to_ntb_virtio(vdev);
+	ndev = ntb->ndev;
+
+	val = ntb_peer_spad_read(ndev, NTB_DEF_PEER_IDX, VHOST_FEATURES_UPPER);
+	val <<= 32;
+	val |= ntb_peer_spad_read(ndev, NTB_DEF_PEER_IDX, VHOST_FEATURES_LOWER);
+
+	return val;
+}
+
+/* ntb_virtio_finalize_features - virtio_config_ops to finalize features with
+ *   the remote vhost device
+ * @vdev: Virtio device that communicates with the remote vhost device
+ *
+ * Indicate the negotiated features to the remote vhost device by sending
+ * the HOST_CMD_FINALIZE_FEATURES command.
+ */
+static int ntb_virtio_finalize_features(struct virtio_device *vdev)
+{
+	struct ntb_virtio *ntb;
+	struct ntb_dev *ndev;
+	struct device *dev;
+	int ret;
+
+	ntb = to_ntb_virtio(vdev);
+	ndev = ntb->ndev;
+	dev = ntb->dev;
+
+	/* Give virtio_ring a chance to accept features. */
+	vring_transport_features(vdev);
+
+	ntb_peer_spad_write(ndev, NTB_DEF_PEER_IDX, VIRTIO_FEATURES_LOWER,
+			    lower_32_bits(vdev->features));
+	ntb_peer_spad_write(ndev, NTB_DEF_PEER_IDX, VIRTIO_FEATURES_UPPER,
+			    upper_32_bits(vdev->features));
+
+	ret = ntb_virtio_send_command(ntb, HOST_CMD_FINALIZE_FEATURES);
+	if (ret) {
+		dev_err(dev, "Failed to finalize features\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/* ntb_virtio_get_status - virtio_config_ops to get device status
+ * @vdev: Virtio device that communicates with the remote vhost device
+ *
+ * virtio_config_ops to get device status. The remote vhost device
+ * populates the vhost device status in a scratchpad register.
+ */ +static u8 ntb_virtio_get_status(struct virtio_device *vdev) +{ + struct ntb_virtio *ntb; + struct ntb_dev *ndev; + + ntb = to_ntb_virtio(vdev); + ndev = ntb->ndev; + + return ntb_peer_spad_read(ndev, NTB_DEF_PEER_IDX, VHOST_DEVICE_STATUS); +} + +/* ntb_virtio_set_status - virtio_config_ops to set device status + * @vdev: Virtio device that communicates with remove vhost device + * + * virtio_config_ops to set device status. This function updates the + * status in scratchpad register and sends a notification to the vhost + * device using HOST_CMD_SET_STATUS command. + */ +static void ntb_virtio_set_status(struct virtio_device *vdev, u8 status) +{ + struct ntb_virtio *ntb; + struct ntb_dev *ndev; + struct device *dev; + int ret; + + ntb = to_ntb_virtio(vdev); + ndev = ntb->ndev; + dev = ntb->dev; + + /* We should never be setting status to 0. */ + if (WARN_ON(!status)) + return; + + ntb_peer_spad_write(ndev, NTB_DEF_PEER_IDX, VHOST_DEVICE_STATUS, + status); + + ret = ntb_virtio_send_command(ntb, HOST_CMD_SET_STATUS); + if (ret) + dev_err(dev, "Failed to set device status\n"); +} + +/* ntb_virtio_vq_db_work - Handle doorbell event receive for a virtqueue + * @work: The work_struct holding the ntb_virtio_vq_db_work() function for every + * created virtqueue + * + * This function is invoked when the remote vhost driver sends a notification + * to the virtqueue. (vhost_virtqueue_kick() on the remote vhost driver). This + * function invokes the virtio client driver's virtqueue callback. 
+ */ +static void ntb_virtio_vq_db_work(struct work_struct *work) +{ + struct ntb_virtio_queue *vqueue; + struct virtqueue *vq; + + vqueue = container_of(work, struct ntb_virtio_queue, db_handler.work); + vq = vqueue->vq; + + if (!vq->callback) + return; + + vq->callback(vq); +} + +/* ntb_virtio_notify - Send notification to the remote vhost virtqueue + * @vq: The local virtio virtqueue corresponding to the remote vhost virtqueue + * where the notification has to be sent + * + * Use NTB doorbell to send notification for the remote vhost virtqueue. + */ +bool ntb_virtio_notify(struct virtqueue *vq) +{ + struct ntb_virtio *ntb; + struct ntb_dev *ndev; + struct device *dev; + int ret; + + ntb = vq->priv; + ndev = ntb->ndev; + dev = ntb->dev; + + ret = ntb_peer_db_set(ntb->ndev, vq->index); + if (ret) { + dev_err(dev, "Failed to notify remote virtqueue\n"); + return false; + } + + return true; +} + +/* ntb_virtio_find_vq - Find a virtio virtqueue and instantiate it + * @vdev: Virtio device that communicates with remove vhost device + * @index: Index of the vhost virtqueue + * @callback: Callback function that has to be associated with the created + * virtqueue + * + * Create a new virtio virtqueue which will be used by the remote vhost + * to access this virtio device. 
+ */ +static struct virtqueue * +ntb_virtio_find_vq(struct virtio_device *vdev, unsigned int index, + void (*callback)(struct virtqueue *vq), + const char *name, bool ctx) +{ + struct ntb_virtio_queue *vqueue; + resource_size_t xlat_align_size; + unsigned int vq_size, offset; + resource_size_t xlat_align; + struct ntb_virtio *ntb; + u16 queue_num_buffers; + struct ntb_dev *ndev; + struct virtqueue *vq; + struct device *dev; + void *mw_addr; + void *vq_addr; + int ret; + + ntb = to_ntb_virtio(vdev); + ndev = ntb->ndev; + dev = ntb->dev; + mw_addr = ntb->mw_addr; + + queue_num_buffers = ntb_peer_spad_read(ndev, NTB_DEF_PEER_IDX, + VHOST_QUEUE_NUM_BUFFERS(index)); + if (!queue_num_buffers) { + dev_err(dev, "Invalid number of buffers\n"); + return ERR_PTR(-EINVAL); + } + + ret = ntb_mw_get_align(ndev, NTB_DEF_PEER_IDX, 0, &xlat_align, + &xlat_align_size, NULL); + if (ret) { + dev_err(dev, "Failed to get memory window align size\n"); + return ERR_PTR(ret); + } + + /* zero vring */ + vq_size = vring_size(queue_num_buffers, xlat_align); + offset = index * vq_size; + if (offset + vq_size >= BUFFER_OFFSET) { + dev_err(dev, "Not enough memory for allocating vq\n"); + return ERR_PTR(-ENOMEM); + } + + vq_addr = mw_addr + offset; + memset(vq_addr, 0, vq_size); + + /* + * Create the new vq, and tell virtio we're not interested in + * the 'weak' smp barriers, since we're talking with a real device. 
+ */ + vq = vring_new_virtqueue(index, queue_num_buffers, xlat_align, vdev, + false, ctx, vq_addr, ntb_virtio_notify, + callback, name); + if (!vq) { + dev_err(dev, "vring_new_virtqueue %s failed\n", name); + return ERR_PTR(-ENOMEM); + } + + vq->vdev = vdev; + vq->priv = ntb; + + vqueue = &ntb->vqueue[index]; + vqueue->vq = vq; + + INIT_DELAYED_WORK(&vqueue->db_handler, ntb_virtio_vq_db_work); + + return vq; +} + +/* ntb_virtio_find_vqs - Find virtio virtqueues requested by virtio driver and + * instantiate them + * @vdev: Virtio device that communicates with remove vhost device + * @nvqs: The number of virtqueues to be created + * @vqs: Array of pointers to the created vhost virtqueues + * @callback: Array of callback function that has to be associated with + * each of the created virtqueues + * @names: Names that should be associated with each virtqueue + * @ctx: Context flag to find virtqueue + * @desc: Interrupt affinity descriptor + * + * Find virtio virtqueues requested by virtio driver and instantiate them. The + * number of buffers supported by the virtqueue is provided by the vhost + * device. + */ +static int +ntb_virtio_find_vqs(struct virtio_device *vdev, unsigned int nvqs, + struct virtqueue *vqs[], vq_callback_t *callbacks[], + const char * const names[], const bool *ctx, + struct irq_affinity *desc) +{ + struct ntb_virtio *ntb; + struct device *dev; + int queue_idx = 0; + int i; + + ntb = to_ntb_virtio(vdev); + dev = ntb->dev; + + for (i = 0; i < nvqs; ++i) { + if (!names[i]) { + vqs[i] = NULL; + continue; + } + + vqs[i] = ntb_virtio_find_vq(vdev, queue_idx++, callbacks[i], + names[i], ctx ? ctx[i] : false); + if (IS_ERR(vqs[i])) { + dev_err(dev, "Failed to find virtqueue\n"); + return PTR_ERR(vqs[i]); + } + } + + return 0; +} + +/* ntb_virtio_del_vqs - Free memory allocated for virtio virtqueues + * @vdev: Virtio device that communicates with remove vhost device + * + * Free memory allocated for virtio virtqueues. 
+ */ +void ntb_virtio_del_vqs(struct virtio_device *vdev) +{ + struct ntb_virtio_queue *vqueue; + struct virtqueue *vq, *tmp; + struct ntb_virtio *ntb; + struct ntb_dev *ndev; + int index; + + ntb = to_ntb_virtio(vdev); + ndev = ntb->ndev; + + list_for_each_entry_safe(vq, tmp, &vdev->vqs, list) { + index = vq->index; + vqueue = &ntb->vqueue[index]; + cancel_delayed_work_sync(&vqueue->db_handler); + vring_del_virtqueue(vq); + } +} + +/* ntb_virtio_reset - virtio_config_ops to reset the device + * @vdev: Virtio device that communicates with remove vhost device + * + * virtio_config_ops to reset the device. This sends HOST_CMD_RESET + * command to reset the device. + */ +static void ntb_virtio_reset(struct virtio_device *vdev) +{ + struct ntb_virtio *ntb; + struct device *dev; + int ret; + + ntb = to_ntb_virtio(vdev); + dev = ntb->dev; + + ret = ntb_virtio_send_command(ntb, HOST_CMD_RESET); + if (ret) + dev_err(dev, "Failed to reset device\n"); +} + +/* ntb_virtio_get - Copy the device configuration space data to buffer + * from virtio driver + * @vdev: Virtio device that communicates with remove vhost device + * @offset: Offset in the device configuration space + * @buf: Buffer address from virtio driver where configuration space + * data has to be copied + * @len: Length of the data from device configuration space to be copied + * + * Copy the device configuration space data to buffer from virtio driver. 
+ */ +static void ntb_virtio_get(struct virtio_device *vdev, unsigned int offset, + void *buf, unsigned int len) +{ + unsigned int cfg_offset; + struct ntb_virtio *ntb; + struct ntb_dev *ndev; + struct device *dev; + int i, size; + + ntb = to_ntb_virtio(vdev); + ndev = ntb->ndev; + dev = ntb->dev; + + size = len / 4; + for (i = 0; i < size; i++) { + cfg_offset = VHOST_DEVICE_CFG_SPACE + i + offset; + *(u32 *)buf = ntb_spad_read(ndev, cfg_offset); + buf += 4; + } +} + +/* ntb_virtio_set - Copy the device configuration space data from buffer + * provided by virtio driver + * @vdev: Virtio device that communicates with remove vhost device + * @offset: Offset in the device configuration space + * @buf: Buffer address provided by virtio driver which has the configuration + * space data to be copied + * @len: Length of the data from device configuration space to be copied + * + * Copy the device configuration space data from buffer provided by virtio + * driver to the device. + */ +static void ntb_virtio_set(struct virtio_device *vdev, unsigned int offset, + const void *buf, unsigned int len) +{ + struct ntb_virtio *ntb; + struct ntb_dev *ndev; + struct device *dev; + int i, size; + + ntb = to_ntb_virtio(vdev); + ndev = ntb->ndev; + dev = ntb->dev; + + size = len / 4; + for (i = 0; i < size; i++) { + ntb_spad_write(ndev, VHOST_DEVICE_CFG_SPACE + i, *(u32 *)buf); + buf += 4; + } +} + +/* ntb_virtio_alloc_buffer - Allocate buffers from specially reserved memory + * of virtio which can be accessed by both virtio and vhost + * @vdev: Virtio device that communicates with remove vhost device + * @size: The size of the memory that has to be allocated + * + * Allocate buffers from specially reserved memory of virtio which can be + * accessed by both virtio and vhost. 
+ */
+static void *ntb_virtio_alloc_buffer(struct virtio_device *vdev, size_t size)
+{
+	struct ntb_virtio *ntb;
+	struct gen_pool *pool;
+	struct ntb_dev *ndev;
+	struct device *dev;
+	unsigned long addr;
+
+	ntb = to_ntb_virtio(vdev);
+	pool = ntb->gen_pool;
+	ndev = ntb->ndev;
+	dev = ntb->dev;
+
+	addr = gen_pool_alloc(pool, size);
+	if (!addr) {
+		dev_err(dev, "Failed to allocate memory\n");
+		return NULL;
+	}
+
+	return (void *)addr;
+}
+
+/* ntb_virtio_free_buffer - Free buffers allocated using
+ *   ntb_virtio_alloc_buffer()
+ * @vdev: Virtio device that communicates with the remote vhost device
+ * @addr: Address returned by ntb_virtio_alloc_buffer()
+ * @size: The size of the allocated memory
+ *
+ * Free buffers allocated using ntb_virtio_alloc_buffer().
+ */
+static void ntb_virtio_free_buffer(struct virtio_device *vdev, void *addr,
+				   size_t size)
+{
+	struct ntb_virtio *ntb;
+	struct gen_pool *pool;
+	struct ntb_dev *ndev;
+	struct device *dev;
+
+	ntb = to_ntb_virtio(vdev);
+	pool = ntb->gen_pool;
+	ndev = ntb->ndev;
+	dev = ntb->dev;
+
+	gen_pool_free(pool, (unsigned long)addr, size);
+}
+
+static const struct virtio_config_ops ntb_virtio_config_ops = {
+	.get_features = ntb_virtio_get_features,
+	.finalize_features = ntb_virtio_finalize_features,
+	.find_vqs = ntb_virtio_find_vqs,
+	.del_vqs = ntb_virtio_del_vqs,
+	.reset = ntb_virtio_reset,
+	.set_status = ntb_virtio_set_status,
+	.get_status = ntb_virtio_get_status,
+	.get = ntb_virtio_get,
+	.set = ntb_virtio_set,
+	.alloc_buffer = ntb_virtio_alloc_buffer,
+	.free_buffer = ntb_virtio_free_buffer,
+};
+
+/* ntb_virtio_release - Callback function to free device
+ * @dev: Device in virtio_device that has to be freed
+ *
+ * Callback function from the device core invoked to free the device after
+ * all references have been removed. This frees the allocated memory for
+ * struct ntb_virtio.
+ */ +static void ntb_virtio_release(struct device *dev) +{ + struct virtio_device *vdev; + struct ntb_virtio *ntb; + + vdev = dev_to_virtio(dev); + ntb = to_ntb_virtio(vdev); + + kfree(ntb); +} + +/* ntb_virtio_link_cleanup - Cleanup once link to the remote host is lost + * @ntb: NTB virtio device that communicates with remove vhost device + * + * Performs the cleanup that has to be done once the link to the remote host + * is lost or when the NTB virtio driver is removed. + */ +static void ntb_virtio_link_cleanup(struct ntb_virtio *ntb) +{ + dma_addr_t mw_phys_addr; + struct gen_pool *pool; + struct ntb_dev *ndev; + struct pci_dev *pdev; + void *mw_addr; + u64 mw_size; + + ndev = ntb->ndev; + pool = ntb->gen_pool; + pdev = ndev->pdev; + mw_size = ntb->mw_size; + mw_addr = ntb->mw_addr; + mw_phys_addr = ntb->mw_phys_addr; + + ntb_mw_clear_trans(ndev, 0, 0); + gen_pool_destroy(pool); + dma_free_coherent(&pdev->dev, mw_size, mw_addr, mw_phys_addr); +} + +/* ntb_virtio_link_cleanup_work - Cleanup once link to the remote host is lost + * @work: The work_struct holding the ntb_virtio_link_cleanup_work() function + * that is scheduled + * + * Performs the cleanup that has to be done once the link to the remote host + * is lost. This acts as a wrapper to ntb_virtio_link_cleanup() for the cleanup + * operation. + */ +static void ntb_virtio_link_cleanup_work(struct work_struct *work) +{ + struct ntb_virtio *ntb; + + ntb = container_of(work, struct ntb_virtio, link_cleanup); + ntb_virtio_link_cleanup(ntb); +} + +/* ntb_virtio_link_work - Initialization once link to the remote host is + * established + * @work: The work_struct holding the ntb_virtio_link_work() function that is + * scheduled + * + * Performs the NTB virtio initialization that has to be done once the link to + * the remote host is established. Reads the initialization data written by + * vhost driver (to get memory window size accessible by vhost) and reserves + * memory for virtqueues and buffers. 
+ */ +static void ntb_virtio_link_work(struct work_struct *work) +{ + struct virtio_device *vdev; + dma_addr_t mw_phys_addr; + struct ntb_virtio *ntb; + u32 deviceid, vendorid; + struct gen_pool *pool; + struct ntb_dev *ndev; + struct pci_dev *pdev; + struct device *dev; + void *mw_addr; + u64 mw_size; + u32 type; + int ret; + + ntb = container_of(work, struct ntb_virtio, link_work.work); + ndev = ntb->ndev; + pdev = ndev->pdev; + dev = &ndev->dev; + + type = ntb_spad_read(ndev, VIRTIO_TYPE); + if (type != TYPE_VHOST) + goto out; + + mw_size = ntb_peer_spad_read(ndev, NTB_DEF_PEER_IDX, + VHOST_MW0_SIZE_UPPER); + mw_size <<= 32; + mw_size |= ntb_peer_spad_read(ndev, NTB_DEF_PEER_IDX, + VHOST_MW0_SIZE_LOWER); + ntb->mw_size = mw_size; + + mw_addr = dma_alloc_coherent(&pdev->dev, mw_size, &mw_phys_addr, + GFP_KERNEL); + if (!mw_addr) + return; + + pool = gen_pool_create(PAGE_SHIFT, -1); + if (!pool) { + dev_err(dev, "Failed to create gen pool\n"); + goto err_gen_pool; + } + + ret = gen_pool_add_virt(pool, (unsigned long)mw_addr + BUFFER_OFFSET, + mw_phys_addr + BUFFER_OFFSET, + mw_size - BUFFER_OFFSET, -1); + if (ret) { + dev_err(dev, "Failed to add memory to the pool\n"); + goto err_gen_pool_add_virt; + } + + ret = ntb_mw_set_trans(ndev, 0, 0, mw_phys_addr, mw_size); + if (ret) { + dev_err(dev, "Failed to set memory window translation\n"); + goto err_gen_pool_add_virt; + } + + ntb->mw_phys_addr = mw_phys_addr; + ntb->mw_addr = mw_addr; + ntb->mw_size = mw_size; + ntb->gen_pool = pool; + + ntb_peer_spad_write(ndev, NTB_DEF_PEER_IDX, VIRTIO_MW0_LOWER_ADDR, + lower_32_bits(mw_phys_addr)); + ntb_peer_spad_write(ndev, NTB_DEF_PEER_IDX, VIRTIO_MW0_UPPER_ADDR, + upper_32_bits(mw_phys_addr)); + + ntb_peer_spad_write(ndev, NTB_DEF_PEER_IDX, VIRTIO_TYPE, TYPE_VIRTIO); + + deviceid = ntb_peer_spad_read(ndev, NTB_DEF_PEER_IDX, VHOST_DEVICEID); + vendorid = ntb_peer_spad_read(ndev, NTB_DEF_PEER_IDX, VHOST_VENDORID); + + vdev = &ntb->vdev; + vdev->id.device = deviceid; + 
vdev->id.vendor = vendorid;
+	vdev->config = &ntb_virtio_config_ops;
+	vdev->dev.parent = dev;
+	vdev->dev.release = ntb_virtio_release;
+
+	ret = register_virtio_device(vdev);
+	if (ret) {
+		dev_err(dev, "Failed to register vdev: %d\n", ret);
+		goto err_register_virtio;
+	}
+
+	return;
+
+out:
+	if (ntb_link_is_up(ndev, NULL, NULL) == 1)
+		schedule_delayed_work(&ntb->link_work,
+				      msecs_to_jiffies(NTB_LINK_DOWN_TIMEOUT));
+	return;
+
+err_register_virtio:
+	ntb_mw_clear_trans(ndev, 0, 0);
+
+err_gen_pool_add_virt:
+	gen_pool_destroy(pool);
+
+err_gen_pool:
+	dma_free_coherent(&pdev->dev, mw_size, mw_addr, mw_phys_addr);
+}
+
+/* ntb_virtio_event_callback - Callback to link event interrupt
+ * @data: Private data specific to NTB virtio driver
+ *
+ * Callback function from the NTB HW driver invoked once both hosts in the
+ * NTB setup have invoked ntb_link_enable().
+ */
+static void ntb_virtio_event_callback(void *data)
+{
+	struct ntb_virtio *ntb = data;
+
+	if (ntb_link_is_up(ntb->ndev, NULL, NULL) == 1)
+		schedule_delayed_work(&ntb->link_work, 0);
+	else
+		schedule_work(&ntb->link_cleanup);
+}
+
+/* ntb_virtio_vq_db_callback - Callback to doorbell interrupt to handle virtio
+ *   virtqueue work
+ * @data: Private data specific to NTB virtio driver
+ * @vector: Doorbell vector on which interrupt is received
+ *
+ * Callback function from the NTB HW driver invoked whenever the remote vhost
+ * driver has sent a notification using the doorbell. This schedules the work
+ * corresponding to the virtqueue for which a notification has been received.
+ */ +static void ntb_virtio_vq_db_callback(void *data, int vector) +{ + struct ntb_virtio_queue *vqueue; + struct ntb_virtio *ntb; + + ntb = data; + vqueue = &ntb->vqueue[vector - 1]; + + schedule_delayed_work(&vqueue->db_handler, 0); +} + +static const struct ntb_ctx_ops ntb_virtio_ops = { + .link_event = ntb_virtio_event_callback, + .db_event = ntb_virtio_vq_db_callback, +}; + +/* ntb_virtio_probe - Initialize struct ntb_virtio when a new NTB device is + * created + * @client: struct ntb_client * representing the ntb virtio client driver + * @ndev: NTB device created by NTB HW driver + * + * Probe function to initialize struct ntb_virtio when a new NTB device is + * created. + */ +static int ntb_virtio_probe(struct ntb_client *client, struct ntb_dev *ndev) +{ + struct device *dev = &ndev->dev; + struct ntb_virtio *ntb; + int ret; + + ntb = kzalloc(sizeof(*ntb), GFP_KERNEL); + if (!ntb) + return -ENOMEM; + + ntb->ndev = ndev; + ntb->dev = dev; + + mutex_init(&ntb->lock); + INIT_DELAYED_WORK(&ntb->link_work, ntb_virtio_link_work); + INIT_WORK(&ntb->link_cleanup, ntb_virtio_link_cleanup_work); + + ret = ntb_set_ctx(ndev, ntb, &ntb_virtio_ops); + if (ret) { + dev_err(dev, "Failed to set NTB virtio context\n"); + goto err; + } + + ntb_link_enable(ndev, NTB_SPEED_AUTO, NTB_WIDTH_AUTO); + + return 0; + +err: + kfree(ntb); + + return ret; +} + +/* ntb_virtio_free - Free the initializations performed by ntb_virtio_probe() + * @client: struct ntb_client * representing the ntb virtio client driver + * @ndev: NTB device created by NTB HW driver + * + * Free the initializations performed by ntb_virtio_probe(). 
+ */ +static void ntb_virtio_free(struct ntb_client *client, struct ntb_dev *ndev) +{ + struct virtio_device *vdev; + struct ntb_virtio *ntb; + + ntb = ndev->ctx; + vdev = &ntb->vdev; + + ntb_virtio_link_cleanup(ntb); + cancel_work_sync(&ntb->link_cleanup); + cancel_delayed_work_sync(&ntb->link_work); + ntb_link_disable(ndev); + + if (device_is_registered(&vdev->dev)) + unregister_virtio_device(vdev); +} + +static struct ntb_client ntb_virtio_client = { + .ops = { + .probe = ntb_virtio_probe, + .remove = ntb_virtio_free, + }, +}; + +static int __init ntb_virtio_init(void) +{ + int ret; + + ret = ntb_register_client(&ntb_virtio_client); + if (ret) { + pr_err("Failed to register ntb virtio driver --> %d\n", ret); + return ret; + } + + return 0; +} +module_init(ntb_virtio_init); + +static void __exit ntb_virtio_exit(void) +{ + ntb_unregister_client(&ntb_virtio_client); +} +module_exit(ntb_virtio_exit); + +MODULE_DESCRIPTION("NTB VIRTIO Driver"); +MODULE_AUTHOR("Kishon Vijay Abraham I "); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/ntb/ntb_virtio.h b/drivers/ntb/ntb_virtio.h new file mode 100644 index 000000000000..bc68ca38f60b --- /dev/null +++ b/drivers/ntb/ntb_virtio.h @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/** + * NTB VIRTIO/VHOST Header + * + * Copyright (C) 2020 Texas Instruments + * Author: Kishon Vijay Abraham I + */ + +#ifndef __LINUX_NTB_VIRTIO_H +#define __LINUX_NTB_VIRTIO_H + +#define VIRTIO_TYPE 0 +enum virtio_type { + TYPE_VIRTIO = 1, + TYPE_VHOST, +}; + +#define VHOST_VENDORID 1 +#define VHOST_DEVICEID 2 +#define VHOST_FEATURES_UPPER 3 +#define VHOST_FEATURES_LOWER 4 +#define VIRTIO_FEATURES_UPPER 5 +#define VIRTIO_FEATURES_LOWER 6 +#define VHOST_MW0_SIZE_LOWER 7 +#define VHOST_MW0_SIZE_UPPER 8 +#define VIRTIO_MW0_LOWER_ADDR 9 +#define VIRTIO_MW0_UPPER_ADDR 10 +#define VHOST_DEVICE_STATUS 11 +#define VHOST_CONFIG_GENERATION 12 + +#define VHOST_COMMAND 13 +enum host_cmd { + HOST_CMD_NONE, + HOST_CMD_SET_STATUS, + HOST_CMD_FINALIZE_FEATURES, 
+ HOST_CMD_RESET, +}; + +#define VHOST_COMMAND_STATUS 14 +enum host_cmd_status { + HOST_CMD_STATUS_NONE, + HOST_CMD_STATUS_OKAY, + HOST_CMD_STATUS_ERROR, +}; + +#define VHOST_QUEUE_BASE 15 +#define VHOST_QUEUE_NUM_BUFFERS(n) (VHOST_QUEUE_BASE + (n)) + +#define VHOST_DEVICE_CFG_SPACE 23 + +#define NTB_LINK_DOWN_TIMEOUT 10 /* 10 milli-sec */ +#define COMMAND_TIMEOUT 1000 /* 1 sec */ + +#define MAX_VQS 8 + +#endif /* __LINUX_NTB_VIRTIO_H */ From patchwork Thu Jul 2 08:21:42 2020 X-Patchwork-Submitter: Kishon Vijay Abraham I X-Patchwork-Id: 192200 
From: Kishon Vijay Abraham I To: Ohad Ben-Cohen , Bjorn Andersson , Jon Mason , Dave Jiang , Allen Hubbe , Lorenzo Pieralisi , Bjorn Helgaas , "Michael S. Tsirkin" , Jason Wang , Paolo Bonzini , Stefan Hajnoczi , Stefano Garzarella Subject: [RFC PATCH 21/22] NTB: Add a new NTB client driver to implement VHOST functionality Date: Thu, 2 Jul 2020 13:51:42 +0530 Message-ID: <20200702082143.25259-22-kishon@ti.com> In-Reply-To: <20200702082143.25259-1-kishon@ti.com> References: <20200702082143.25259-1-kishon@ti.com> X-Mailing-List: netdev@vger.kernel.org Add a new NTB client driver to implement VHOST functionality. When two hosts are connected using NTB, one host should run the NTB client driver that implements VIRTIO functionality, and the other host should run the NTB client driver that implements VHOST functionality. The VHOST-side driver interfaces with the VHOST layer so that any vhost client driver can exchange data with the remote virtio client driver. 
Signed-off-by: Kishon Vijay Abraham I --- drivers/ntb/Kconfig | 9 + drivers/ntb/Makefile | 1 + drivers/ntb/ntb_vhost.c | 776 ++++++++++++++++++++++++++++++++++++++++ 3 files changed, 786 insertions(+) create mode 100644 drivers/ntb/ntb_vhost.c -- 2.17.1 diff --git a/drivers/ntb/Kconfig b/drivers/ntb/Kconfig index e171b3256f68..7d1b9bb56e71 100644 --- a/drivers/ntb/Kconfig +++ b/drivers/ntb/Kconfig @@ -46,4 +46,13 @@ config NTB_VIRTIO If unsure, say N. +config NTB_VHOST + tristate "NTB VHOST" + help + The NTB vhost driver sits between the NTB HW driver and the vhost + client driver and lets the vhost client driver exchange data with + the remote virtio driver over the NTB hardware. + + If unsure, say N. + endif # NTB diff --git a/drivers/ntb/Makefile b/drivers/ntb/Makefile index d37ab488bcbc..25c9937c91cf 100644 --- a/drivers/ntb/Makefile +++ b/drivers/ntb/Makefile @@ -2,6 +2,7 @@ obj-$(CONFIG_NTB) += ntb.o hw/ test/ obj-$(CONFIG_NTB_TRANSPORT) += ntb_transport.o obj-$(CONFIG_NTB_VIRTIO) += ntb_virtio.o +obj-$(CONFIG_NTB_VHOST) += ntb_vhost.o ntb-y := core.o ntb-$(CONFIG_NTB_MSI) += msi.o diff --git a/drivers/ntb/ntb_vhost.c b/drivers/ntb/ntb_vhost.c new file mode 100644 index 000000000000..1d717bb98d85 --- /dev/null +++ b/drivers/ntb/ntb_vhost.c @@ -0,0 +1,776 @@ +// SPDX-License-Identifier: GPL-2.0 +/** + * NTB Client Driver to implement VHOST functionality + * + * Copyright (C) 2020 Texas Instruments + * Author: Kishon Vijay Abraham I + */ + +#include +#include +#include +#include +#include +#include + +#include "ntb_virtio.h" + +static struct workqueue_struct *kntbvhost_workqueue; + +struct ntb_vhost_queue { + struct delayed_work db_handler; + struct vhost_virtqueue *vq; + void __iomem *vq_addr; +}; + +struct ntb_vhost { + struct ntb_vhost_queue vqueue[MAX_VQS]; + struct work_struct link_cleanup; + struct delayed_work cmd_handler; + struct delayed_work link_work; + resource_size_t peer_mw_size; + struct config_group *group; + phys_addr_t peer_mw_addr; + 
struct vhost_dev vdev; + struct ntb_dev *ndev; + struct vring vring; + struct device *dev; + u64 virtio_addr; + u64 features; +}; + +#define to_ntb_vhost(v) container_of((v), struct ntb_vhost, vdev) + +/* ntb_vhost_finalize_features - Indicate features are finalized to vhost client + * driver + * @ntb: NTB vhost device that communicates with the remote virtio device + * + * Invoked when the remote virtio device sends HOST_CMD_FINALIZE_FEATURES + * command once the feature negotiation is complete. This function sends + * notification to the vhost client driver. + */ +static void ntb_vhost_finalize_features(struct ntb_vhost *ntb) +{ + struct vhost_dev *vdev; + struct ntb_dev *ndev; + u64 features; + + vdev = &ntb->vdev; + ndev = ntb->ndev; + + features = ntb_spad_read(ndev, VIRTIO_FEATURES_UPPER); + features <<= 32; + features |= ntb_spad_read(ndev, VIRTIO_FEATURES_LOWER); + vdev->features = features; + blocking_notifier_call_chain(&vdev->notifier, NOTIFY_FINALIZE_FEATURES, 0); +} + +/* ntb_vhost_cmd_handler - Handle commands from remote NTB virtio driver + * @work: The work_struct holding the ntb_vhost_cmd_handler() function that is + * scheduled + * + * Handle commands from the remote NTB virtio driver and send notifications to + * the vhost client driver. The remote virtio driver sends commands when the + * virtio driver status is updated, when the feature negotiation is complete, + * or when the virtio driver wants to reset the device. 
+ */ +static void ntb_vhost_cmd_handler(struct work_struct *work) +{ + struct vhost_dev *vdev; + struct ntb_vhost *ntb; + struct ntb_dev *ndev; + struct device *dev; + u8 command; + + ntb = container_of(work, struct ntb_vhost, cmd_handler.work); + vdev = &ntb->vdev; + ndev = ntb->ndev; + dev = ntb->dev; + + command = ntb_spad_read(ndev, VHOST_COMMAND); + if (!command) + goto reset_handler; + + ntb_spad_write(ndev, VHOST_COMMAND, 0); + + switch (command) { + case HOST_CMD_SET_STATUS: + blocking_notifier_call_chain(&vdev->notifier, NOTIFY_SET_STATUS, 0); + ntb_spad_write(ndev, VHOST_COMMAND_STATUS, HOST_CMD_STATUS_OKAY); + break; + case HOST_CMD_FINALIZE_FEATURES: + ntb_vhost_finalize_features(ntb); + ntb_spad_write(ndev, VHOST_COMMAND_STATUS, HOST_CMD_STATUS_OKAY); + break; + case HOST_CMD_RESET: + blocking_notifier_call_chain(&vdev->notifier, NOTIFY_RESET, 0); + ntb_spad_write(ndev, VHOST_COMMAND_STATUS, HOST_CMD_STATUS_OKAY); + break; + default: + dev_err(dev, "UNKNOWN command: %d\n", command); + break; + } + +reset_handler: + queue_delayed_work(kntbvhost_workqueue, &ntb->cmd_handler, + msecs_to_jiffies(1)); +} + +/* ntb_vhost_del_vqs - Delete all the vqs associated with the vhost device + * @vdev: Vhost device that communicates with the remote virtio device + * + * Delete all the vqs associated with the vhost device. 
+ */ +static void ntb_vhost_del_vqs(struct vhost_dev *vdev) +{ + struct ntb_vhost_queue *vqueue; + struct vhost_virtqueue *vq; + struct ntb_vhost *ntb; + int i; + + ntb = to_ntb_vhost(vdev); + + for (i = 0; i < vdev->nvqs; i++) { + vq = vdev->vqs[i]; + if (IS_ERR_OR_NULL(vq)) + continue; + + vqueue = &ntb->vqueue[i]; + cancel_delayed_work_sync(&vqueue->db_handler); + iounmap(vqueue->vq_addr); + kfree(vq); + } +} + +/* ntb_vhost_vq_db_work - Handle a doorbell event received for a virtqueue + * @work: The work_struct holding the ntb_vhost_vq_db_work() function for every + * created virtqueue + * + * This function is invoked when the remote virtio driver sends a notification + * to the virtqueue (virtqueue_kick() on the remote virtio driver). This + * function invokes the vhost client driver's virtqueue callback. + */ +static void ntb_vhost_vq_db_work(struct work_struct *work) +{ + struct ntb_vhost_queue *vqueue; + + vqueue = container_of(work, struct ntb_vhost_queue, db_handler.work); + vhost_virtqueue_callback(vqueue->vq); +} + +/* ntb_vhost_notify - Send notification to the remote virtqueue + * @vq: The local vhost virtqueue corresponding to the remote virtio virtqueue + * + * Use NTB doorbell to send notification for the remote virtqueue. + */ +static void ntb_vhost_notify(struct vhost_virtqueue *vq) +{ + struct vhost_dev *vdev; + struct ntb_vhost *ntb; + + vdev = vq->dev; + ntb = to_ntb_vhost(vdev); + + ntb_peer_db_set(ntb->ndev, vq->index); +} + +/* ntb_vhost_create_vq - Create a new vhost virtqueue + * @vdev: Vhost device that communicates with the remote virtio device + * @index: Index of the vhost virtqueue + * @num_bufs: The number of buffers that should be supported by the vhost + * virtqueue (number of descriptors in the vhost virtqueue) + * @callback: Callback function associated with the virtqueue + * + * Create a new vhost virtqueue which can be used by the vhost client driver + * to access the remote virtio device. 
This sets up the local address of the vhost + virtqueue, but it shouldn't be accessed until the remote virtio driver sets + the status to VIRTIO_CONFIG_S_DRIVER_OK. + */ +static struct vhost_virtqueue * +ntb_vhost_create_vq(struct vhost_dev *vdev, int index, unsigned int num_bufs, + void (*callback)(struct vhost_virtqueue *)) +{ + struct ntb_vhost_queue *vqueue; + unsigned int vq_size, offset; + struct vhost_virtqueue *vq; + phys_addr_t vq_phys_addr; + phys_addr_t peer_mw_addr; + struct vringh *vringh; + struct ntb_vhost *ntb; + void __iomem *vq_addr; + struct ntb_dev *ndev; + struct vring *vring; + struct device *dev; + int ret; + + ntb = to_ntb_vhost(vdev); + vqueue = &ntb->vqueue[index]; + peer_mw_addr = ntb->peer_mw_addr; + ndev = ntb->ndev; + dev = ntb->dev; + + vq = kzalloc(sizeof(*vq), GFP_KERNEL); + if (!vq) + return ERR_PTR(-ENOMEM); + + vq->dev = vdev; + vq->callback = callback; + vq->num = num_bufs; + vq->index = index; + vq->notify = ntb_vhost_notify; + vq->type = VHOST_TYPE_MMIO; + + vringh = &vq->vringh; + vring = &vringh->vring; + + ntb_spad_write(ndev, VHOST_QUEUE_NUM_BUFFERS(index), num_bufs); + vq_size = vring_size(num_bufs, VIRTIO_PCI_VRING_ALIGN); + offset = index * vq_size; + if (offset + vq_size > ntb->peer_mw_size) { + dev_err(dev, "Not enough vhost memory for allocating vq\n"); + ret = -ENOMEM; + goto err_out_of_bound; + } + + vq_phys_addr = peer_mw_addr + offset; + vq_addr = ioremap_wc(vq_phys_addr, vq_size); + if (!vq_addr) { + dev_err(dev, "Failed to ioremap virtqueue address\n"); + ret = -ENOMEM; + goto err_out_of_bound; + } + + vqueue->vq = vq; + vqueue->vq_addr = vq_addr; + + vring_init(vring, num_bufs, vq_addr, VIRTIO_PCI_VRING_ALIGN); + ret = vringh_init_mmio(vringh, ntb->features, num_bufs, false, + vring->desc, vring->avail, vring->used); + if (ret) { + dev_err(dev, "Failed to init vringh\n"); + goto err_init_mmio; + } + + INIT_DELAYED_WORK(&vqueue->db_handler, ntb_vhost_vq_db_work); + + return vq; + +err_init_mmio: + iounmap(vq_addr); + 
+err_out_of_bound: + kfree(vq); + + return ERR_PTR(ret); +} + +/* ntb_vhost_create_vqs - Create vhost virtqueues for vhost device + * @vdev: Vhost device that communicates with the remote virtio device + * @nvqs: Number of vhost virtqueues to be created + * @num_bufs: The number of buffers that should be supported by the vhost + * virtqueue (number of descriptors in the vhost virtqueue) + * @vqs: Pointers to all the created vhost virtqueues + * @callbacks: Callback functions associated with each virtqueue + * @names: Names associated with each virtqueue + * + * Create vhost virtqueues for vhost device. This acts as a wrapper to + * ntb_vhost_create_vq() which creates each individual vhost virtqueue. + */ +static int ntb_vhost_create_vqs(struct vhost_dev *vdev, unsigned int nvqs, + unsigned int num_bufs, + struct vhost_virtqueue *vqs[], + vhost_vq_callback_t *callbacks[], + const char * const names[]) +{ + struct ntb_vhost *ntb; + struct device *dev; + int ret, i; + + ntb = to_ntb_vhost(vdev); + dev = ntb->dev; + + for (i = 0; i < nvqs; i++) { + vqs[i] = ntb_vhost_create_vq(vdev, i, num_bufs, callbacks[i]); + if (IS_ERR(vqs[i])) { + ret = PTR_ERR(vqs[i]); + dev_err(dev, "Failed to create virtqueue\n"); + goto err; + } + } + + vdev->nvqs = nvqs; + vdev->vqs = vqs; + + return 0; + +err: + ntb_vhost_del_vqs(vdev); + + return ret; +} + +/* ntb_vhost_write - Write data to buffer provided by remote virtio driver + * @vdev: Vhost device that communicates with the remote virtio device + * @dst: Buffer address in the remote device provided by the remote virtio + * driver + * @src: Buffer address in the local device provided by the vhost client driver + * @len: Length of the data to be copied from @src to @dst + * + * Write data to buffer provided by remote virtio driver from buffer provided + * by vhost client driver. 
+ */ +static int ntb_vhost_write(struct vhost_dev *vdev, u64 dst, void *src, int len) +{ + phys_addr_t peer_mw_addr, phys_addr; + struct ntb_vhost *ntb; + unsigned int offset; + struct device *dev; + u64 virtio_addr; + void *addr; + + ntb = to_ntb_vhost(vdev); + dev = ntb->dev; + + peer_mw_addr = ntb->peer_mw_addr; + virtio_addr = ntb->virtio_addr; + + offset = dst - virtio_addr; + if (offset + len > ntb->peer_mw_size) { + dev_err(dev, "Overflow of vhost memory\n"); + return -EINVAL; + } + + phys_addr = peer_mw_addr + offset; + addr = ioremap_wc(phys_addr, len); + if (!addr) { + dev_err(dev, "Failed to ioremap vhost address\n"); + return -ENOMEM; + } + + memcpy_toio(addr, src, len); + iounmap(addr); + + return 0; +} + +/* ntb_vhost_read - Read data from buffers provided by remote virtio driver + * @vdev: Vhost device that communicates with the remote virtio device + * @dst: Buffer address in the local device provided by the vhost client driver + * @src: Buffer address in the remote device provided by the remote virtio + * driver + * @len: Length of the data to be copied from @src to @dst + * + * Read data from buffers provided by remote virtio driver to address provided + * by vhost client driver. 
+ */ +static int ntb_vhost_read(struct vhost_dev *vdev, void *dst, u64 src, int len) +{ + phys_addr_t peer_mw_addr, phys_addr; + struct ntb_vhost *ntb; + unsigned int offset; + struct device *dev; + u64 virtio_addr; + void *addr; + + ntb = to_ntb_vhost(vdev); + dev = ntb->dev; + + peer_mw_addr = ntb->peer_mw_addr; + virtio_addr = ntb->virtio_addr; + + offset = src - virtio_addr; + if (offset + len > ntb->peer_mw_size) { + dev_err(dev, "Overflow of vhost memory\n"); + return -EINVAL; + } + + phys_addr = peer_mw_addr + offset; + addr = ioremap_wc(phys_addr, len); + if (!addr) { + dev_err(dev, "Failed to ioremap vhost address\n"); + return -ENOMEM; + } + + memcpy_fromio(dst, addr, len); + iounmap(addr); + + return 0; +} + +/* ntb_vhost_release - Callback function to free device + * @dev: Device in vhost_dev that has to be freed + * + * Callback function from device core invoked to free the device after + * all references have been removed. This frees the allocated memory for + * struct ntb_vhost. + */ +static void ntb_vhost_release(struct device *dev) +{ + struct vhost_dev *vdev; + struct ntb_vhost *ntb; + + vdev = to_vhost_dev(dev); + ntb = to_ntb_vhost(vdev); + + kfree(ntb); +} + +/* ntb_vhost_set_features - vhost_config_ops to set vhost device features + * @vdev: Vhost device that communicates with the remote virtio device + * @features: Features supported by the vhost client driver + * + * vhost_config_ops invoked by the vhost client driver to set vhost device + * features. 
+ */ +static int ntb_vhost_set_features(struct vhost_dev *vdev, u64 features) +{ + struct ntb_vhost *ntb; + struct ntb_dev *ndev; + + ntb = to_ntb_vhost(vdev); + ndev = ntb->ndev; + + ntb_spad_write(ndev, VHOST_FEATURES_LOWER, lower_32_bits(features)); + ntb_spad_write(ndev, VHOST_FEATURES_UPPER, upper_32_bits(features)); + ntb->features = features; + + return 0; +} + +/* ntb_vhost_set_status - vhost_config_ops to set vhost device status + * @vdev: Vhost device that communicates with the remote virtio device + * @status: Vhost device status configured by vhost client driver + * + * vhost_config_ops invoked by the vhost client driver to set vhost device + * status. + */ +static int ntb_vhost_set_status(struct vhost_dev *vdev, u8 status) +{ + struct ntb_vhost *ntb; + struct ntb_dev *ndev; + + ntb = to_ntb_vhost(vdev); + ndev = ntb->ndev; + + ntb_spad_write(ndev, VHOST_DEVICE_STATUS, status); + + return 0; +} + +/* ntb_vhost_get_status - vhost_config_ops to get vhost device status + * @vdev: Vhost device that communicates with the remote virtio device + * + * vhost_config_ops invoked by the vhost client driver to get vhost device + * status set by the remote virtio driver. + */ +static u8 ntb_vhost_get_status(struct vhost_dev *vdev) +{ + struct ntb_vhost *ntb; + struct ntb_dev *ndev; + + ntb = to_ntb_vhost(vdev); + ndev = ntb->ndev; + + return ntb_spad_read(ndev, VHOST_DEVICE_STATUS); +} + +static const struct vhost_config_ops ops = { + .create_vqs = ntb_vhost_create_vqs, + .del_vqs = ntb_vhost_del_vqs, + .write = ntb_vhost_write, + .read = ntb_vhost_read, + .set_features = ntb_vhost_set_features, + .set_status = ntb_vhost_set_status, + .get_status = ntb_vhost_get_status, +}; + +/* ntb_vhost_link_cleanup - Cleanup once link to the remote host is lost + * @ntb: NTB vhost device that communicates with the remote virtio device + * + * Performs the cleanup that has to be done once the link to the remote host + * is lost or when the NTB vhost driver is removed. 
+ */ +static void ntb_vhost_link_cleanup(struct ntb_vhost *ntb) +{ + cancel_delayed_work_sync(&ntb->link_work); +} + +/* ntb_vhost_link_cleanup_work - Cleanup once link to the remote host is lost + * @work: The work_struct holding the ntb_vhost_link_cleanup_work() function + * that is scheduled + * + * Performs the cleanup that has to be done once the link to the remote host + * is lost. This acts as a wrapper to ntb_vhost_link_cleanup() for the cleanup + * operation. + */ +static void ntb_vhost_link_cleanup_work(struct work_struct *work) +{ + struct ntb_vhost *ntb; + + ntb = container_of(work, struct ntb_vhost, link_cleanup); + ntb_vhost_link_cleanup(ntb); +} + +/* ntb_vhost_link_work - Initialization once link to the remote host is + * established + * @work: The work_struct holding the ntb_vhost_link_work() function that is + * scheduled + * + * Performs the NTB vhost initialization that has to be done once the link to + * the remote host is established. Initializes the scratchpad registers with + * data required for the remote NTB virtio driver to establish communication + * with this vhost driver. + */ +static void ntb_vhost_link_work(struct work_struct *work) +{ + struct vhost_dev *vdev; + struct ntb_vhost *ntb; + struct ntb_dev *ndev; + struct device *dev; + u64 virtio_addr; + u32 type; + + ntb = container_of(work, struct ntb_vhost, link_work.work); + vdev = &ntb->vdev; + ndev = ntb->ndev; + dev = ntb->dev; + + /* + * Device will be registered when vhost client driver is linked to + * vhost transport device driver in configfs. + */ + if (!device_is_registered(&vdev->dev)) + goto out; + + /* + * This is unlikely to happen if "vhost" configfs is used for + * registering vhost device. 
+ */ + if (vdev->id.device == 0 && vdev->id.vendor == 0) { + dev_err(dev, "vhost device is registered without valid ID\n"); + goto out; + } + + ntb_spad_write(ndev, VHOST_VENDORID, vdev->id.vendor); + ntb_spad_write(ndev, VHOST_DEVICEID, vdev->id.device); + ntb_peer_spad_write(ndev, NTB_DEF_PEER_IDX, VIRTIO_TYPE, TYPE_VHOST); + + type = ntb_spad_read(ndev, VIRTIO_TYPE); + if (type != TYPE_VIRTIO) + goto out; + + virtio_addr = ntb_spad_read(ndev, VIRTIO_MW0_UPPER_ADDR); + virtio_addr <<= 32; + virtio_addr |= ntb_spad_read(ndev, VIRTIO_MW0_LOWER_ADDR); + ntb->virtio_addr = virtio_addr; + + INIT_DELAYED_WORK(&ntb->cmd_handler, ntb_vhost_cmd_handler); + queue_delayed_work(kntbvhost_workqueue, &ntb->cmd_handler, 0); + + return; + +out: + if (ntb_link_is_up(ndev, NULL, NULL) == 1) + schedule_delayed_work(&ntb->link_work, + msecs_to_jiffies(NTB_LINK_DOWN_TIMEOUT)); +} + +/* ntb_vhost_event_callback - Callback to link event interrupt + * @data: Private data specific to NTB vhost driver + * + * Callback function from NTB HW driver whenever both the hosts in the NTB + * setup have invoked ntb_link_enable(). + */ +static void ntb_vhost_event_callback(void *data) +{ + struct ntb_vhost *ntb; + struct ntb_dev *ndev; + + ntb = data; + ndev = ntb->ndev; + + if (ntb_link_is_up(ntb->ndev, NULL, NULL) == 1) + schedule_delayed_work(&ntb->link_work, 0); + else + schedule_work(&ntb->link_cleanup); +} + +/* ntb_vhost_vq_db_callback - Callback to doorbell interrupt to handle vhost + * virtqueue work + * @data: Private data specific to NTB vhost driver + * @vector: Doorbell vector on which interrupt is received + * + * Callback function from NTB HW driver whenever the remote virtio driver has + * sent a notification using doorbell. This schedules work corresponding to the + * virtqueue for which notification has been received. 
+ */ +static void ntb_vhost_vq_db_callback(void *data, int vector) +{ + struct ntb_vhost_queue *vqueue; + struct ntb_vhost *ntb; + + ntb = data; + vqueue = &ntb->vqueue[vector - 1]; + + schedule_delayed_work(&vqueue->db_handler, 0); +} + +static const struct ntb_ctx_ops ntb_vhost_ops = { + .link_event = ntb_vhost_event_callback, + .db_event = ntb_vhost_vq_db_callback, +}; + +/* ntb_vhost_configure_mw - Get memory window address and size + * @ntb: NTB vhost device that communicates with the remote virtio device + * + * Get the address and size of memory window 0 and write the size to the + * self scratchpad registers so that the remote virtio driver can read it. + * + * TODO: Add support for multiple memory windows. + */ +static int ntb_vhost_configure_mw(struct ntb_vhost *ntb) +{ + struct ntb_dev *ndev; + struct device *dev; + int ret; + + ndev = ntb->ndev; + dev = ntb->dev; + + ret = ntb_peer_mw_get_addr(ndev, 0, &ntb->peer_mw_addr, &ntb->peer_mw_size); + if (ret) { + dev_err(dev, "Failed to get memory window address\n"); + return ret; + } + + ntb_spad_write(ndev, VHOST_MW0_SIZE_LOWER, lower_32_bits(ntb->peer_mw_size)); + ntb_spad_write(ndev, VHOST_MW0_SIZE_UPPER, upper_32_bits(ntb->peer_mw_size)); + + return 0; +} + +/* ntb_vhost_probe - Initialize struct ntb_vhost when a new NTB device is + * created + * @client: struct ntb_client * representing the ntb vhost client driver + * @ndev: NTB device created by NTB HW driver + * + * Probe function to initialize struct ntb_vhost when a new NTB device is + * created. Also get the supported MW0 size and MW0 address and write the MW0 + * size to the self scratchpad for the remote NTB virtio driver to read. 
+ */ +static int ntb_vhost_probe(struct ntb_client *client, struct ntb_dev *ndev) +{ + struct device *dev = &ndev->dev; + struct config_group *group; + struct vhost_dev *vdev; + struct ntb_vhost *ntb; + int ret; + + ntb = kzalloc(sizeof(*ntb), GFP_KERNEL); + if (!ntb) + return -ENOMEM; + + ntb->ndev = ndev; + ntb->dev = dev; + + ret = ntb_vhost_configure_mw(ntb); + if (ret) { + dev_err(dev, "Failed to configure memory window\n"); + goto err; + } + + ret = ntb_set_ctx(ndev, ntb, &ntb_vhost_ops); + if (ret) { + dev_err(dev, "Failed to set NTB vhost context\n"); + goto err; + } + + vdev = &ntb->vdev; + vdev->dev.parent = dev; + vdev->dev.release = ntb_vhost_release; + vdev->ops = &ops; + + group = vhost_cfs_add_device_item(vdev); + if (IS_ERR(group)) { + dev_err(dev, "Failed to add configfs entry for vhost device\n"); + ret = PTR_ERR(group); + goto err; + } + + ntb->group = group; + + INIT_DELAYED_WORK(&ntb->link_work, ntb_vhost_link_work); + INIT_WORK(&ntb->link_cleanup, ntb_vhost_link_cleanup_work); + + ntb_link_enable(ndev, NTB_SPEED_AUTO, NTB_WIDTH_AUTO); + + return 0; + +err: + kfree(ntb); + + return ret; +} + +/* ntb_vhost_free - Free the initializations performed by ntb_vhost_probe() + * @client: struct ntb_client * representing the ntb vhost client driver + * @ndev: NTB device created by NTB HW driver + * + * Free the initializations performed by ntb_vhost_probe(). 
+ */ +static void ntb_vhost_free(struct ntb_client *client, struct ntb_dev *ndev) +{ + struct config_group *group; + struct vhost_dev *vdev; + struct ntb_vhost *ntb; + + ntb = ndev->ctx; + vdev = &ntb->vdev; + group = ntb->group; + + ntb_vhost_link_cleanup(ntb); + ntb_link_disable(ndev); + ntb_vhost_del_vqs(vdev); + vhost_cfs_remove_device_item(group); + if (device_is_registered(&vdev->dev)) + vhost_unregister_device(vdev); +} + +static struct ntb_client ntb_vhost_client = { + .ops = { + .probe = ntb_vhost_probe, + .remove = ntb_vhost_free, + }, +}; + +static int __init ntb_vhost_init(void) +{ + int ret; + + kntbvhost_workqueue = alloc_workqueue("kntbvhost", WQ_MEM_RECLAIM | + WQ_HIGHPRI, 0); + if (!kntbvhost_workqueue) { + pr_err("Failed to allocate kntbvhost_workqueue\n"); + return -ENOMEM; + } + + ret = ntb_register_client(&ntb_vhost_client); + if (ret) { + pr_err("Failed to register ntb vhost driver --> %d\n", ret); + destroy_workqueue(kntbvhost_workqueue); + return ret; + } + + return 0; +} +module_init(ntb_vhost_init); + +static void __exit ntb_vhost_exit(void) +{ + ntb_unregister_client(&ntb_vhost_client); + destroy_workqueue(kntbvhost_workqueue); +} +module_exit(ntb_vhost_exit); + +MODULE_DESCRIPTION("NTB VHOST Driver") ; +MODULE_AUTHOR("Kishon Vijay Abraham I "); +MODULE_LICENSE("GPL v2"); From patchwork Thu Jul 2 08:21:43 2020 X-Patchwork-Submitter: Kishon Vijay Abraham I X-Patchwork-Id: 192203 
From: Kishon Vijay Abraham I <kishon@ti.com>
To: Ohad Ben-Cohen, Bjorn Andersson, Jon Mason, Dave Jiang, Allen Hubbe,
	Lorenzo Pieralisi, Bjorn Helgaas, "Michael S. Tsirkin", Jason Wang,
	Paolo Bonzini, Stefan Hajnoczi, Stefano Garzarella
Subject: [RFC PATCH 22/22] NTB: Describe ntb_virtio and ntb_vhost client in the documentation
Date: Thu, 2 Jul 2020 13:51:43 +0530
Message-ID: <20200702082143.25259-23-kishon@ti.com>
In-Reply-To: <20200702082143.25259-1-kishon@ti.com>
References: <20200702082143.25259-1-kishon@ti.com>
List-ID: netdev@vger.kernel.org

Add a blurb in Documentation/driver-api/ntb.rst to describe the ntb_virtio
and ntb_vhost clients.

Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
---
 Documentation/driver-api/ntb.rst | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/Documentation/driver-api/ntb.rst b/Documentation/driver-api/ntb.rst
index 87d1372da879..f84b81625397 100644
--- a/Documentation/driver-api/ntb.rst
+++ b/Documentation/driver-api/ntb.rst
@@ -227,6 +227,17 @@ test client is interacted with through the debugfs filesystem:
     specified peer.  That peer's interrupt's occurrence file should be
     incremented.
 
+NTB Vhost Client (ntb\_vhost) and NTB Virtio Client (ntb\_virtio)
+-----------------------------------------------------------------
+
+When two hosts are connected via NTB, one of the hosts should use the NTB
+Vhost client and the other host should use the NTB Virtio client. The NTB
+Vhost client interfaces with the Linux vhost framework and allows it to be
+used with any vhost client driver. The NTB Virtio client interfaces with
+the Linux virtio framework and allows it to be used with any virtio client
+driver. The vhost client driver and virtio client driver create a logical
+link to exchange data with each other.
+
 NTB Hardware Drivers
 ====================