From patchwork Tue May 19 12:25:24 2020
X-Patchwork-Submitter: Raphael Norwitz
X-Patchwork-Id: 282401
From: Raphael Norwitz
Subject: [PATCH v3 01/10] Add helper to populate vhost-user message regions
Date: Tue, 19 May 2020 12:25:24 +0000 (UTC)
Message-Id: <1588473683-27067-2-git-send-email-raphael.norwitz@nutanix.com>
In-Reply-To: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
Cc: raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz

When setting vhost-user memory tables, memory region descriptors must
be copied from the vhost_dev struct to the vhost-user message. To avoid
duplicating code in setting the memory tables, we should use a helper
to populate this field. This change adds this helper.

Signed-off-by: Raphael Norwitz
---
 hw/virtio/vhost-user.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index ec21e8f..ee6d1ed 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -407,6 +407,15 @@ static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
     return 0;
 }
 
+static void vhost_user_fill_msg_region(VhostUserMemoryRegion *dst,
+                                       struct vhost_memory_region *src)
+{
+    assert(src != NULL && dst != NULL);
+    dst->userspace_addr = src->userspace_addr;
+    dst->memory_size = src->memory_size;
+    dst->guest_phys_addr = src->guest_phys_addr;
+}
+
 static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u,
                                              struct vhost_dev *dev,
                                              VhostUserMsg *msg,
@@ -441,12 +450,8 @@ static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u,
                 error_report("Failed preparing vhost-user memory table msg");
                 return -1;
             }
-            msg->payload.memory.regions[*fd_num].userspace_addr =
-                reg->userspace_addr;
-            msg->payload.memory.regions[*fd_num].memory_size =
-                reg->memory_size;
-            msg->payload.memory.regions[*fd_num].guest_phys_addr =
-                reg->guest_phys_addr;
+            vhost_user_fill_msg_region(&msg->payload.memory.regions[*fd_num],
+                                       reg);
             msg->payload.memory.regions[*fd_num].mmap_offset = offset;
             fds[(*fd_num)++] = fd;
         } else if (track_ramblocks) {
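
For readers outside the QEMU tree, the pattern this patch factors out can be
seen in a minimal standalone sketch. The struct definitions and names below
are simplified stand-ins for struct vhost_memory_region and
VhostUserMemoryRegion, not the real QEMU types; only the copy logic mirrors
the patch.

/* Standalone sketch; struct layouts and names are illustrative stand-ins. */
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

struct src_region {            /* stand-in for struct vhost_memory_region */
    uint64_t guest_phys_addr;
    uint64_t memory_size;
    uint64_t userspace_addr;
};

struct msg_region {            /* stand-in for VhostUserMemoryRegion */
    uint64_t guest_phys_addr;
    uint64_t memory_size;
    uint64_t userspace_addr;
    uint64_t mmap_offset;
};

/* One place that knows which descriptor fields go on the wire. */
static void fill_msg_region(struct msg_region *dst, const struct src_region *src)
{
    assert(src != NULL && dst != NULL);
    dst->guest_phys_addr = src->guest_phys_addr;
    dst->memory_size = src->memory_size;
    dst->userspace_addr = src->userspace_addr;
    /* mmap_offset is filled in separately by the caller, as in the patch. */
}

int main(void)
{
    struct src_region src = { 0x100000000ULL, 0x40000000ULL, 0x7f0000000000ULL };
    struct msg_region dst = { 0 };

    fill_msg_region(&dst, &src);
    dst.mmap_offset = 0;
    printf("gpa=0x%" PRIx64 " size=0x%" PRIx64 "\n",
           dst.guest_phys_addr, dst.memory_size);
    return 0;
}

Centralising the copy this way means a later caller (such as the per-region
messages added further on in the series) cannot forget one of the fields.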
From patchwork Tue May 19 12:25:27 2020
X-Patchwork-Submitter: Raphael Norwitz
X-Patchwork-Id: 282399
From: Raphael Norwitz
Subject: [PATCH v3 02/10] Add vhost-user helper to get MemoryRegion data
Date: Tue, 19 May 2020 12:25:27 +0000 (UTC)
Message-Id: <1588473683-27067-3-git-send-email-raphael.norwitz@nutanix.com>
In-Reply-To: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
Cc: raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz

When setting the memory tables, qemu uses a memory region's userspace
address to look up the region's MemoryRegion struct. Among other things,
the MemoryRegion contains the region's offset and associated file
descriptor, all of which need to be sent to the backend.

With VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS, this logic will be
needed in multiple places, so before feature support is added it
should be moved to a helper function.

This helper is also used to simplify the vhost_user_can_merge()
function.

Signed-off-by: Raphael Norwitz
---
 hw/virtio/vhost-user.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index ee6d1ed..dacf5bb 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -407,6 +407,18 @@ static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
     return 0;
 }
 
+static MemoryRegion *vhost_user_get_mr_data(uint64_t addr, ram_addr_t *offset,
+                                            int *fd)
+{
+    MemoryRegion *mr;
+
+    assert((uintptr_t)addr == addr);
+    mr = memory_region_from_host((void *)(uintptr_t)addr, offset);
+    *fd = memory_region_get_fd(mr);
+
+    return mr;
+}
+
 static void vhost_user_fill_msg_region(VhostUserMemoryRegion *dst,
                                        struct vhost_memory_region *src)
 {
@@ -432,10 +444,7 @@ static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u,
 
     for (i = 0; i < dev->mem->nregions; ++i) {
         reg = dev->mem->regions + i;
-        assert((uintptr_t)reg->userspace_addr == reg->userspace_addr);
-        mr = memory_region_from_host((void *)(uintptr_t)reg->userspace_addr,
-                                     &offset);
-        fd = memory_region_get_fd(mr);
+        mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd);
         if (fd > 0) {
             if (track_ramblocks) {
                 assert(*fd_num < VHOST_MEMORY_MAX_NREGIONS);
@@ -1550,13 +1559,9 @@ static bool vhost_user_can_merge(struct vhost_dev *dev,
 {
     ram_addr_t offset;
     int mfd, rfd;
-    MemoryRegion *mr;
-
-    mr = memory_region_from_host((void *)(uintptr_t)start1, &offset);
-    mfd = memory_region_get_fd(mr);
-    mr = memory_region_from_host((void *)(uintptr_t)start2, &offset);
-    rfd = memory_region_get_fd(mr);
+    (void)vhost_user_get_mr_data(start1, &offset, &mfd);
+    (void)vhost_user_get_mr_data(start2, &offset, &rfd);
 
     return mfd == rfd;
 }
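
The shape of the helper (look up the owning region for a host address, hand
back its offset and backing fd through out-parameters) can be illustrated
outside QEMU. Everything in this sketch, including region_table, mem_region
and get_mr_data, is a hypothetical stand-in rather than QEMU's MemoryRegion
API; only the calling convention mirrors the patch.

/* Standalone sketch of the lookup-helper pattern; names are illustrative. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct mem_region {
    uint64_t host_start;   /* start of the region in this process */
    uint64_t size;
    int fd;                /* backing file descriptor, -1 if none */
};

static struct mem_region region_table[] = {
    { 0x7f0000000000ULL, 0x40000000ULL, 7 },
    { 0x7f8000000000ULL, 0x10000000ULL, 9 },
};

/* Mirror of the patch's shape: return the region, hand back offset and fd. */
static struct mem_region *get_mr_data(uint64_t addr, uint64_t *offset, int *fd)
{
    for (size_t i = 0; i < sizeof(region_table) / sizeof(region_table[0]); i++) {
        struct mem_region *mr = &region_table[i];
        if (addr >= mr->host_start && addr < mr->host_start + mr->size) {
            *offset = addr - mr->host_start;
            *fd = mr->fd;
            return mr;
        }
    }
    *fd = -1;
    return NULL;
}

int main(void)
{
    uint64_t offset;
    int fd;

    if (get_mr_data(0x7f0000001000ULL, &offset, &fd)) {
        printf("offset=0x%llx fd=%d\n", (unsigned long long)offset, fd);
    }
    return 0;
}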
From patchwork Tue May 19 12:25:37 2020
X-Patchwork-Submitter: Raphael Norwitz
X-Patchwork-Id: 282400
From: Raphael Norwitz
Subject: [PATCH v3 05/10] Lift max memory slots limit imposed by vhost-user
Date: Tue, 19 May 2020 12:25:37 +0000 (UTC)
Message-Id: <1588473683-27067-6-git-send-email-raphael.norwitz@nutanix.com>
In-Reply-To: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
Cc: Peter Turschmid, raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz

Historically, sending all memory regions to vhost-user backends in a
single message imposed a limitation on the number of times memory could
be hot-added to a VM with a vhost-user device. Now that backends which
support the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS feature receive
memory regions individually, we no longer need to impose this
limitation on devices which support this feature.

With this change, VMs with a vhost-user device which supports the
VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS feature can support a
configurable number of memory slots, up to the maximum allowed by the
target platform.

Existing backends which do not support
VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS are unaffected.

Signed-off-by: Raphael Norwitz
Signed-off-by: Peter Turschmid
Suggested-by: Mike Cui
---
 docs/interop/vhost-user.rst |  7 +++---
 hw/virtio/vhost-user.c      | 56 ++++++++++++++++++++++++++++++---------------
 2 files changed, 40 insertions(+), 23 deletions(-)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 037eefa..688b7c6 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -1273,10 +1273,9 @@ Master message types
   feature has been successfully negotiated, this message is submitted
   by master to the slave. The slave should return the message with a
   u64 payload containing the maximum number of memory slots for
-  QEMU to expose to the guest. At this point, the value returned
-  by the backend will be capped at the maximum number of ram slots
-  which can be supported by vhost-user. Currently that limit is set
-  at VHOST_USER_MAX_RAM_SLOTS = 8.
+  QEMU to expose to the guest. The value returned by the backend
+  will be capped at the maximum number of ram slots which can be
+  supported by the target platform.
 
 ``VHOST_USER_ADD_MEM_REG``
   :id: 37
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 4af8476..270a96d 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -35,11 +35,29 @@
 #include 
 #endif
 
-#define VHOST_MEMORY_MAX_NREGIONS    8
+#define VHOST_MEMORY_BASELINE_NREGIONS    8
 #define VHOST_USER_F_PROTOCOL_FEATURES 30
 #define VHOST_USER_SLAVE_MAX_FDS     8
 
 /*
+ * Set maximum number of RAM slots supported to
+ * the maximum number supported by the target
+ * hardware plaform.
+ */
+#if defined(TARGET_X86) || defined(TARGET_X86_64) || \
+    defined(TARGET_ARM) || defined(TARGET_ARM_64)
+#include "hw/acpi/acpi.h"
+#define VHOST_USER_MAX_RAM_SLOTS ACPI_MAX_RAM_SLOTS
+
+#elif defined(TARGET_PPC) || defined(TARGET_PPC_64)
+#include "hw/ppc/spapr.h"
+#define VHOST_USER_MAX_RAM_SLOTS SPAPR_MAX_RAM_SLOTS
+
+#else
+#define VHOST_USER_MAX_RAM_SLOTS 512
+#endif
+
+/*
  * Maximum size of virtio device config space
  */
 #define VHOST_USER_MAX_CONFIG_SIZE 256
@@ -127,7 +145,7 @@ typedef struct VhostUserMemoryRegion {
 typedef struct VhostUserMemory {
     uint32_t nregions;
     uint32_t padding;
-    VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
+    VhostUserMemoryRegion regions[VHOST_MEMORY_BASELINE_NREGIONS];
 } VhostUserMemory;
 
 typedef struct VhostUserMemRegMsg {
@@ -222,7 +240,7 @@ struct vhost_user {
     int slave_fd;
     NotifierWithReturn postcopy_notifier;
     struct PostCopyFD  postcopy_fd;
-    uint64_t           postcopy_client_bases[VHOST_MEMORY_MAX_NREGIONS];
+    uint64_t           postcopy_client_bases[VHOST_USER_MAX_RAM_SLOTS];
     /* Length of the region_rb and region_rb_offset arrays */
     size_t             region_rb_len;
     /* RAMBlock associated with a given region */
@@ -237,7 +255,7 @@ struct vhost_user {
 
     /* Our current regions */
     int num_shadow_regions;
-    struct vhost_memory_region shadow_regions[VHOST_MEMORY_MAX_NREGIONS];
+    struct vhost_memory_region shadow_regions[VHOST_USER_MAX_RAM_SLOTS];
 };
 
 struct scrub_regions {
@@ -392,7 +410,7 @@ int vhost_user_gpu_set_socket(struct vhost_dev *dev, int fd)
 static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
                                    struct vhost_log *log)
 {
-    int fds[VHOST_MEMORY_MAX_NREGIONS];
+    int fds[VHOST_USER_MAX_RAM_SLOTS];
     size_t fd_num = 0;
     bool shmfd = virtio_has_feature(dev->protocol_features,
                                     VHOST_USER_PROTOCOL_F_LOG_SHMFD);
@@ -469,7 +487,7 @@ static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u,
         mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd);
         if (fd > 0) {
             if (track_ramblocks) {
-                assert(*fd_num < VHOST_MEMORY_MAX_NREGIONS);
+                assert(*fd_num < VHOST_MEMORY_BASELINE_NREGIONS);
                 trace_vhost_user_set_mem_table_withfd(*fd_num, mr->name,
                                                       reg->memory_size,
                                                       reg->guest_phys_addr,
@@ -477,7 +495,7 @@ static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u,
                                                       offset);
                 u->region_rb_offset[i] = offset;
                 u->region_rb[i] = mr->ram_block;
-            } else if (*fd_num == VHOST_MEMORY_MAX_NREGIONS) {
+            } else if (*fd_num == VHOST_MEMORY_BASELINE_NREGIONS) {
                 error_report("Failed preparing vhost-user memory table msg");
                 return -1;
             }
@@ -522,7 +540,7 @@ static void scrub_shadow_regions(struct vhost_dev *dev,
                                  bool track_ramblocks)
 {
     struct vhost_user *u = dev->opaque;
-    bool found[VHOST_MEMORY_MAX_NREGIONS] = {};
+    bool found[VHOST_USER_MAX_RAM_SLOTS] = {};
     struct vhost_memory_region *reg, *shadow_reg;
     int i, j, fd, add_idx = 0, rm_idx = 0, fd_num = 0;
     ram_addr_t offset;
@@ -773,9 +791,9 @@ static int vhost_user_add_remove_regions(struct vhost_dev *dev,
                                          bool track_ramblocks)
 {
     struct vhost_user *u = dev->opaque;
-    struct scrub_regions add_reg[VHOST_MEMORY_MAX_NREGIONS];
-    struct scrub_regions rem_reg[VHOST_MEMORY_MAX_NREGIONS];
-    uint64_t shadow_pcb[VHOST_MEMORY_MAX_NREGIONS] = {};
+    struct scrub_regions add_reg[VHOST_USER_MAX_RAM_SLOTS];
+    struct scrub_regions rem_reg[VHOST_USER_MAX_RAM_SLOTS];
+    uint64_t shadow_pcb[VHOST_USER_MAX_RAM_SLOTS] = {};
     int nr_add_reg, nr_rem_reg;
 
     msg->hdr.size = sizeof(msg->payload.mem_reg.padding) +
@@ -799,7 +817,7 @@ static int vhost_user_add_remove_regions(struct vhost_dev *dev,
 
     if (track_ramblocks) {
         memcpy(u->postcopy_client_bases, shadow_pcb,
-               sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS);
+               sizeof(uint64_t) * VHOST_USER_MAX_RAM_SLOTS);
         /*
          * Now we've registered this with the postcopy code, we ack to the
          * client, because now we're in the position to be able to deal with
@@ -819,7 +837,7 @@ static int vhost_user_add_remove_regions(struct vhost_dev *dev,
 err:
     if (track_ramblocks) {
         memcpy(u->postcopy_client_bases, shadow_pcb,
-               sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS);
+               sizeof(uint64_t) * VHOST_USER_MAX_RAM_SLOTS);
     }
 
     return -1;
@@ -831,7 +849,7 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev,
                                              bool config_mem_slots)
 {
     struct vhost_user *u = dev->opaque;
-    int fds[VHOST_MEMORY_MAX_NREGIONS];
+    int fds[VHOST_MEMORY_BASELINE_NREGIONS];
     size_t fd_num = 0;
     VhostUserMsg msg_reply;
     int region_i, msg_i;
@@ -889,7 +907,7 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev,
     }
 
     memset(u->postcopy_client_bases, 0,
-           sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS);
+           sizeof(uint64_t) * VHOST_USER_MAX_RAM_SLOTS);
 
     /*
      * They're in the same order as the regions that were sent
@@ -938,7 +956,7 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev,
                                     struct vhost_memory *mem)
 {
     struct vhost_user *u = dev->opaque;
-    int fds[VHOST_MEMORY_MAX_NREGIONS];
+    int fds[VHOST_MEMORY_BASELINE_NREGIONS];
     size_t fd_num = 0;
     bool do_postcopy = u->postcopy_listen && u->postcopy_fd.handler;
     bool reply_supported = virtio_has_feature(dev->protocol_features,
@@ -1145,7 +1163,7 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
                                 VhostUserRequest request,
                                 struct vhost_vring_file *file)
 {
-    int fds[VHOST_MEMORY_MAX_NREGIONS];
+    int fds[VHOST_USER_MAX_RAM_SLOTS];
     size_t fd_num = 0;
     VhostUserMsg msg = {
         .hdr.request = request,
@@ -1841,7 +1859,7 @@ static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque)
     /* get max memory regions if backend supports configurable RAM slots */
     if (!virtio_has_feature(dev->protocol_features,
                             VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS)) {
-        u->user->memory_slots = VHOST_MEMORY_MAX_NREGIONS;
+        u->user->memory_slots = VHOST_MEMORY_BASELINE_NREGIONS;
     } else {
         err = vhost_user_get_max_memslots(dev, &ram_slots);
         if (err < 0) {
@@ -1856,7 +1874,7 @@ static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque)
             return -1;
         }
 
-        u->user->memory_slots = MIN(ram_slots, VHOST_MEMORY_MAX_NREGIONS);
+        u->user->memory_slots = MIN(ram_slots, VHOST_USER_MAX_RAM_SLOTS);
     }
 }
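
The negotiation this patch introduces reduces to a simple clamp: keep the
legacy 8-region limit for old backends, otherwise take the backend's answer
but never exceed the platform cap. The sketch below is standalone and
illustrative; the macro names and values (EXAMPLE_PLATFORM_ACPI, the 256/512
caps) are assumptions, not the real TARGET_* limits.

/* Standalone sketch of the slot-count negotiation; values are illustrative. */
#include <stdint.h>
#include <stdio.h>

/* Pick a per-"platform" cap at compile time (stand-in for the TARGET_* checks). */
#if defined(EXAMPLE_PLATFORM_ACPI)
#define MAX_RAM_SLOTS 256      /* e.g. an ACPI-style limit */
#else
#define MAX_RAM_SLOTS 512      /* conservative default, as in the patch */
#endif

#define BASELINE_NREGIONS 8    /* legacy single-message limit */

#define MIN(a, b) ((a) < (b) ? (a) : (b))

static uint64_t negotiated_memory_slots(int backend_supports_mem_slots,
                                        uint64_t backend_reported_slots)
{
    if (!backend_supports_mem_slots) {
        /* Old-style backend: keep the historical 8-region limit. */
        return BASELINE_NREGIONS;
    }
    /* New-style backend: trust its answer, but never exceed the platform cap. */
    return MIN(backend_reported_slots, MAX_RAM_SLOTS);
}

int main(void)
{
    printf("legacy backend:  %llu slots\n",
           (unsigned long long)negotiated_memory_slots(0, 32));
    printf("slotted backend: %llu slots\n",
           (unsigned long long)negotiated_memory_slots(1, 32));
    return 0;
}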
From patchwork Tue May 19 12:25:46 2020
X-Patchwork-Submitter: Raphael Norwitz
X-Patchwork-Id: 282397
From: Raphael Norwitz
Subject: [PATCH v3 08/10] Support adding individual regions in libvhost-user
Date: Tue, 19 May 2020 12:25:46 +0000 (UTC)
Message-Id: <1588473683-27067-9-git-send-email-raphael.norwitz@nutanix.com>
In-Reply-To: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
Cc: raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz

When the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS feature is enabled,
qemu will transmit memory regions to a backend individually using the
new message VHOST_USER_ADD_MEM_REG. With this change, vhost-user
backends built with libvhost-user can map in new memory regions when
VHOST_USER_ADD_MEM_REG messages are received.

Qemu only sends VHOST_USER_ADD_MEM_REG messages when the
VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS feature is negotiated, and
since the feature is not yet supported in libvhost-user, this new
functionality is not yet used.

Signed-off-by: Raphael Norwitz
---
 contrib/libvhost-user/libvhost-user.c | 103 ++++++++++++++++++++++++++++++++++
 contrib/libvhost-user/libvhost-user.h |   7 +++
 2 files changed, 110 insertions(+)

diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index 9f039b7..2c2a8d9 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -138,6 +138,7 @@ vu_request_to_string(unsigned int req)
         REQ(VHOST_USER_GPU_SET_SOCKET),
         REQ(VHOST_USER_VRING_KICK),
         REQ(VHOST_USER_GET_MAX_MEM_SLOTS),
+        REQ(VHOST_USER_ADD_MEM_REG),
         REQ(VHOST_USER_MAX),
     };
 #undef REQ
@@ -663,6 +664,106 @@ generate_faults(VuDev *dev) {
 }
 
 static bool
+vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
+    int i;
+    bool track_ramblocks = dev->postcopy_listening;
+    VhostUserMemoryRegion *msg_region = &vmsg->payload.memreg.region;
+    VuDevRegion *dev_region = &dev->regions[dev->nregions];
+    void *mmap_addr;
+
+    /*
+     * If we are in postcopy mode and we receive a u64 payload with a 0 value
+     * we know all the postcopy client bases have been recieved, and we
+     * should start generating faults.
+     */
+    if (track_ramblocks &&
+        vmsg->size == sizeof(vmsg->payload.u64) &&
+        vmsg->payload.u64 == 0) {
+        (void)generate_faults(dev);
+        return false;
+    }
+
+    DPRINT("Adding region: %d\n", dev->nregions);
+    DPRINT("    guest_phys_addr: 0x%016"PRIx64"\n",
+           msg_region->guest_phys_addr);
+    DPRINT("    memory_size:     0x%016"PRIx64"\n",
+           msg_region->memory_size);
+    DPRINT("    userspace_addr   0x%016"PRIx64"\n",
+           msg_region->userspace_addr);
+    DPRINT("    mmap_offset      0x%016"PRIx64"\n",
+           msg_region->mmap_offset);
+
+    dev_region->gpa = msg_region->guest_phys_addr;
+    dev_region->size = msg_region->memory_size;
+    dev_region->qva = msg_region->userspace_addr;
+    dev_region->mmap_offset = msg_region->mmap_offset;
+
+    /*
+     * We don't use offset argument of mmap() since the
+     * mapped address has to be page aligned, and we use huge
+     * pages.
+     */
+    if (track_ramblocks) {
+        /*
+         * In postcopy we're using PROT_NONE here to catch anyone
+         * accessing it before we userfault.
+         */
+        mmap_addr = mmap(0, dev_region->size + dev_region->mmap_offset,
+                         PROT_NONE, MAP_SHARED,
+                         vmsg->fds[0], 0);
+    } else {
+        mmap_addr = mmap(0, dev_region->size + dev_region->mmap_offset,
+                         PROT_READ | PROT_WRITE, MAP_SHARED, vmsg->fds[0],
+                         0);
+    }
+
+    if (mmap_addr == MAP_FAILED) {
+        vu_panic(dev, "region mmap error: %s", strerror(errno));
+    } else {
+        dev_region->mmap_addr = (uint64_t)(uintptr_t)mmap_addr;
+        DPRINT("    mmap_addr:       0x%016"PRIx64"\n",
+               dev_region->mmap_addr);
+    }
+
+    close(vmsg->fds[0]);
+
+    if (track_ramblocks) {
+        /*
+         * Return the address to QEMU so that it can translate the ufd
+         * fault addresses back.
+         */
+        msg_region->userspace_addr = (uintptr_t)(mmap_addr +
+                                                 dev_region->mmap_offset);
+
+        /* Send the message back to qemu with the addresses filled in. */
+        vmsg->fd_num = 0;
+        if (!vu_send_reply(dev, dev->sock, vmsg)) {
+            vu_panic(dev, "failed to respond to add-mem-region for postcopy");
+            return false;
+        }
+
+        DPRINT("Successfully added new region in postcopy\n");
+        dev->nregions++;
+        return false;
+
+    } else {
+        for (i = 0; i < dev->max_queues; i++) {
+            if (dev->vq[i].vring.desc) {
+                if (map_ring(dev, &dev->vq[i])) {
+                    vu_panic(dev, "remapping queue %d for new memory region",
+                             i);
+                }
+            }
+        }
+
+        DPRINT("Successfully added new region\n");
+        dev->nregions++;
+        vmsg_set_reply_u64(vmsg, 0);
+        return true;
+    }
+}
+
+static bool
 vu_set_mem_table_exec_postcopy(VuDev *dev, VhostUserMsg *vmsg)
 {
     int i;
@@ -1668,6 +1769,8 @@ vu_process_message(VuDev *dev, VhostUserMsg *vmsg)
         return vu_handle_vring_kick(dev, vmsg);
     case VHOST_USER_GET_MAX_MEM_SLOTS:
         return vu_handle_get_max_memslots(dev, vmsg);
+    case VHOST_USER_ADD_MEM_REG:
+        return vu_add_mem_reg(dev, vmsg);
     default:
         vmsg_close_fds(vmsg);
         vu_panic(dev, "Unhandled request: %d", vmsg->request);
diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index 88ef40d..60ef7fd 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -98,6 +98,7 @@ typedef enum VhostUserRequest {
     VHOST_USER_GPU_SET_SOCKET = 33,
     VHOST_USER_VRING_KICK = 35,
     VHOST_USER_GET_MAX_MEM_SLOTS = 36,
+    VHOST_USER_ADD_MEM_REG = 37,
     VHOST_USER_MAX
 } VhostUserRequest;
 
@@ -124,6 +125,11 @@ typedef struct VhostUserMemory {
     VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
 } VhostUserMemory;
 
+typedef struct VhostUserMemRegMsg {
+    uint32_t padding;
+    VhostUserMemoryRegion region;
+} VhostUserMemRegMsg;
+
 typedef struct VhostUserLog {
     uint64_t mmap_size;
     uint64_t mmap_offset;
@@ -176,6 +182,7 @@ typedef struct VhostUserMsg {
         struct vhost_vring_state state;
         struct vhost_vring_addr addr;
         VhostUserMemory memory;
+        VhostUserMemRegMsg memreg;
         VhostUserLog log;
         VhostUserConfig config;
         VhostUserVringArea area;
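
The mapping scheme used when a region is added (map size + mmap_offset from
file offset 0, then add mmap_offset to reach the region, rather than passing
the offset to mmap()) can be demonstrated outside libvhost-user. This is a
minimal standalone sketch: the tmpfile, the 4 KiB offset and 8 KiB size are
illustrative assumptions standing in for the fd and geometry QEMU would send.

/* Standalone sketch of the region-mapping scheme; file and sizes are made up. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    uint64_t mmap_offset = 4096;      /* offset of the region within the fd */
    uint64_t size = 8192;             /* size of the guest-visible region */
    FILE *f = tmpfile();              /* stand-in for the fd sent by QEMU */

    if (!f || ftruncate(fileno(f), mmap_offset + size) != 0) {
        perror("setup");
        return EXIT_FAILURE;
    }

    /* Map from offset 0 so the mapping itself stays page aligned regardless
     * of mmap_offset (the patch does this because huge pages may be in use). */
    void *mmap_addr = mmap(NULL, size + mmap_offset, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fileno(f), 0);
    if (mmap_addr == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    /* The usable start of the region is the mapping base plus mmap_offset. */
    uint8_t *region_start = (uint8_t *)mmap_addr + mmap_offset;
    memset(region_start, 0, size);
    printf("mapped %llu bytes at %p (region starts at %p)\n",
           (unsigned long long)(size + mmap_offset), mmap_addr,
           (void *)region_start);

    munmap(mmap_addr, size + mmap_offset);
    fclose(f);
    return EXIT_SUCCESS;
}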
From patchwork Tue May 19 12:25:56 2020
X-Patchwork-Submitter: Raphael Norwitz
X-Patchwork-Id: 282398
From: Raphael Norwitz
Subject: [PATCH v3 10/10] Lift max ram slots limit in libvhost-user
Date: Tue, 19 May 2020 12:25:56 +0000 (UTC)
Message-Id: <1588473683-27067-11-git-send-email-raphael.norwitz@nutanix.com>
In-Reply-To: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
Cc: raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz

Historically, VMs with vhost-user devices could hot-add memory a
maximum of 8 times. Now that the
VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS protocol feature has been
added, VMs with vhost-user backends which support this new feature can
support a configurable number of ram slots up to the maximum supported
by the target platform.
This change adds VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support for
backends built with libvhost-user, and increases the number of
supported ram slots from 8 to 32.

Memory hot-add, hot-remove and postcopy migration were tested with
the vhost-user-bridge sample.

Signed-off-by: Raphael Norwitz
---
 contrib/libvhost-user/libvhost-user.c | 17 +++++++++--------
 contrib/libvhost-user/libvhost-user.h | 15 +++++++++++----
 2 files changed, 20 insertions(+), 12 deletions(-)

diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index 635cfb1..eeb6899 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -269,7 +269,7 @@ have_userfault(void)
 static bool
 vu_message_read(VuDev *dev, int conn_fd, VhostUserMsg *vmsg)
 {
-    char control[CMSG_SPACE(VHOST_MEMORY_MAX_NREGIONS * sizeof(int))] = { };
+    char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS * sizeof(int))] = {};
     struct iovec iov = {
         .iov_base = (char *)vmsg,
         .iov_len = VHOST_USER_HDR_SIZE,
@@ -340,7 +340,7 @@ vu_message_write(VuDev *dev, int conn_fd, VhostUserMsg *vmsg)
 {
     int rc;
     uint8_t *p = (uint8_t *)vmsg;
-    char control[CMSG_SPACE(VHOST_MEMORY_MAX_NREGIONS * sizeof(int))] = { };
+    char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS * sizeof(int))] = {};
     struct iovec iov = {
         .iov_base = (char *)vmsg,
         .iov_len = VHOST_USER_HDR_SIZE,
@@ -353,7 +353,7 @@ vu_message_write(VuDev *dev, int conn_fd, VhostUserMsg *vmsg)
     struct cmsghdr *cmsg;
 
     memset(control, 0, sizeof(control));
-    assert(vmsg->fd_num <= VHOST_MEMORY_MAX_NREGIONS);
+    assert(vmsg->fd_num <= VHOST_MEMORY_BASELINE_NREGIONS);
     if (vmsg->fd_num > 0) {
         size_t fdsize = vmsg->fd_num * sizeof(int);
         msg.msg_controllen = CMSG_SPACE(fdsize);
@@ -780,7 +780,7 @@ static bool
 vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     int i, j;
     bool found = false;
-    VuDevRegion shadow_regions[VHOST_MEMORY_MAX_NREGIONS] = {};
+    VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
     VhostUserMemoryRegion *msg_region = &vmsg->payload.memreg.region;
 
     DPRINT("Removing region:\n");
@@ -813,7 +813,7 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
 
     if (found) {
         memcpy(dev->regions, shadow_regions,
-               sizeof(VuDevRegion) * VHOST_MEMORY_MAX_NREGIONS);
+               sizeof(VuDevRegion) * VHOST_USER_MAX_RAM_SLOTS);
         DPRINT("Successfully removed a region\n");
         dev->nregions--;
         vmsg_set_reply_u64(vmsg, 0);
@@ -1394,7 +1394,8 @@ vu_get_protocol_features_exec(VuDev *dev, VhostUserMsg *vmsg)
                         1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ |
                         1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER |
                         1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD |
-                        1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK;
+                        1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK |
+                        1ULL << VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS;
 
     if (have_userfault()) {
         features |= 1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT;
@@ -1732,14 +1733,14 @@ static bool
 vu_handle_get_max_memslots(VuDev *dev, VhostUserMsg *vmsg)
 {
     vmsg->flags = VHOST_USER_REPLY_MASK | VHOST_USER_VERSION;
     vmsg->size  = sizeof(vmsg->payload.u64);
-    vmsg->payload.u64 = VHOST_MEMORY_MAX_NREGIONS;
+    vmsg->payload.u64 = VHOST_USER_MAX_RAM_SLOTS;
     vmsg->fd_num = 0;
 
     if (!vu_message_write(dev, dev->sock, vmsg)) {
         vu_panic(dev, "Failed to send max ram slots: %s\n", strerror(errno));
     }
 
-    DPRINT("u64: 0x%016"PRIx64"\n", (uint64_t) VHOST_MEMORY_MAX_NREGIONS);
+    DPRINT("u64: 0x%016"PRIx64"\n", (uint64_t) VHOST_USER_MAX_RAM_SLOTS);
 
     return false;
 }
diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index f843971..844c37c 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -28,7 +28,13 @@
 
 #define VIRTQUEUE_MAX_SIZE 1024
 
-#define VHOST_MEMORY_MAX_NREGIONS 8
+#define VHOST_MEMORY_BASELINE_NREGIONS 8
+
+/*
+ * Set a reasonable maximum number of ram slots, which will be supported by
+ * any architecture.
+ */
+#define VHOST_USER_MAX_RAM_SLOTS 32
 
 typedef enum VhostSetConfigType {
     VHOST_SET_CONFIG_TYPE_MASTER = 0,
@@ -55,6 +61,7 @@ enum VhostUserProtocolFeature {
     VHOST_USER_PROTOCOL_F_HOST_NOTIFIER = 11,
     VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD = 12,
     VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS = 14,
+    VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15,
 
     VHOST_USER_PROTOCOL_F_MAX
 };
@@ -123,7 +130,7 @@ typedef struct VhostUserMemoryRegion {
 typedef struct VhostUserMemory {
     uint32_t nregions;
     uint32_t padding;
-    VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
+    VhostUserMemoryRegion regions[VHOST_MEMORY_BASELINE_NREGIONS];
 } VhostUserMemory;
 
 typedef struct VhostUserMemRegMsg {
@@ -190,7 +197,7 @@ typedef struct VhostUserMsg {
         VhostUserInflight inflight;
     } payload;
 
-    int fds[VHOST_MEMORY_MAX_NREGIONS];
+    int fds[VHOST_MEMORY_BASELINE_NREGIONS];
     int fd_num;
     uint8_t *data;
 } VU_PACKED VhostUserMsg;
@@ -368,7 +375,7 @@ typedef struct VuDevInflightInfo {
 struct VuDev {
     int sock;
     uint32_t nregions;
-    VuDevRegion regions[VHOST_MEMORY_MAX_NREGIONS];
+    VuDevRegion regions[VHOST_USER_MAX_RAM_SLOTS];
     VuVirtq *vq;
     VuDevInflightInfo inflight_info;
     int log_call_fd;
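
The mechanism that makes the larger slot count usable is ordinary protocol
feature negotiation: the backend advertises bit 15 and QEMU only switches to
per-region messages once the bit is negotiated. The sketch below is a
standalone illustration; the macro names are stand-ins, with only the bit
numbers (3 for REPLY_ACK, 15 for CONFIGURE_MEM_SLOTS) taken from the series.

/* Standalone sketch of protocol feature-bit negotiation; names are stand-ins. */
#include <stdint.h>
#include <stdio.h>

#define PROTOCOL_F_REPLY_ACK            3
#define PROTOCOL_F_CONFIGURE_MEM_SLOTS  15   /* new bit added by the series */

static uint64_t backend_protocol_features(void)
{
    /* The backend advertises every protocol feature it implements. */
    return (1ULL << PROTOCOL_F_REPLY_ACK) |
           (1ULL << PROTOCOL_F_CONFIGURE_MEM_SLOTS);
}

static int has_feature(uint64_t features, unsigned bit)
{
    return (features & (1ULL << bit)) != 0;
}

int main(void)
{
    uint64_t features = backend_protocol_features();

    /* Only when both sides negotiate the bit does QEMU switch from one big
     * SET_MEM_TABLE message to individual ADD/REM_MEM_REG messages. */
    printf("configure-mem-slots negotiated: %d\n",
           has_feature(features, PROTOCOL_F_CONFIGURE_MEM_SLOTS));
    return 0;
}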