From patchwork Thu Oct 30 14:22:26 2014
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 39835
From: Stefano Stabellini
Date: Thu, 30 Oct 2014 14:22:26 +0000
Message-ID: <1414678947-8607-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
 Don Slutz, Stefano Stabellini
Subject: [Xen-devel] [PULL 1/2] xen-hvm.c: Add support for Xen access to vmport

From: Don Slutz

This adds synchronisation of the 6 vcpu registers (only 32 bits of
them) that vmport.c needs between Xen and QEMU. This avoids a 2nd and
3rd exchange between QEMU and Xen to fetch and put these 6 vcpu
registers used by the code in vmport.c and vmmouse.c.

The registers are passed in the new shared page provided by
HVM_PARAM_VMPORT_REGS_PFN.

Add a new array to XenIOState that allows selection of current_cpu by
vcpu id. Pass XenIOState to handle_ioreq(). Add the new routines
regs_to_cpu(), regs_from_cpu(), and handle_vmport_ioreq().
Signed-off-by: Don Slutz
Reviewed-by: Paul Durrant
Signed-off-by: Stefano Stabellini
---
 include/hw/xen/xen_common.h |   15 ++++
 xen-hvm.c                   |  108 +++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 118 insertions(+), 5 deletions(-)

diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index 07731b9..95612a4 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -164,4 +164,19 @@ void destroy_hvm_domain(bool reboot);
 /* shutdown/destroy current domain because of an error */
 void xen_shutdown_fatal_error(const char *fmt, ...) GCC_FMT_ATTR(1, 2);
 
+#ifdef HVM_PARAM_VMPORT_REGS_PFN
+static inline int xen_get_vmport_regs_pfn(XenXC xc, domid_t dom,
+                                          unsigned long *vmport_regs_pfn)
+{
+    return xc_get_hvm_param(xc, dom, HVM_PARAM_VMPORT_REGS_PFN,
+                            vmport_regs_pfn);
+}
+#else
+static inline int xen_get_vmport_regs_pfn(XenXC xc, domid_t dom,
+                                          unsigned long *vmport_regs_pfn)
+{
+    return -ENOSYS;
+}
+#endif
+
 #endif /* QEMU_HW_XEN_COMMON_H */
diff --git a/xen-hvm.c b/xen-hvm.c
index 05e522c..21f1cbb 100644
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -41,6 +41,29 @@ static MemoryRegion *framebuffer;
 static bool xen_in_migration;
 
 /* Compatibility with older version */
+
+/* This allows QEMU to build on a system that has Xen 4.5 or earlier
+ * installed. This is here (not in hw/xen/xen_common.h) because
+ * xen/hvm/ioreq.h needs to be included before this block and
+ * hw/xen/xen_common.h needs to be included before xen/hvm/ioreq.h
+ */
+#ifndef IOREQ_TYPE_VMWARE_PORT
+#define IOREQ_TYPE_VMWARE_PORT 3
+struct vmware_regs {
+    uint32_t esi;
+    uint32_t edi;
+    uint32_t ebx;
+    uint32_t ecx;
+    uint32_t edx;
+};
+typedef struct vmware_regs vmware_regs_t;
+
+struct shared_vmport_iopage {
+    struct vmware_regs vcpu_vmport_regs[1];
+};
+typedef struct shared_vmport_iopage shared_vmport_iopage_t;
+#endif
+
 #if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a
 static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
 {
@@ -79,8 +102,10 @@ typedef struct XenPhysmap {
 
 typedef struct XenIOState {
     shared_iopage_t *shared_page;
+    shared_vmport_iopage_t *shared_vmport_page;
     buffered_iopage_t *buffered_io_page;
     QEMUTimer *buffered_io_timer;
+    CPUState **cpu_by_vcpu_id;
     /* the evtchn port for polling the notification, */
     evtchn_port_t *ioreq_local_port;
     /* evtchn local port for buffered io */
@@ -773,7 +798,50 @@ static void cpu_ioreq_move(ioreq_t *req)
     }
 }
 
-static void handle_ioreq(ioreq_t *req)
+static void regs_to_cpu(vmware_regs_t *vmport_regs, ioreq_t *req)
+{
+    X86CPU *cpu;
+    CPUX86State *env;
+
+    cpu = X86_CPU(current_cpu);
+    env = &cpu->env;
+    env->regs[R_EAX] = req->data;
+    env->regs[R_EBX] = vmport_regs->ebx;
+    env->regs[R_ECX] = vmport_regs->ecx;
+    env->regs[R_EDX] = vmport_regs->edx;
+    env->regs[R_ESI] = vmport_regs->esi;
+    env->regs[R_EDI] = vmport_regs->edi;
+}
+
+static void regs_from_cpu(vmware_regs_t *vmport_regs)
+{
+    X86CPU *cpu = X86_CPU(current_cpu);
+    CPUX86State *env = &cpu->env;
+
+    vmport_regs->ebx = env->regs[R_EBX];
+    vmport_regs->ecx = env->regs[R_ECX];
+    vmport_regs->edx = env->regs[R_EDX];
+    vmport_regs->esi = env->regs[R_ESI];
+    vmport_regs->edi = env->regs[R_EDI];
+}
+
+static void handle_vmport_ioreq(XenIOState *state, ioreq_t *req)
+{
+    vmware_regs_t *vmport_regs;
+
+    assert(state->shared_vmport_page);
+    vmport_regs =
+        &state->shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
+    QEMU_BUILD_BUG_ON(sizeof(*req) < sizeof(*vmport_regs));
+
+    current_cpu = state->cpu_by_vcpu_id[state->send_vcpu];
+    regs_to_cpu(vmport_regs, req);
+    cpu_ioreq_pio(req);
+    regs_from_cpu(vmport_regs);
+    current_cpu = NULL;
+}
+
+static void handle_ioreq(XenIOState *state, ioreq_t *req)
 {
     if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
             (req->size < sizeof (target_ulong))) {
@@ -787,6 +855,9 @@ static void handle_ioreq(ioreq_t *req)
     case IOREQ_TYPE_COPY:
         cpu_ioreq_move(req);
         break;
+    case IOREQ_TYPE_VMWARE_PORT:
+        handle_vmport_ioreq(state, req);
+        break;
     case IOREQ_TYPE_TIMEOFFSET:
         break;
     case IOREQ_TYPE_INVALIDATE:
@@ -828,7 +899,7 @@ static int handle_buffered_iopage(XenIOState *state)
             req.data |= ((uint64_t)buf_req->data) << 32;
         }
 
-        handle_ioreq(&req);
+        handle_ioreq(state, &req);
 
         xen_mb();
         state->buffered_io_page->read_pointer += qw ? 2 : 1;
@@ -857,14 +928,16 @@ static void cpu_handle_ioreq(void *opaque)
 
     handle_buffered_iopage(state);
     if (req) {
-        handle_ioreq(req);
+        handle_ioreq(state, req);
 
         if (req->state != STATE_IOREQ_INPROCESS) {
             fprintf(stderr, "Badness in I/O request ... not in service?!: "
                     "%x, ptr: %x, port: %"PRIx64", "
-                    "data: %"PRIx64", count: %" FMT_ioreq_size ", size: %" FMT_ioreq_size "\n",
+                    "data: %"PRIx64", count: %" FMT_ioreq_size
+                    ", size: %" FMT_ioreq_size
+                    ", type: %"FMT_ioreq_size"\n",
                     req->state, req->data_is_ptr, req->addr,
-                    req->data, req->count, req->size);
+                    req->data, req->count, req->size, req->type);
             destroy_hvm_domain(false);
             return;
         }
@@ -904,6 +977,14 @@ static void xen_main_loop_prepare(XenIOState *state)
                                                  state);
 
     if (evtchn_fd != -1) {
+        CPUState *cpu_state;
+
+        DPRINTF("%s: Init cpu_by_vcpu_id\n", __func__);
+        CPU_FOREACH(cpu_state) {
+            DPRINTF("%s: cpu_by_vcpu_id[%d]=%p\n",
+                    __func__, cpu_state->cpu_index, cpu_state);
+            state->cpu_by_vcpu_id[cpu_state->cpu_index] = cpu_state;
+        }
         qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state);
     }
 }
@@ -1020,6 +1101,20 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
                  errno, xen_xc);
     }
 
+    rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
+    if (!rc) {
+        DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
+        state->shared_vmport_page =
+            xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE,
+                                 PROT_READ|PROT_WRITE, ioreq_pfn);
+        if (state->shared_vmport_page == NULL) {
+            hw_error("map shared vmport IO page returned error %d handle="
+                     XC_INTERFACE_FMT, errno, xen_xc);
+        }
+    } else if (rc != -ENOSYS) {
+        hw_error("get vmport regs pfn returned error %d, rc=%d", errno, rc);
+    }
+
     xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_PFN, &ioreq_pfn);
     DPRINTF("buffered io page at pfn %lx\n", ioreq_pfn);
     state->buffered_io_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE,
@@ -1028,6 +1123,9 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
         hw_error("map buffered IO page returned error %d", errno);
     }
 
+    /* Note: cpus is empty at this point in init */
+    state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
+
    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));

    /* FIXME: how about if we overflow the page here? */
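
---

[Editor's note] For readers unfamiliar with the protocol this patch forwards:
vmport.c emulates the VMware "backdoor", which a guest invokes by executing an
IN on I/O port 0x5658 with a magic constant in EAX; the command and its
arguments travel in the remaining general-purpose registers, which is why all
six GPRs must be shuttled between Xen and QEMU here. Below is a minimal
guest-side sketch; it is illustrative only and not part of this patch. The
port, magic, and command-10 (get version) values are the well-known backdoor
constants handled by QEMU's vmport.c; it assumes a GCC-style compiler on x86
and I/O privilege inside a guest with vmport enabled (elsewhere the IN returns
garbage or faults).

#include <stdint.h>
#include <stdio.h>

#define VMPORT_MAGIC  0x564D5868u  /* "VMXh" */
#define VMPORT_PORT   0x5658u      /* "VX" */
#define VMPORT_CMD_GETVERSION 10u  /* the usual presence/version probe */

static inline uint32_t vmport_getversion(void)
{
    uint32_t eax = VMPORT_MAGIC;
    uint32_t ebx = ~VMPORT_MAGIC;          /* overwritten on success */
    uint32_t ecx = VMPORT_CMD_GETVERSION;  /* command number */
    uint32_t edx = VMPORT_PORT;            /* backdoor I/O port */

    /* The IN itself is trapped by the hypervisor; the register values
     * before and after it are exactly what regs_to_cpu()/regs_from_cpu()
     * copy through the shared vmport page in this patch. */
    asm volatile ("inl %%dx, %%eax"
                  : "+a" (eax), "+b" (ebx), "+c" (ecx), "+d" (edx));
    return eax;  /* version; ebx holds VMPORT_MAGIC if the call was handled */
}

int main(void)
{
    printf("vmport version: %u\n", vmport_getversion());
    return 0;
}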