From patchwork Tue Aug 19 12:01:52 2014
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 35585
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: Peter Maydell, Alex Bennee, Christoffer Dall, Ard Biesheuvel
Subject: [PATCH 2/2] arm/arm64: KVM: Support KVM_CAP_READONLY_MEM
Date: Tue, 19 Aug 2014 14:01:52 +0200
Message-Id: <1408449712-14903-2-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1408449712-14903-1-git-send-email-christoffer.dall@linaro.org>
References: <1408449712-14903-1-git-send-email-christoffer.dall@linaro.org>

When userspace loads code and data into a read-only memory region, KVM
needs to be able to handle this on arm and arm64. Specifically, this is
used when running code directly from a read-only flash device; the
common scenario is a UEFI blob loaded with the -bios option in QEMU.
Note that the MMIO exit on writes to read-only memory is ABI and can be
used to emulate block-erase style flash devices.

Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/include/uapi/asm/kvm.h   |  1 +
 arch/arm/kvm/arm.c                |  1 +
 arch/arm/kvm/mmu.c                | 15 ++++++++-------
 arch/arm64/include/uapi/asm/kvm.h |  1 +
 4 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index e6ebdd3..51257fd 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -25,6 +25,7 @@
 
 #define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
+#define __KVM_HAVE_READONLY_MEM
 
 #define KVM_REG_SIZE(id)						\
 	(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index a99e0cd..3ab3e60 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -188,6 +188,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ONE_REG:
 	case KVM_CAP_ARM_PSCI:
 	case KVM_CAP_ARM_PSCI_0_2:
+	case KVM_CAP_READONLY_MEM:
 		r = 1;
 		break;
 	case KVM_CAP_COALESCED_MMIO:
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 16e7994..dcbe01e 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -747,14 +747,13 @@ static bool transparent_hugepage_adjust(pfn_t *pfnp, phys_addr_t *ipap)
 }
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
-			  struct kvm_memory_slot *memslot,
+			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  unsigned long fault_status)
 {
 	int ret;
 	bool write_fault, writable, hugetlb = false, force_pte = false;
 	unsigned long mmu_seq;
 	gfn_t gfn = fault_ipa >> PAGE_SHIFT;
-	unsigned long hva = gfn_to_hva(vcpu->kvm, gfn);
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
@@ -863,7 +862,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	unsigned long fault_status;
 	phys_addr_t fault_ipa;
 	struct kvm_memory_slot *memslot;
-	bool is_iabt;
+	unsigned long hva;
+	bool is_iabt, write_fault, writable;
 	gfn_t gfn;
 	int ret, idx;
 
@@ -884,7 +884,10 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	idx = srcu_read_lock(&vcpu->kvm->srcu);
 
 	gfn = fault_ipa >> PAGE_SHIFT;
-	if (!kvm_is_visible_gfn(vcpu->kvm, gfn)) {
+	memslot = gfn_to_memslot(vcpu->kvm, gfn);
+	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
+	write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
+	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
 		if (is_iabt) {
 			/* Prefetch Abort on I/O address */
 			kvm_inject_pabt(vcpu, kvm_vcpu_get_hfar(vcpu));
@@ -910,9 +913,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		goto out_unlock;
 	}
 
-	memslot = gfn_to_memslot(vcpu->kvm, gfn);
-
-	ret = user_mem_abort(vcpu, fault_ipa, memslot, fault_status);
+	ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
 	if (ret == 0)
 		ret = 1;
 out_unlock:
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index e633ff8..f4ec5a6 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -37,6 +37,7 @@
 
 #define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
+#define __KVM_HAVE_READONLY_MEM
 
 #define KVM_REG_SIZE(id)						\
 	(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
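
[For illustration only, not part of the patch: a minimal userspace sketch of how
a VMM might exercise this capability once the kernel advertises
KVM_CAP_READONLY_MEM as added here. FLASH_BASE, FLASH_SIZE and
handle_flash_write() are hypothetical placeholders; the ioctls, the
KVM_MEM_READONLY flag and the kvm_run MMIO exit fields are the existing KVM
userspace API.]

/*
 * Illustrative sketch only -- not part of the patch.  Shows how a VMM
 * might use KVM_CAP_READONLY_MEM on arm/arm64.  FLASH_BASE, FLASH_SIZE
 * and handle_flash_write() are hypothetical placeholders.
 */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#define FLASH_BASE 0x0UL	 /* guest-physical base of the flash (hypothetical) */
#define FLASH_SIZE (64UL << 20)	 /* e.g. a 64 MiB UEFI image passed via -bios */

/* Hypothetical flash model: interpret block-erase/program commands here. */
void handle_flash_write(uint64_t offset, const void *data, uint32_t len)
{
	(void)data;
	printf("flash write: offset=0x%llx len=%u\n",
	       (unsigned long long)offset, len);
}

/* Called from the vcpu run loop when run->exit_reason == KVM_EXIT_MMIO;
 * guest stores to the read-only slot show up here like ordinary MMIO. */
void handle_mmio_exit(struct kvm_run *run)
{
	if (run->mmio.is_write &&
	    run->mmio.phys_addr >= FLASH_BASE &&
	    run->mmio.phys_addr <  FLASH_BASE + FLASH_SIZE)
		handle_flash_write(run->mmio.phys_addr - FLASH_BASE,
				   run->mmio.data, run->mmio.len);
}

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);

	if (kvm < 0 || ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_READONLY_MEM) <= 0) {
		fprintf(stderr, "KVM_CAP_READONLY_MEM not supported\n");
		return 1;
	}

	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* Back the flash with anonymous memory; a real VMM would read the
	 * UEFI blob into this buffer before starting the guest. */
	void *flash = mmap(NULL, FLASH_SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Register the slot read-only: guest reads are served like normal
	 * RAM, guest writes exit to userspace as KVM_EXIT_MMIO instead. */
	struct kvm_userspace_memory_region region = {
		.slot            = 0,
		.flags           = KVM_MEM_READONLY,
		.guest_phys_addr = FLASH_BASE,
		.memory_size     = FLASH_SIZE,
		.userspace_addr  = (uintptr_t)flash,
	};
	if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
		perror("KVM_SET_USER_MEMORY_REGION");
		return 1;
	}

	/* ... create vcpus, mmap their kvm_run structures, and dispatch
	 * KVM_EXIT_MMIO to handle_mmio_exit() in the run loop ... */
	return 0;
}

[A guest store to the read-only slot never reaches the host mapping; it is
reported to userspace exactly like an ordinary MMIO write, which is what makes
block-erase/program command emulation for flash devices possible.]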