From patchwork Mon Jun  3 16:52:07 2019
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 165679
From: Dave Martin <Dave.Martin@arm.com>
To: kvmarm@lists.cs.columbia.edu
Cc: Marc Zyngier, Christoffer Dall, Peter Maydell,
    linux-arm-kernel@lists.infradead.org, stable@vger.kernel.org
Subject: [PATCH] KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST
Date: Mon,  3 Jun 2019 17:52:07 +0100
Message-Id: <1559580727-13444-1-git-send-email-Dave.Martin@arm.com>
X-Mailer: git-send-email 2.1.4
X-Mailing-List: stable@vger.kernel.org

Since commit d26c25a9d19b ("arm64: KVM: Tighten guest core register
access from userspace"), KVM_{GET,SET}_ONE_REG rejects register IDs
that do not correspond to a single underlying architectural register.

KVM_GET_REG_LIST was not changed to match, however: instead, it simply
yields a list of 32-bit register IDs that together cover the whole
kvm_regs struct.  This means that if userspace tries to use the
resulting list of IDs directly to drive calls to KVM_*_ONE_REG, some
of those calls will now fail.

This was not the intention.  Instead, iterating KVM_*_ONE_REG over the
list of IDs returned by KVM_GET_REG_LIST should be guaranteed to work.

This patch fixes the problem by splitting validate_core_offset() into
a backend core_reg_size_from_offset() that does all of the work except
for checking that the size field in the register ID matches;
kvm_arm_copy_reg_indices() and num_core_regs() are converted to use
this backend to enumerate the valid offsets.

kvm_arm_copy_reg_indices() now also sets the register ID size field
appropriately based on the size returned, so each register ID supplied
to userspace is fully qualified for use with the register access
ioctls.

Cc: stable@vger.kernel.org
Fixes: d26c25a9d19b ("arm64: KVM: Tighten guest core register access from userspace")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>

---

Changes since v3:

 * Rebased onto v5.2-rc1.

 * Tested with qemu by migrating from one qemu instance to another on
   ThunderX2.
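For reviewers' convenience, the guarantee being restored can be
exercised from userspace roughly as follows.  This is a minimal sketch,
not part of the patch: dump_regs() is a hypothetical helper, and it
assumes vcpu_fd is an already-initialised KVM vCPU file descriptor.
KVM_GET_REG_LIST is first called with n = 0 to learn the list size
(the ioctl fails with E2BIG and reports the count), then every
returned ID is fed straight back into KVM_GET_ONE_REG:

	#include <errno.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int dump_regs(int vcpu_fd)
	{
		struct kvm_reg_list probe = { .n = 0 };
		struct kvm_reg_list *list;
		uint64_t i;

		/* First call is expected to fail with E2BIG, reporting the count: */
		if (ioctl(vcpu_fd, KVM_GET_REG_LIST, &probe) == 0 || errno != E2BIG)
			return -1;

		list = calloc(1, sizeof(*list) + probe.n * sizeof(uint64_t));
		if (!list)
			return -1;
		list->n = probe.n;

		if (ioctl(vcpu_fd, KVM_GET_REG_LIST, list) < 0)
			goto fail;

		for (i = 0; i < list->n; i++) {
			__uint128_t val = 0;	/* large enough for U128 registers */
			struct kvm_one_reg reg = {
				.id   = list->reg[i],
				.addr = (uint64_t)(uintptr_t)&val,
			};

			/*
			 * Before this patch, some IDs yielded by KVM_GET_REG_LIST
			 * were rejected here with EINVAL; after it, every ID must
			 * be accepted.
			 */
			if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
				goto fail;

			printf("reg %#llx (%u bytes) OK\n",
			       (unsigned long long)reg.id,
			       (unsigned)KVM_REG_SIZE(reg.id));
		}

		free(list);
		return 0;

	fail:
		free(list);
		return -1;
	}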
---
 arch/arm64/kvm/guest.c | 53 +++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 13 deletions(-)

-- 
2.1.4

Reviewed-by: Andrew Jones
Tested-by: Andrew Jones

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 3ae2f82..6527c76 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -70,10 +70,8 @@ static u64 core_reg_offset_from_id(u64 id)
 	return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
 }
 
-static int validate_core_offset(const struct kvm_vcpu *vcpu,
-				const struct kvm_one_reg *reg)
+static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off)
 {
-	u64 off = core_reg_offset_from_id(reg->id);
 	int size;
 
 	switch (off) {
@@ -103,8 +101,7 @@ static int validate_core_offset(const struct kvm_vcpu *vcpu,
 		return -EINVAL;
 	}
 
-	if (KVM_REG_SIZE(reg->id) != size ||
-	    !IS_ALIGNED(off, size / sizeof(__u32)))
+	if (!IS_ALIGNED(off, size / sizeof(__u32)))
 		return -EINVAL;
 
 	/*
@@ -115,6 +112,21 @@ static int validate_core_offset(const struct kvm_vcpu *vcpu,
 	if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off))
 		return -EINVAL;
 
+	return size;
+}
+
+static int validate_core_offset(const struct kvm_vcpu *vcpu,
+				const struct kvm_one_reg *reg)
+{
+	u64 off = core_reg_offset_from_id(reg->id);
+	int size = core_reg_size_from_offset(vcpu, off);
+
+	if (size < 0)
+		return -EINVAL;
+
+	if (KVM_REG_SIZE(reg->id) != size)
+		return -EINVAL;
+
 	return 0;
 }
 
@@ -453,19 +465,34 @@ static int copy_core_reg_indices(const struct kvm_vcpu *vcpu,
 {
 	unsigned int i;
 	int n = 0;
-	const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE;
 
 	for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) {
-		/*
-		 * The KVM_REG_ARM64_SVE regs must be used instead of
-		 * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on
-		 * SVE-enabled vcpus:
-		 */
-		if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(i))
+		u64 reg = KVM_REG_ARM64 | KVM_REG_ARM_CORE | i;
+		int size = core_reg_size_from_offset(vcpu, i);
+
+		if (size < 0)
+			continue;
+
+		switch (size) {
+		case sizeof(__u32):
+			reg |= KVM_REG_SIZE_U32;
+			break;
+
+		case sizeof(__u64):
+			reg |= KVM_REG_SIZE_U64;
+			break;
+
+		case sizeof(__uint128_t):
+			reg |= KVM_REG_SIZE_U128;
+			break;
+
+		default:
+			WARN_ON(1);
 			continue;
+		}
 
 		if (uindices) {
-			if (put_user(core_reg | i, uindices))
+			if (put_user(reg, uindices))
 				return -EFAULT;
 			uindices++;
 		}
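
For illustration only (not part of the patch): the size field chosen by
the new switch is the same one userspace already encodes when it names
core registers with the uapi macros from <linux/kvm.h> and the arm64
<asm/kvm.h>, e.g.:

	#include <linux/kvm.h>

	/* PC is 64 bits wide, so its ID carries KVM_REG_SIZE_U64 ... */
	#define CORE_REG_PC	(KVM_REG_ARM64 | KVM_REG_ARM_CORE | \
				 KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE_REG(regs.pc))

	/*
	 * ... while the FPSIMD V-registers are 128 bits wide and get
	 * KVM_REG_SIZE_U128 (and are filtered out of the list entirely
	 * on SVE-enabled vcpus).
	 */
	#define CORE_REG_V0	(KVM_REG_ARM64 | KVM_REG_ARM_CORE | \
				 KVM_REG_SIZE_U128 | \
				 KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]))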