From patchwork Wed Jun 12 12:44:49 2019
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 166546
From: Dave Martin
To: kvmarm@lists.cs.columbia.edu
Cc: Marc Zyngier, Christoffer Dall, Peter Maydell, linux-arm-kernel@lists.infradead.org, Andrew Jones, stable@vger.kernel.org
Subject: [PATCH v4 REPOST] KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST
Date: Wed, 12 Jun 2019 13:44:49 +0100
Message-Id: <1560343489-22906-1-git-send-email-Dave.Martin@arm.com>
X-Mailer: git-send-email 2.1.4
X-Mailing-List: stable@vger.kernel.org

Since commit d26c25a9d19b ("arm64: KVM: Tighten guest core register
access from userspace"), KVM_{GET,SET}_ONE_REG rejects register IDs
that do not correspond to a single underlying architectural register.
KVM_GET_REG_LIST was not changed to match, however: it simply yields a
list of 32-bit register IDs that together cover the whole kvm_regs
struct.  This means that if userspace tries to use the resulting list
of IDs directly to drive calls to KVM_*_ONE_REG, some of those calls
will now fail.  This was not the intention: iterating KVM_*_ONE_REG
over the list of IDs returned by KVM_GET_REG_LIST should be guaranteed
to work.

This patch fixes the problem by splitting validate_core_offset() into
a backend core_reg_size_from_offset(), which does all of the work
except for checking that the size field in the register ID matches.
kvm_arm_copy_reg_indices() and num_core_regs() are converted to use
this helper to enumerate the valid offsets.

kvm_arm_copy_reg_indices() now also sets the register ID size field
appropriately based on the value returned, so each register ID
supplied to userspace is fully qualified for use with the register
access ioctls.

Cc: stable@vger.kernel.org
Fixes: d26c25a9d19b ("arm64: KVM: Tighten guest core register access from userspace")
Signed-off-by: Dave Martin
Reviewed-by: Andrew Jones
Tested-by: Andrew Jones
---

This is just a repost of [1], with Andrew Jones' reviewer tags added.
[1] [PATCH] KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST
    https://lists.cs.columbia.edu/pipermail/kvmarm/2019-June/036093.html

 arch/arm64/kvm/guest.c | 53 +++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 13 deletions(-)

--
2.1.4

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 3ae2f82..6527c76 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -70,10 +70,8 @@ static u64 core_reg_offset_from_id(u64 id)
 	return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
 }
 
-static int validate_core_offset(const struct kvm_vcpu *vcpu,
-				const struct kvm_one_reg *reg)
+static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off)
 {
-	u64 off = core_reg_offset_from_id(reg->id);
 	int size;
 
 	switch (off) {
@@ -103,8 +101,7 @@ static int validate_core_offset(const struct kvm_vcpu *vcpu,
 		return -EINVAL;
 	}
 
-	if (KVM_REG_SIZE(reg->id) != size ||
-	    !IS_ALIGNED(off, size / sizeof(__u32)))
+	if (!IS_ALIGNED(off, size / sizeof(__u32)))
 		return -EINVAL;
 
 	/*
@@ -115,6 +112,21 @@ static int validate_core_offset(const struct kvm_vcpu *vcpu,
 	if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off))
 		return -EINVAL;
 
+	return size;
+}
+
+static int validate_core_offset(const struct kvm_vcpu *vcpu,
+				const struct kvm_one_reg *reg)
+{
+	u64 off = core_reg_offset_from_id(reg->id);
+	int size = core_reg_size_from_offset(vcpu, off);
+
+	if (size < 0)
+		return -EINVAL;
+
+	if (KVM_REG_SIZE(reg->id) != size)
+		return -EINVAL;
+
 	return 0;
 }
 
@@ -453,19 +465,34 @@ static int copy_core_reg_indices(const struct kvm_vcpu *vcpu,
 {
 	unsigned int i;
 	int n = 0;
-	const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE;
 
 	for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) {
-		/*
-		 * The KVM_REG_ARM64_SVE regs must be used instead of
-		 * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on
-		 * SVE-enabled vcpus:
-		 */
-		if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(i))
+		u64 reg = KVM_REG_ARM64 | KVM_REG_ARM_CORE | i;
+		int size = core_reg_size_from_offset(vcpu, i);
+
+		if (size < 0)
+			continue;
+
+		switch (size) {
+		case sizeof(__u32):
+			reg |= KVM_REG_SIZE_U32;
+			break;
+
+		case sizeof(__u64):
+			reg |= KVM_REG_SIZE_U64;
+			break;
+
+		case sizeof(__uint128_t):
+			reg |= KVM_REG_SIZE_U128;
+			break;
+
+		default:
+			WARN_ON(1);
 			continue;
+		}
 
 		if (uindices) {
-			if (put_user(core_reg | i, uindices))
+			if (put_user(reg, uindices))
 				return -EFAULT;
 			uindices++;
 		}