From patchwork Fri Oct 23 21:43:55 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Laszlo Ersek
X-Patchwork-Id: 55508
From: Laszlo Ersek <lersek@redhat.com>
To: kvm@vger.kernel.org, lersek@redhat.com
Cc: Paolo Bonzini, Radim Krčmář, Jordan Justen, Michael Kinney,
    stable@vger.kernel.org
Subject: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2
Date: Fri, 23 Oct 2015 23:43:55 +0200
Message-Id: <1445636635-32260-1-git-send-email-lersek@redhat.com>

Commit b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
reordered the rsm_load_seg_64() and rsm_enter_protected_mode() calls,
relative to each other. The argument that said commit made was correct;
however, putting rsm_enter_protected_mode() first wholesale violated the
following (correct) invariant from em_rsm():

 * Get back to real mode, to prepare a safe state in which to load
 * CR0/CR3/CR4/EFER. Also this will ensure that addresses passed
 * to read_std/write_std are not virtual.

Namely, rsm_enter_protected_mode() may re-enable paging, *after* which

  rsm_load_seg_64()
    GET_SMSTATE()
      read_std()

will try to interpret the (smbase + offset) address as a virtual one. This
will result in unexpected page faults being injected into the guest in
response to the RSM instruction.

Split rsm_load_seg_64() in two parts:

- The first part, rsm_stash_seg_64(), shall call GET_SMSTATE() while in
  real mode, and save the relevant state off SMRAM into an array local to
  rsm_load_state_64().

- The second part, rsm_load_seg_64(), shall occur after entering protected
  mode, but the segment details shall come from the local array, not the
  guest's SMRAM.
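
To make the ordering invariant concrete, here is a minimal, self-contained
sketch. It is illustrative only: paging_enabled, read_smram() and the stash
array are made-up stand-ins for CR0.PG, the GET_SMSTATE()/read_std() path
and the rsm_stashed_seg_64 array; it is not emulator code.

  /* Toy model: SMRAM reads must complete before paging is re-enabled. */
  #include <assert.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  static bool paging_enabled;   /* stands in for CR0.PG */
  static uint32_t smram[6];     /* stands in for the saved segment state */

  static uint32_t read_smram(int n)
  {
          /* GET_SMSTATE()/read_std() expect (smbase + offset) to be
           * interpreted physically; with paging on, the same address
           * would be treated as virtual and could fault. */
          assert(!paging_enabled);
          return smram[n];
  }

  int main(void)
  {
          uint32_t stash[6];
          int i;

          for (i = 0; i < 6; i++)          /* "rsm_stash_seg_64()": real mode */
                  stash[i] = read_smram(i);

          paging_enabled = true;           /* "rsm_enter_protected_mode()" */

          for (i = 0; i < 6; i++)          /* "rsm_load_seg_64()": no SMRAM reads */
                  printf("segment %d loaded from stash: %u\n", i, stash[i]);
          return 0;
  }

Moving the read_smram() loop below the paging_enabled assignment reproduces
the pre-patch ordering: the assert (standing in for the unexpected #PF)
fires.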
Fixes: b10d92a54dac25a6152f1aa1ffc95c12908035ce ("KVM: x86: fix RSM into 64-bit protected mode")
Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Jordan Justen
Cc: Michael Kinney
Cc: stable@vger.kernel.org
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
---
 arch/x86/kvm/emulate.c | 37 ++++++++++++++++++++++++++++++-------
 1 file changed, 30 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 9da95b9..25e16b6 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2311,7 +2311,16 @@ static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 	return X86EMUL_CONTINUE;
 }
 
-static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
+struct rsm_stashed_seg_64 {
+	u16 selector;
+	struct desc_struct desc;
+	u32 base3;
+};
+
+static int rsm_stash_seg_64(struct x86_emulate_ctxt *ctxt,
+			    struct rsm_stashed_seg_64 *stash,
+			    u64 smbase,
+			    int n)
 {
 	struct desc_struct desc;
 	int offset;
@@ -2326,10 +2335,20 @@ static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 	set_desc_base(&desc, GET_SMSTATE(u32, smbase, offset + 8));
 	base3 = GET_SMSTATE(u32, smbase, offset + 12);
 
-	ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
+	stash[n].selector = selector;
+	stash[n].desc = desc;
+	stash[n].base3 = base3;
 	return X86EMUL_CONTINUE;
 }
 
+static inline void rsm_load_seg_64(struct x86_emulate_ctxt *ctxt,
+				   struct rsm_stashed_seg_64 *stash,
+				   int n)
+{
+	ctxt->ops->set_segment(ctxt, stash[n].selector, &stash[n].desc,
+			       stash[n].base3, n);
+}
+
 static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
 				    u64 cr0, u64 cr4)
 {
@@ -2419,6 +2438,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	u32 base3;
 	u16 selector;
 	int i, r;
+	struct rsm_stashed_seg_64 stash[6];
 
 	for (i = 0; i < 16; i++)
 		*reg_write(ctxt, i) = GET_SMSTATE(u64, smbase, 0x7ff8 - i * 8);
@@ -2460,15 +2480,18 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	dt.address = GET_SMSTATE(u64, smbase, 0x7e68);
 	ctxt->ops->set_gdt(ctxt, &dt);
 
+	for (i = 0; i < ARRAY_SIZE(stash); i++) {
+		r = rsm_stash_seg_64(ctxt, stash, smbase, i);
+		if (r != X86EMUL_CONTINUE)
+			return r;
+	}
+
 	r = rsm_enter_protected_mode(ctxt, cr0, cr4);
 	if (r != X86EMUL_CONTINUE)
 		return r;
 
-	for (i = 0; i < 6; i++) {
-		r = rsm_load_seg_64(ctxt, smbase, i);
-		if (r != X86EMUL_CONTINUE)
-			return r;
-	}
+	for (i = 0; i < ARRAY_SIZE(stash); i++)
+		rsm_load_seg_64(ctxt, stash, i);
 
 	return X86EMUL_CONTINUE;
 }
-- 
1.8.3.1