From patchwork Tue Oct 27 13:45:26 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 312651
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Junaid Shahid,
    Sean Christopherson, Paolo Bonzini
Subject: [PATCH 5.9 076/757] KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
Date: Tue, 27 Oct 2020 14:45:26 +0100
Message-Id: <20201027135454.103297549@linuxfoundation.org>
X-Mailer: git-send-email 2.29.1
In-Reply-To: <20201027135450.497324313@linuxfoundation.org>
References: <20201027135450.497324313@linuxfoundation.org>
User-Agent: quilt/0.66
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Sean Christopherson

commit e89505698c9f70125651060547da4ff5046124fc upstream.

Call kvm_mmu_commit_zap_page() after exiting the "prepare zap" loop in
kvm_recover_nx_lpages() to finish zapping pages in the unlikely event
that the loop exited due to lpage_disallowed_mmu_pages being empty.
Because the recovery thread drops mmu_lock() when rescheduling, it's
possible that lpage_disallowed_mmu_pages could be emptied by a different
thread without to_zap reaching zero despite to_zap being derived from
the number of disallowed lpages.
Fixes: 1aa9b9572b105 ("kvm: x86: mmu: Recovery of shattered NX large pages")
Cc: Junaid Shahid
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Message-Id: <20200923183735.584-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kvm/mmu/mmu.c | 1 +
 1 file changed, 1 insertion(+)

--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6376,6 +6376,7 @@ static void kvm_recover_nx_lpages(struct
 				cond_resched_lock(&kvm->mmu_lock);
 		}
 	}
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, rcu_idx);
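
For readers unfamiliar with the MMU code, the sketch below is a
self-contained user-space model of the prepare/commit zap pattern the
fix depends on. The names only loosely mirror kvm_mmu_prepare_zap_page(),
kvm_mmu_commit_zap_page() and lpage_disallowed_mmu_pages; nothing in it
is actual kernel code. It shows why a final commit after the loop is
needed when the loop can exit with work still queued on the local list.

/*
 * User-space model of the prepare/commit zap pattern (illustrative only).
 * Pages are "prepared" onto a local invalid list and only freed when that
 * list is committed.
 */
#include <stdio.h>
#include <stdlib.h>

struct page {
	struct page *next;
	int id;
};

static struct page *disallowed;    /* models lpage_disallowed_mmu_pages */
static struct page *invalid_list;  /* pages zapped but not yet freed */

/* "Prepare" a zap: move the page onto the local invalid list. */
static void prepare_zap(struct page *p)
{
	p->next = invalid_list;
	invalid_list = p;
}

/* "Commit" the zap: actually free everything on the invalid list. */
static void commit_zap(void)
{
	while (invalid_list) {
		struct page *p = invalid_list;

		invalid_list = p->next;
		printf("freed page %d\n", p->id);
		free(p);
	}
}

static void recover(unsigned long to_zap)
{
	/*
	 * Like kvm_recover_nx_lpages(), stop either when the quota is met
	 * or when the source list runs dry.  In the latter case pages can
	 * still be sitting on invalid_list, so the final commit after the
	 * loop (the one-line fix) is needed to avoid leaking them.
	 */
	while (to_zap && disallowed) {
		struct page *p = disallowed;

		disallowed = p->next;
		prepare_zap(p);
		to_zap--;
		/* the real code commits and reschedules here when needed */
	}
	commit_zap();  /* the added line: flush whatever is still pending */
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct page *p = malloc(sizeof(*p));

		p->id = i;
		p->next = disallowed;
		disallowed = p;
	}
	recover(10);  /* quota larger than the list: the loop exits early */
	return 0;
}

Running the model frees all three pages; removing the trailing
commit_zap() call leaks whatever was prepared after the last in-loop
commit, which is the failure mode the commit message describes.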