From patchwork Mon Mar 15 13:56:04 2021
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 401150
From: gregkh@linuxfoundation.org
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andy Lutomirski,
 Joerg Roedel, Borislav Petkov
Subject: [PATCH 5.10 271/290] x86/sev-es: Check regs->sp is trusted before adjusting #VC IST stack
Date: Mon, 15 Mar 2021 14:56:04 +0100
Message-Id: <20210315135551.180280688@linuxfoundation.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210315135541.921894249@linuxfoundation.org>
References: <20210315135541.921894249@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Greg Kroah-Hartman

From: Joerg Roedel

commit 545ac14c16b5dbd909d5a90ddf5b5a629a40fa94 upstream.

The code in the NMI handler to adjust the #VC handler IST stack is
needed in case an NMI hits when the #VC handler is still using its IST
stack.

But the check for this condition also needs to look if the regs->sp
value is trusted, meaning it was not set by user-space. Extend the check
to not use regs->sp when the NMI interrupted user-space code or the
SYSCALL gap.
Fixes: 315562c9af3d5 ("x86/sev-es: Adjust #VC IST Stack on entering NMI handler")
Reported-by: Andy Lutomirski
Signed-off-by: Joerg Roedel
Signed-off-by: Borislav Petkov
Cc: stable@vger.kernel.org # 5.10+
Link: https://lkml.kernel.org/r/20210303141716.29223-3-joro@8bytes.org
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kernel/sev-es.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 84c1821819af..301f20f6d4dd 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -121,8 +121,18 @@ static void __init setup_vc_stacks(int cpu)
 	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
 }
 
-static __always_inline bool on_vc_stack(unsigned long sp)
+static __always_inline bool on_vc_stack(struct pt_regs *regs)
 {
+	unsigned long sp = regs->sp;
+
+	/* User-mode RSP is not trusted */
+	if (user_mode(regs))
+		return false;
+
+	/* SYSCALL gap still has user-mode RSP */
+	if (ip_within_syscall_gap(regs))
+		return false;
+
 	return ((sp >= __this_cpu_ist_bottom_va(VC)) && (sp < __this_cpu_ist_top_va(VC)));
 }
 
@@ -144,7 +154,7 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs)
 	old_ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);
 
 	/* Make room on the IST stack */
-	if (on_vc_stack(regs->sp))
+	if (on_vc_stack(regs))
 		new_ist = ALIGN_DOWN(regs->sp, 8) - sizeof(old_ist);
 	else
 		new_ist = old_ist - sizeof(old_ist);
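
For readers of this backport: the new check relies on the ip_within_syscall_gap()
helper introduced earlier in the same series. A minimal sketch of such a helper is
shown below; it is illustrative only, not necessarily the exact code in the tree, and
it assumes the entry code exposes entry_SYSCALL_64/entry_SYSCALL_64_safe_stack (plus
the compat variants) as labels bracketing the instructions where RSP still holds the
user-space value after SYSCALL.

static __always_inline bool ip_within_syscall_gap(struct pt_regs *regs)
{
	/*
	 * Between the SYSCALL entry point and the point where the kernel
	 * stack is set up, RSP still contains the user-space value and
	 * must not be trusted by exception handlers such as NMI or #VC.
	 */
	bool ret = (regs->ip >= (unsigned long)entry_SYSCALL_64 &&
		    regs->ip <  (unsigned long)entry_SYSCALL_64_safe_stack);

#ifdef CONFIG_IA32_EMULATION
	/* 32-bit compat SYSCALLs have their own entry gap */
	ret = ret || (regs->ip >= (unsigned long)entry_SYSCALL_compat &&
		      regs->ip <  (unsigned long)entry_SYSCALL_compat_safe_stack);
#endif

	return ret;
}

With a check along these lines in place, on_vc_stack() rejects any regs->sp that may
still be under user control before comparing it against the #VC IST stack bounds.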