From patchwork Thu Mar 5 21:15:36 2020
X-Patchwork-Submitter: "Eric W. Biederman"
X-Patchwork-Id: 229838
From: ebiederm@xmission.com (Eric W. Biederman)
To: Bernd Edlinger
Cc: Christian Brauner, Kees Cook, Jann Horn, Jonathan Corbet,
    Alexander Viro, Andrew Morton, Alexey Dobriyan, Thomas Gleixner,
    Oleg Nesterov, Frederic Weisbecker, Andrei Vagin, Ingo Molnar,
    "Peter Zijlstra (Intel)", Yuyang Du, David Hildenbrand,
    Sebastian Andrzej Siewior, Anshuman Khandual, David Howells,
    James Morris, Greg Kroah-Hartman, Shakeel Butt, Jason Gunthorpe,
    Christian Kellner, Andrea Arcangeli, Aleksa Sarai, "Dmitry V. Levin",
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    stable@vger.kernel.org, linux-api@vger.kernel.org
References: <87k142lpfz.fsf@x220.int.ebiederm.org>
    <875zfmloir.fsf@x220.int.ebiederm.org>
    <87v9nmjulm.fsf@x220.int.ebiederm.org>
    <202003021531.C77EF10@keescook>
    <20200303085802.eqn6jbhwxtmz4j2x@wittgenstein>
    <87v9nlii0b.fsf@x220.int.ebiederm.org>
    <87a74xi4kz.fsf@x220.int.ebiederm.org>
    <87r1y8dqqz.fsf@x220.int.ebiederm.org>
    <87tv32cxmf.fsf_-_@x220.int.ebiederm.org>
Date: Thu, 05 Mar 2020 15:15:36 -0600
In-Reply-To: <87tv32cxmf.fsf_-_@x220.int.ebiederm.org> (Eric W. Biederman's
    message of "Thu, 05 Mar 2020 15:14:48 -0600")
Message-ID: <87o8tacxl3.fsf_-_@x220.int.ebiederm.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.1 (gnu/linux)
Subject: [PATCH 1/2] exec: Properly mark the point of no return

Add a flag, bprm->unrecoverable, to mark when execution has reached the
point where it is impossible to return to userspace with the calling
process unchanged.

While technically this state starts as soon as de_thread() begins killing
the other threads, the only return path at that point is a pending fatal
signal.  I have chosen instead to set unrecoverable once the killing is
finished, at which point failures other than fatal signals are still
possible.  In particular, the allocation of a new sighand structure can
fail.

Setting unrecoverable at this point has the benefit that other actions can
be taken once the other threads are all dead, and the unrecoverable flag
can double as a marker that those actions have been taken.

Signed-off-by: "Eric W. Biederman"
---
 fs/exec.c               | 7 ++++---
 include/linux/binfmts.h | 7 ++++++-
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index db17be51b112..c243f9660d46 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1061,7 +1061,7 @@ static int exec_mmap(struct mm_struct *mm)
  * disturbing other processes.  (Other processes might share the signal
  * table via the CLONE_SIGHAND option to clone().)
  */
-static int de_thread(struct task_struct *tsk)
+static int de_thread(struct linux_binprm *bprm, struct task_struct *tsk)
 {
 	struct signal_struct *sig = tsk->signal;
 	struct sighand_struct *oldsighand = tsk->sighand;
@@ -1182,6 +1182,7 @@ static int de_thread(struct task_struct *tsk)
 		release_task(leader);
 	}
 
+	bprm->unrecoverable = true;
 	sig->group_exit_task = NULL;
 	sig->notify_count = 0;
 
@@ -1266,7 +1267,7 @@ int flush_old_exec(struct linux_binprm * bprm)
 	 * Make sure we have a private signal table and that
 	 * we are unassociated from the previous thread group.
 	 */
-	retval = de_thread(current);
+	retval = de_thread(bprm, current);
 	if (retval)
 		goto out;
 
@@ -1664,7 +1665,7 @@ int search_binary_handler(struct linux_binprm *bprm)
 
 		read_lock(&binfmt_lock);
 		put_binfmt(fmt);
-		if (retval < 0 && !bprm->mm) {
+		if (retval < 0 && bprm->unrecoverable) {
 			/* we got to flush_old_exec() and failed after it */
 			read_unlock(&binfmt_lock);
 			force_sigsegv(SIGSEGV);
diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
index b40fc633f3be..12263115ce7a 100644
--- a/include/linux/binfmts.h
+++ b/include/linux/binfmts.h
@@ -44,7 +44,12 @@ struct linux_binprm {
 		 * exec has happened. Used to sanitize execution environment
 		 * and to set AT_SECURE auxv for glibc.
 		 */
-		secureexec:1;
+		secureexec:1,
+		/*
+		 * Set when changes have been made that prevent returning
+		 * to userspace.
+		 */
+		unrecoverable:1;
 #ifdef __alpha__
 	unsigned int taso:1;
 #endif
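
(Not part of the patch, just an illustration of the distinction described
above: before the point of no return an exec failure is recoverable and
execve() simply returns an error to the caller, while after
bprm->unrecoverable is set the only option is to kill the process, which
is what the force_sigsegv() path in search_binary_handler() does.  Below
is a minimal user-space sketch of the recoverable case, using a made-up
path that does not exist.)

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char *argv[] = { "no-such-binary", NULL };
	char *envp[] = { NULL };

	/*
	 * Fails early with ENOENT, long before de_thread(), so the
	 * failure is recoverable and execve() returns to the caller.
	 * A failure after the point of no return cannot return like
	 * this; the kernel delivers SIGSEGV to the process instead.
	 */
	if (execve("/path/that/does/not/exist", argv, envp) == -1)
		printf("execve failed early: %s\n", strerror(errno));

	return 0;
}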