From patchwork Tue Nov 7 09:54:53 2017
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 118136
From: Wang Nan
To: , , ,
CC: Wang Nan, Bob Liu, Andrew Morton, Michal Hocko, David Rientjes,
    Ingo Molnar, Roman Gushchin, Konstantin Khlebnikov, Andrea Arcangeli
Subject: [RESEND PATCH] mm, oom_reaper: gather each vma to prevent leaking TLB entry
Date: Tue, 7 Nov 2017 09:54:53 +0000
Message-ID: <20171107095453.179940-1-wangnan0@huawei.com>
X-Mailer: git-send-email 2.10.1
X-Mailing-List: linux-kernel@vger.kernel.org

tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
space. In this case, tlb->fullmm is true. Some archs, such as arm64,
don't flush the TLB when tlb->fullmm is true (commit 5a7862e83000
("arm64: tlbflush: avoid flushing when fullmm == 1")), which leaks TLB
entries. Will clarified the rationale of his patch:

> Basically, we tag each address space with an ASID (PCID on x86) which
> is resident in the TLB. This means we can elide TLB invalidation when
> pulling down a full mm because we won't ever assign that ASID to another mm
> without doing TLB invalidation elsewhere (which actually just nukes the
> whole TLB).
>
> I think that means that we could potentially not fault on a kernel uaccess,
> because we could hit in the TLB.

There can be a window between complete_signal() sending IPIs to other
cores and all threads sharing this mm actually being kicked off those
cores. In this window, the oom reaper may call tlb_flush_mmu_tlbonly()
to flush the TLB and then free the pages. However, because of the
problem above, the TLB entries are not really flushed on arm64, so other
threads can still access these pages through stale TLB entries.
Moreover, a copy_to_user() can also write to these pages without
generating a page fault, causing use-after-free bugs.

This patch gathers each vma instead of the whole vm space, so
tlb->fullmm is not true. The behaviour of the oom reaper becomes similar
to munmapping before do_exit, which should be safe for all archs.
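For reference, below is a simplified sketch of the arm64 tlb_flush()
behaviour that commit 5a7862e83000 introduced. It is an illustration
based on the description above, not the verbatim arm64 source: when the
gather covers the whole mm, the invalidation is skipped and the ASID
allocator is relied on instead.

static inline void tlb_flush(struct mmu_gather *tlb)
{
	struct vm_area_struct vma = { .vm_mm = tlb->mm, };

	/*
	 * fullmm teardown: the ASID will not be reassigned to another mm
	 * without a TLB invalidation elsewhere, so the flush is elided.
	 * This is the elision that leaves stale entries visible in the
	 * oom-reaper window described above.
	 */
	if (tlb->fullmm)
		return;

	flush_tlb_range(&vma, tlb->start, tlb->end);
}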
Signed-off-by: Wang Nan
Cc: Bob Liu
Cc: Andrew Morton
Cc: Michal Hocko
Cc: David Rientjes
Cc: Ingo Molnar
Cc: Roman Gushchin
Cc: Konstantin Khlebnikov
Cc: Andrea Arcangeli
---
 mm/oom_kill.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

--
2.10.1

Acked-by: Michal Hocko

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index dee0f75..18c5b35 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -532,7 +532,6 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 	 */
 	set_bit(MMF_UNSTABLE, &mm->flags);
 
-	tlb_gather_mmu(&tlb, mm, 0, -1);
 	for (vma = mm->mmap ; vma; vma = vma->vm_next) {
 		if (!can_madv_dontneed_vma(vma))
 			continue;
@@ -547,11 +546,13 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 		 * we do not want to block exit_mmap by keeping mm ref
 		 * count elevated without a good reason.
 		 */
-		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
+		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
+			tlb_gather_mmu(&tlb, mm, vma->vm_start, vma->vm_end);
 			unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
 					 NULL);
+			tlb_finish_mmu(&tlb, vma->vm_start, vma->vm_end);
+		}
 	}
-	tlb_finish_mmu(&tlb, 0, -1);
 	pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
 			task_pid_nr(tsk), tsk->comm,
 			K(get_mm_counter(mm, MM_ANONPAGES)),
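For clarity, here is a condensed view of the reap loop after the hunks
above are applied (surrounding code and the pr_info() reporting
omitted): each eligible vma gets its own bounded mmu_gather, so
tlb->fullmm can never be set and arm64-style flush elision no longer
applies.

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (!can_madv_dontneed_vma(vma))
			continue;

		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
			/* Bounded gather: tlb->fullmm stays false. */
			tlb_gather_mmu(&tlb, mm, vma->vm_start, vma->vm_end);
			unmap_page_range(&tlb, vma, vma->vm_start,
					 vma->vm_end, NULL);
			/* Flush and release just this vma's range. */
			tlb_finish_mmu(&tlb, vma->vm_start, vma->vm_end);
		}
	}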