From patchwork Sat Mar 7 04:01:20 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 229821
Date: Fri, 06 Mar 2020 20:01:20 -0800
From: akpm@linux-foundation.org
To: dan.j.williams@intel.com, jmoyer@redhat.com, Justin.He@arm.com, kirill.shutemov@linux.intel.com, kirill@shutemov.name, mm-commits@vger.kernel.org, stable@vger.kernel.org
Subject: [merged] mm-avoid-data-corruption-on-cow-fault-into-pfn-mapped-vma.patch removed from -mm tree
Message-ID: <20200307040120.ZTrx62j2k%akpm@linux-foundation.org>
X-Mailing-List: stable@vger.kernel.org

The patch titled
     Subject: mm: avoid data corruption on CoW fault into PFN-mapped VMA
has been removed from the -mm tree.  Its filename was
     mm-avoid-data-corruption-on-cow-fault-into-pfn-mapped-vma.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: "Kirill A. Shutemov"
Subject: mm: avoid data corruption on CoW fault into PFN-mapped VMA

Jeff Moyer has reported that one of xfstests triggers a warning when run
on a DAX-enabled filesystem:

	WARNING: CPU: 76 PID: 51024 at mm/memory.c:2317 wp_page_copy+0xc40/0xd50
	...
	wp_page_copy+0x98c/0xd50 (unreliable)
	do_wp_page+0xd8/0xad0
	__handle_mm_fault+0x748/0x1b90
	handle_mm_fault+0x120/0x1f0
	__do_page_fault+0x240/0xd70
	do_page_fault+0x38/0xd0
	handle_page_fault+0x10/0x30

The warning happens on a failed __copy_from_user_inatomic() which tries
to copy data into a CoW page.
This happens because of a race between MADV_DONTNEED and the CoW page
fault:

	CPU0					CPU1
 handle_mm_fault()
   do_wp_page()
     wp_page_copy()
       do_wp_page()
					madvise(MADV_DONTNEED)
					  zap_page_range()
					    zap_pte_range()
					      ptep_get_and_clear_full()
       __copy_from_user_inatomic()
       sees empty PTE and fails
       WARN_ON_ONCE(1)
       clear_page()

The solution is to re-try __copy_from_user_inatomic() under PTL after
checking that the PTE matches orig_pte.  The second copy attempt can
still fail, for example due to a non-readable PTE, but there's nothing
reasonable we can do about it, except clearing the CoW page.

Link: http://lkml.kernel.org/r/20200218154151.13349-1-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov
Reported-by: Jeff Moyer
Tested-by: Jeff Moyer
Cc: Dan Williams
Cc: Justin He
Cc:
Signed-off-by: Andrew Morton
---

 mm/memory.c | 35 +++++++++++++++++++++++++++--------
 1 file changed, 27 insertions(+), 8 deletions(-)

--- a/mm/memory.c~mm-avoid-data-corruption-on-cow-fault-into-pfn-mapped-vma
+++ a/mm/memory.c
@@ -2257,7 +2257,7 @@ static inline bool cow_user_page(struct
 	bool ret;
 	void *kaddr;
 	void __user *uaddr;
-	bool force_mkyoung;
+	bool locked = false;
 	struct vm_area_struct *vma = vmf->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long addr = vmf->address;
@@ -2282,11 +2282,11 @@ static inline bool cow_user_page(struct
 	 * On architectures with software "accessed" bits, we would
 	 * take a double page fault, so mark it accessed here.
 	 */
-	force_mkyoung = arch_faults_on_old_pte() && !pte_young(vmf->orig_pte);
-	if (force_mkyoung) {
+	if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
 		pte_t entry;

 		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
+		locked = true;
 		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
 			/*
 			 * Other thread has already handled the fault
@@ -2310,18 +2310,37 @@ static inline bool cow_user_page(struct
 	 * zeroes.
 	 */
 	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
+		if (locked)
+			goto warn;
+
+		/* Re-validate under PTL if the page is still mapped */
+		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
+		locked = true;
+		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+			/* The PTE changed under us. Retry page fault. */
+			ret = false;
+			goto pte_unlock;
+		}
+
 		/*
-		 * Give a warn in case there can be some obscure
-		 * use-case
+		 * The same page can be mapped back since last copy attempt.
+		 * Try to copy again under PTL.
 		 */
-		WARN_ON_ONCE(1);
-		clear_page(kaddr);
+		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
+			/*
+			 * Give a warn in case there can be some obscure
+			 * use-case
+			 */
+warn:
+			WARN_ON_ONCE(1);
+			clear_page(kaddr);
+		}
 	}

 	ret = true;

 pte_unlock:
-	if (force_mkyoung)
+	if (locked)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 	kunmap_atomic(kaddr);
 	flush_dcache_page(dst);
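[Editor's note: the lock/re-validate/retry pattern the patch introduces can be
sketched in userspace C for illustration. This is a hypothetical analogue, not
kernel code: a pthread mutex stands in for the page-table lock (PTL), a
`pte_valid` flag stands in for pte_same() against orig_pte, and memset()
stands in for clear_page(). All names (fake_mm, copy_inatomic, cow_copy) are
invented for this sketch.]

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the mm: the "PTL" plus a flag that plays the role of
 * pte_same(*vmf->pte, vmf->orig_pte). */
struct fake_mm {
	pthread_mutex_t ptl;
	bool pte_valid;
};

/* Stand-in for __copy_from_user_inatomic(): fails when the "PTE" has
 * been zapped, succeeds otherwise. */
static bool copy_inatomic(struct fake_mm *mm, char *dst, const char *src,
			  size_t n)
{
	if (!mm->pte_valid)
		return false;
	memcpy(dst, src, n);
	return true;
}

/* Analogue of the patched cow_user_page() copy path.  Returns true when
 * the destination holds valid data (copied or cleared), false when the
 * "PTE" changed under us and the caller should retry the whole fault. */
static bool cow_copy(struct fake_mm *mm, char *dst, const char *src, size_t n)
{
	bool locked = false;
	bool ret;

	/* First attempt runs without the lock, like the fast path. */
	if (!copy_inatomic(mm, dst, src, n)) {
		/* In the kernel the PTL may already be held here (the
		 * accessed-bit path); in this sketch locked is still false. */
		if (locked)
			goto warn;

		/* Re-validate under the "PTL". */
		pthread_mutex_lock(&mm->ptl);
		locked = true;
		if (!mm->pte_valid) {
			/* "PTE" changed under us: retry the fault. */
			ret = false;
			goto unlock;
		}

		/* The page may have been mapped back: copy again, locked. */
		if (!copy_inatomic(mm, dst, src, n)) {
warn:
			memset(dst, 0, n);	/* clear_page() analogue */
		}
	}
	ret = true;

unlock:
	if (locked)
		pthread_mutex_unlock(&mm->ptl);
	return ret;
}
```

The key property mirrored here is that the second attempt happens only with
the lock held and only after re-validating the mapping, so a concurrent
zap forces a clean retry instead of silently filling the page with zeroes.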