From patchwork Fri Jun 16 21:02:34 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 105763
Delivered-To: patch@linaro.org
Date: Fri, 16 Jun 2017 14:02:34 -0700
From: akpm@linux-foundation.org
To: torvalds@linux-foundation.org, mm-commits@vger.kernel.org,
    akpm@linux-foundation.org, mark.rutland@arm.com,
    kirill.shutemov@linux.intel.com, mgorman@suse.de,
    stable@vger.kernel.org, steve.capper@arm.com, vbabka@suse.cz,
    will.deacon@arm.com
Subject: [patch 3/5] mm: numa: avoid waiting on freed migrated pages
Message-ID: <5944476a.tWJw/NF4KYzrLQGi%akpm@linux-foundation.org>
User-Agent: Heirloom mailx 12.5 6/20/10
MIME-Version: 1.0
Sender: stable-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Mark Rutland
Subject: mm: numa: avoid waiting on freed migrated pages

In do_huge_pmd_numa_page(), we attempt to handle a migrating thp pmd by
waiting until the pmd is unlocked before we return and retry.
However, we can race with migrate_misplaced_transhuge_page():

	// do_huge_pmd_numa_page    // migrate_misplaced_transhuge_page()
	// Holds 0 refs on page     // Holds 2 refs on page

	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
	/* ... */
	if (pmd_trans_migrating(*vmf->pmd)) {
		page = pmd_page(*vmf->pmd);
		spin_unlock(vmf->ptl);
				ptl = pmd_lock(mm, pmd);
				if (page_count(page) != 2) {
					/* roll back */
				}
				/* ... */
				mlock_migrate_page(new_page, page);
				/* ... */
				spin_unlock(ptl);
				put_page(page);
				put_page(page); // page freed here
		wait_on_page_locked(page);
		goto out;
	}

This can result in the freed page having its waiters flag set
unexpectedly, which trips the PAGE_FLAGS_CHECK_AT_PREP checks in the page
alloc/free functions.  This has been observed on arm64 KVM guests.

We can avoid this by having do_huge_pmd_numa_page() take a reference on
the page before dropping the pmd lock, mirroring what we do in
__migration_entry_wait().

When we hit the race, migrate_misplaced_transhuge_page() will see the
reference and abort the migration, as it may do today in other cases.

Fixes: b8916634b77bffb2 ("mm: Prevent parallel splits during THP migration")
Link: http://lkml.kernel.org/r/1497349722-6731-2-git-send-email-will.deacon@arm.com
Signed-off-by: Mark Rutland
Signed-off-by: Will Deacon
Acked-by: Steve Capper
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
Cc: Mel Gorman
Cc:
Signed-off-by: Andrew Morton
---

 mm/huge_memory.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff -puN mm/huge_memory.c~mm-numa-avoid-waiting-on-freed-migrated-pages mm/huge_memory.c
--- a/mm/huge_memory.c~mm-numa-avoid-waiting-on-freed-migrated-pages
+++ a/mm/huge_memory.c
@@ -1426,8 +1426,11 @@ int do_huge_pmd_numa_page(struct vm_faul
 	 */
 	if (unlikely(pmd_trans_migrating(*vmf->pmd))) {
 		page = pmd_page(*vmf->pmd);
+		if (!get_page_unless_zero(page))
+			goto out_unlock;
 		spin_unlock(vmf->ptl);
 		wait_on_page_locked(page);
+		put_page(page);
 		goto out;
 	}
 
@@ -1459,9 +1462,12 @@ int do_huge_pmd_numa_page(struct vm_faul
 
 	/* Migration could have started since the pmd_trans_migrating check */
 	if (!page_locked) {
+		page_nid = -1;
+		if (!get_page_unless_zero(page))
+			goto out_unlock;
 		spin_unlock(vmf->ptl);
 		wait_on_page_locked(page);
-		page_nid = -1;
+		put_page(page);
 		goto out;
 	}