From patchwork Tue Jun  6 17:58:36 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 103186
From: Will Deacon
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: mark.rutland@arm.com, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, Punit.Agrawal@arm.com,
	mgorman@suse.de, steve.capper@arm.com, Will Deacon
Subject: [PATCH 3/3] mm: migrate: Stabilise page count when migrating
	transparent hugepages
Date: Tue, 6 Jun 2017 18:58:36 +0100
Message-Id: <1496771916-28203-4-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1496771916-28203-1-git-send-email-will.deacon@arm.com>
References: <1496771916-28203-1-git-send-email-will.deacon@arm.com>
When migrating a transparent hugepage, migrate_misplaced_transhuge_page
guards against a concurrent fast GUP of the page by checking that the
page count is equal to 2 before and after installing the new pmd. If the
page count changes, the pmd is reverted to the original entry; however,
there is a small window during which the new (possibly writable) pmd is
installed and the underlying page can be written by userspace. Restoring
the old pmd could therefore result in loss of data.

This patch fixes the problem by freezing the page count whilst updating
the page tables, which protects against a concurrent fast GUP without
the need to restore the old pmd in the failure case (since the page
count can no longer change under our feet).

Cc: Mel Gorman
Signed-off-by: Will Deacon
Acked-by: Kirill A. Shutemov
---
 mm/migrate.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

-- 
2.1.4

diff --git a/mm/migrate.c b/mm/migrate.c
index 89a0a1707f4c..8b21f1b1ec6e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1913,7 +1913,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	int page_lru = page_is_file_cache(page);
 	unsigned long mmun_start = address & HPAGE_PMD_MASK;
 	unsigned long mmun_end = mmun_start + HPAGE_PMD_SIZE;
-	pmd_t orig_entry;
 
 	/*
 	 * Rate-limit the amount of data that is being migrated to a node.
@@ -1956,8 +1955,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	/* Recheck the target PMD */
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
 	ptl = pmd_lock(mm, pmd);
-	if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
-fail_putback:
+	if (unlikely(!pmd_same(*pmd, entry) || !page_ref_freeze(page, 2))) {
 		spin_unlock(ptl);
 		mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
 
@@ -1979,7 +1977,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 		goto out_unlock;
 	}
 
-	orig_entry = *pmd;
 	entry = mk_huge_pmd(new_page, vma->vm_page_prot);
 	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 
@@ -1996,15 +1993,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	set_pmd_at(mm, mmun_start, pmd, entry);
 	update_mmu_cache_pmd(vma, address, &entry);
 
-	if (page_count(page) != 2) {
-		set_pmd_at(mm, mmun_start, pmd, orig_entry);
-		flush_pmd_tlb_range(vma, mmun_start, mmun_end);
-		mmu_notifier_invalidate_range(mm, mmun_start, mmun_end);
-		update_mmu_cache_pmd(vma, address, &entry);
-		page_remove_rmap(new_page, true);
-		goto fail_putback;
-	}
-
+	page_ref_unfreeze(page, 2);
 	mlock_migrate_page(new_page, page);
 	page_remove_rmap(page, true);
 	set_page_owner_migrate_reason(new_page, MR_NUMA_MISPLACED);