From patchwork Tue Jun 13 10:28:42 2017
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 104380
From: Will Deacon
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: mark.rutland@arm.com, akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com, Punit.Agrawal@arm.com,
	mgorman@suse.de, steve.capper@arm.com, vbabka@suse.cz,
	Will Deacon
Subject: [PATCH v2 3/3] mm: migrate: Stabilise page count when migrating transparent hugepages
Date: Tue, 13 Jun 2017 11:28:42 +0100
Message-Id: <1497349722-6731-4-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1497349722-6731-1-git-send-email-will.deacon@arm.com>
References: <1497349722-6731-1-git-send-email-will.deacon@arm.com>
When migrating a transparent hugepage, migrate_misplaced_transhuge_page()
guards itself against a concurrent fast GUP of the page by checking that
the page count is equal to 2 before and after installing the new pmd. If
the page count changes, then the pmd is reverted back to the original
entry. However, there is a small window where the new (possibly writable)
pmd is installed and the underlying page could be written by userspace.
Restoring the old pmd could therefore result in loss of data.

This patch fixes the problem by freezing the page count whilst updating
the page tables, which protects against a concurrent fast GUP without the
need to restore the old pmd in the failure case (since the page count can
no longer change under our feet).

Cc: Mel Gorman
Acked-by: Kirill A. Shutemov
Signed-off-by: Will Deacon
---
 mm/migrate.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

-- 
2.1.4

diff --git a/mm/migrate.c b/mm/migrate.c
index 89a0a1707f4c..8b21f1b1ec6e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1913,7 +1913,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	int page_lru = page_is_file_cache(page);
 	unsigned long mmun_start = address & HPAGE_PMD_MASK;
 	unsigned long mmun_end = mmun_start + HPAGE_PMD_SIZE;
-	pmd_t orig_entry;
 
 	/*
 	 * Rate-limit the amount of data that is being migrated to a node.
@@ -1956,8 +1955,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	/* Recheck the target PMD */
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
 	ptl = pmd_lock(mm, pmd);
-	if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
-fail_putback:
+	if (unlikely(!pmd_same(*pmd, entry) || !page_ref_freeze(page, 2))) {
 		spin_unlock(ptl);
 		mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
 
@@ -1979,7 +1977,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 		goto out_unlock;
 	}
 
-	orig_entry = *pmd;
 	entry = mk_huge_pmd(new_page, vma->vm_page_prot);
 	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 
@@ -1996,15 +1993,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	set_pmd_at(mm, mmun_start, pmd, entry);
 	update_mmu_cache_pmd(vma, address, &entry);
 
-	if (page_count(page) != 2) {
-		set_pmd_at(mm, mmun_start, pmd, orig_entry);
-		flush_pmd_tlb_range(vma, mmun_start, mmun_end);
-		mmu_notifier_invalidate_range(mm, mmun_start, mmun_end);
-		update_mmu_cache_pmd(vma, address, &entry);
-		page_remove_rmap(new_page, true);
-		goto fail_putback;
-	}
-
+	page_ref_unfreeze(page, 2);
 	mlock_migrate_page(new_page, page);
 	page_remove_rmap(page, true);
 	set_page_owner_migrate_reason(new_page, MR_NUMA_MISPLACED);
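For readers unfamiliar with the freeze/unfreeze idiom the patch relies on, here is a minimal user-space sketch of the semantics. This is not kernel code: `page_sketch`, `ref_freeze`, `ref_unfreeze` and `get_page_unless_zero_sketch` are illustrative stand-ins (using C11 atomics) for the kernel's `page_ref_freeze()`, `page_ref_unfreeze()` and the `get_page_unless_zero()` check that fast GUP performs. The point is that freezing atomically replaces an expected refcount with 0, so any concurrent fast-GUP-style lookup sees a zero count and backs off instead of taking a reference behind the migrator's back.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative stand-in for struct page's _refcount. */
struct page_sketch {
	atomic_int refcount;
};

/*
 * Sketch of page_ref_freeze(): succeed only if the count is exactly
 * the expected value, atomically replacing it with 0. While frozen,
 * no one can take a new reference.
 */
static bool ref_freeze(struct page_sketch *p, int expected)
{
	int old = expected;
	return atomic_compare_exchange_strong(&p->refcount, &old, 0);
}

/* Sketch of page_ref_unfreeze(): restore the count after the update. */
static void ref_unfreeze(struct page_sketch *p, int count)
{
	atomic_store(&p->refcount, count);
}

/*
 * Sketch of what fast GUP effectively does: take a reference only if
 * the count is non-zero. Against a frozen page this always fails, so
 * the migrator's page tables cannot be raced with.
 */
static bool get_page_unless_zero_sketch(struct page_sketch *p)
{
	int v = atomic_load(&p->refcount);

	while (v != 0) {
		if (atomic_compare_exchange_weak(&p->refcount, &v, v + 1))
			return true;
	}
	return false;
}
```

With this model, the patch's flow is: `ref_freeze(page, 2)` under the pmd lock (failure means someone else holds an extra reference, so bail out without ever having exposed a new pmd), install the new pmd, then `ref_unfreeze(page, 2)` — so there is no window in which a writable mapping exists while the old pmd might still be restored.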