From patchwork Wed Sep 27 15:49:29 2017
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 114375
From: Will Deacon <will.deacon@arm.com>
To: peterz@infradead.org, paulmck@linux.vnet.ibm.com,
    kirill.shutemov@linux.intel.com
Cc: linux-kernel@vger.kernel.org, ynorov@caviumnetworks.com,
    rruigrok@codeaurora.org, linux-arch@vger.kernel.org,
    akpm@linux-foundation.org, catalin.marinas@arm.com,
    Will Deacon <will.deacon@arm.com>
Subject: [RFC PATCH 2/2] mm: page_vma_mapped: Ensure pmd is loaded with
 READ_ONCE outside of lock
Date: Wed, 27 Sep 2017 16:49:29 +0100
Message-Id: <1506527369-19535-3-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1506527369-19535-1-git-send-email-will.deacon@arm.com>
References: <1506527369-19535-1-git-send-email-will.deacon@arm.com>

Loading the pmd without holding the pmd_lock exposes us to races with
concurrent updaters of the page tables but, worse still, it also allows
the compiler to cache the pmd value in a register and reuse it later on,
even if we've performed a READ_ONCE in between and seen a more recent
value.
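As a minimal userspace sketch of that compiler behaviour (an
illustration only, not the kernel code: READ_ONCE is approximated by a
volatile cast, and entry/use_table are hypothetical stand-ins for
*pvmw->pmd and pte_offset_map):

#include <stdint.h>
#include <stdio.h>

/* Approximation of the kernel's READ_ONCE(): force a real load. */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

static uint64_t entry;  /* stand-in for *pvmw->pmd; concurrently updated in real life */

static void use_table(uint64_t e)  /* stand-in for pte_offset_map() */
{
        printf("walking table entry %#llx\n", (unsigned long long)e);
}

static void walk(void)
{
        if (entry & 2)                  /* plain load #1: pmd_trans_huge()-style check */
                return;

        if (!READ_ONCE(entry))          /* check_pmd()-style re-check: forced fresh load */
                return;

        /*
         * Plain load #2: the compiler may fold this with load #1 and
         * reuse that (possibly all-zeroes) register value, ignoring
         * the newer value the READ_ONCE above observed.
         */
        use_table(entry);
}

int main(void)
{
        entry = 0x1000;                 /* pretend a valid table entry appeared */
        walk();
        return 0;
}

Both plain loads of 'entry' may legally be folded into a single register
read, so the stale value from the first check can resurface after the
READ_ONCE has already seen a newer one.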
In the case of page_vma_mapped_walk, this leads to the following crash
when the pmd loaded for the initial pmd_trans_huge check is all zeroes
and a subsequent valid table entry is loaded by check_pmd. We then
proceed into map_pte, but the compiler reuses the zero entry inside
pte_offset_map, resulting in a junk pointer being installed in
pvmw->pte:

[ 254.032812] PC is at check_pte+0x20/0x170
[ 254.032948] LR is at page_vma_mapped_walk+0x2e0/0x540
[...]
[ 254.036114] Process doio (pid: 2463, stack limit = 0xffff00000f2e8000)
[ 254.036361] Call trace:
[ 254.038977] [] check_pte+0x20/0x170
[ 254.039137] [] page_vma_mapped_walk+0x2e0/0x540
[ 254.039332] [] page_mkclean_one+0xac/0x278
[ 254.039489] [] rmap_walk_file+0xf0/0x238
[ 254.039642] [] rmap_walk+0x64/0xa0
[ 254.039784] [] page_mkclean+0x90/0xa8
[ 254.040029] [] clear_page_dirty_for_io+0x84/0x2a8
[ 254.040311] [] mpage_submit_page+0x34/0x98
[ 254.040518] [] mpage_process_page_bufs+0x164/0x170
[ 254.040743] [] mpage_prepare_extent_to_map+0x134/0x2b8
[ 254.040969] [] ext4_writepages+0x484/0xe30
[ 254.041175] [] do_writepages+0x44/0xe8
[ 254.041372] [] __filemap_fdatawrite_range+0xbc/0x110
[ 254.041568] [] file_write_and_wait_range+0x48/0xd8
[ 254.041739] [] ext4_sync_file+0x80/0x4b8
[ 254.041907] [] vfs_fsync_range+0x64/0xc0
[ 254.042106] [] SyS_msync+0x194/0x1e8

This patch fixes the problem by ensuring that READ_ONCE is used before
the initial checks on the pmd, and this value is subsequently used when
checking whether or not the pmd is present. check_pmd is removed and
the pmd_present check is inlined directly.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 mm/page_vma_mapped.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

-- 
2.1.4

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 6a03946469a9..6b85f5464246 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -6,17 +6,6 @@
 
 #include "internal.h"
 
-static inline bool check_pmd(struct page_vma_mapped_walk *pvmw)
-{
-        pmd_t pmde;
-        /*
-         * Make sure we don't re-load pmd between present and !trans_huge check.
-         * We need a consistent view.
-         */
-        pmde = READ_ONCE(*pvmw->pmd);
-        return pmd_present(pmde) && !pmd_trans_huge(pmde);
-}
-
 static inline bool not_found(struct page_vma_mapped_walk *pvmw)
 {
         page_vma_mapped_walk_done(pvmw);
@@ -116,6 +105,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
         pgd_t *pgd;
         p4d_t *p4d;
         pud_t *pud;
+        pmd_t pmde;
 
         /* The only possible pmd mapping has been handled on last iteration */
         if (pvmw->pmd && !pvmw->pte)
@@ -148,7 +138,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
         if (!pud_present(*pud))
                 return false;
         pvmw->pmd = pmd_offset(pud, pvmw->address);
-        if (pmd_trans_huge(*pvmw->pmd) || is_pmd_migration_entry(*pvmw->pmd)) {
+        /*
+         * Make sure the pmd value isn't cached in a register by the
+         * compiler and used as a stale value after we've observed a
+         * subsequent update.
+         */
+        pmde = READ_ONCE(*pvmw->pmd);
+        if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
                 pvmw->ptl = pmd_lock(mm, pvmw->pmd);
                 if (likely(pmd_trans_huge(*pvmw->pmd))) {
                         if (pvmw->flags & PVMW_MIGRATION)
@@ -175,9 +171,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
                         spin_unlock(pvmw->ptl);
                         pvmw->ptl = NULL;
                 }
-        } else {
-                if (!check_pmd(pvmw))
-                        return false;
+        } else if (!pmd_present(pmde)) {
+                return false;
         }
 
         if (!map_pte(pvmw))
                 goto next_pte;
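Reduced to a pattern, the lockless scheme after this patch looks like
the sketch below (hypothetical stand-ins, not the kernel's code: huge()
and present() play the roles of pmd_trans_huge() and pmd_present()):

#include <stdbool.h>
#include <stdint.h>

#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

static bool huge(uint64_t e)    { return e & 2; }  /* pmd_trans_huge() stand-in */
static bool present(uint64_t e) { return e & 1; }  /* pmd_present() stand-in */

static bool walk_pmd(uint64_t *pmdp)
{
        uint64_t pmde = READ_ONCE(*pmdp);  /* the only lockless load */

        if (huge(pmde)) {
                /*
                 * The real code takes pmd_lock() here and re-checks
                 * *pmdp under the lock before trusting it.
                 */
                return true;
        }

        if (!present(pmde))     /* inlined present check on the same snapshot */
                return false;

        return true;            /* safe to descend to the pte level */
}

int main(void)
{
        uint64_t e = 1;
        return !walk_pmd(&e);
}

Every lockless decision is made against the single READ_ONCE snapshot
'pmde', so there is no second plain load whose cached register value
the compiler could substitute.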