From patchwork Mon Dec 16 20:45:43 2019
X-Patchwork-Submitter: Ajay Kaher
X-Patchwork-Id: 181714
From: Ajay Kaher <akaher@vmware.com>
Cc: "Aneesh Kumar K.V", Catalin Marinas, Naoya Horiguchi, Mark Rutland,
    Hillf Danton, Michal Hocko, Mike Kravetz, Vlastimil Babka
Subject: [PATCH v3 3/8] mm, gup: remove broken VM_BUG_ON_PAGE compound check for hugepages
Date: Tue, 17 Dec 2019 02:15:43 +0530
Message-ID: <1576529149-14269-4-git-send-email-akaher@vmware.com>
In-Reply-To: <1576529149-14269-1-git-send-email-akaher@vmware.com>
References: <1576529149-14269-1-git-send-email-akaher@vmware.com>
X-Mailing-List: stable@vger.kernel.org

From: Will Deacon

commit a3e328556d41bb61c55f9dfcc62d6a826ea97b85 upstream.

When operating on hugepages with DEBUG_VM enabled, the GUP code checks
the compound head for each tail page prior to calling
page_cache_add_speculative.  This is broken, because on the fast-GUP
path (where we don't hold any page table locks) we can be racing with a
concurrent invocation of split_huge_page_to_list.

split_huge_page_to_list deals with this race by using page_ref_freeze
to freeze the page and force concurrent GUPs to fail whilst the
component pages are modified.  This modification includes clearing the
compound_head field for the tail pages, so checking this prior to a
successful call to page_cache_add_speculative can lead to false
positives.

In fact, page_cache_add_speculative *already* has this check once the
page refcount has been successfully updated, so we can simply remove
the broken calls to VM_BUG_ON_PAGE.

Link: http://lkml.kernel.org/r/20170522133604.11392-2-punit.agrawal@arm.com
Signed-off-by: Will Deacon
Signed-off-by: Punit Agrawal
Acked-by: Steve Capper
Acked-by: Kirill A. Shutemov
Cc: Aneesh Kumar K.V
Cc: Catalin Marinas
Cc: Naoya Horiguchi
Cc: Mark Rutland
Cc: Hillf Danton
Cc: Michal Hocko
Cc: Mike Kravetz
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Srivatsa S. Bhat (VMware)
Signed-off-by: Ajay Kaher
Signed-off-by: Vlastimil Babka
---
 mm/gup.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 45c544b..6e7cfaa 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1136,7 +1136,6 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	tail = page;
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
@@ -1183,7 +1182,6 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 	page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	tail = page;
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
@@ -1226,7 +1224,6 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 	page = head + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
 	tail = page;
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
-- 
2.7.4