From patchwork Wed Mar 14 19:29:37 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 131724
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: mark.rutland@arm.com, will.deacon@arm.com, catalin.marinas@arm.com,
	marc.zyngier@arm.com, Ard Biesheuvel, Daniel Vacek, Mel Gorman,
	Michal Hocko, Paul Burton, Pavel Tatashin, Vlastimil Babka,
	Andrew Morton, Linus Torvalds
Subject: [PATCH v2] Revert "mm/page_alloc: fix memmap_init_zone pageblock alignment"
Date: Wed, 14 Mar 2018 19:29:37 +0000
Message-Id: <20180314192937.12888-1-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.15.1
X-Mailing-List: linux-kernel@vger.kernel.org

This reverts commit 864b75f9d6b0100bb24fdd9a20d156e7cda9b5ae.
Commit 864b75f9d6b0 ("mm/page_alloc: fix memmap_init_zone pageblock
alignment") modified the logic in memmap_init_zone() to initialize
struct pages associated with invalid PFNs, to appease a VM_BUG_ON()
in move_freepages() which is redundant by its own admission, and
which dereferences struct page fields to obtain the zone without
checking whether the struct pages in question are valid to begin with.

Commit 864b75f9d6b0 only makes it worse, since the rounding it does
may cause pfn to assume the same value it had in a prior iteration of
the loop, resulting in an infinite loop and a hang very early in the
boot.

Also, since it doesn't perform the same rounding on start_pfn itself
but only on intermediate values following an invalid PFN, we may still
hit the same VM_BUG_ON() as before.

So instead, let's fix this at the core, and ensure that the BUG check
doesn't dereference struct page fields of invalid pages.

Fixes: 864b75f9d6b0 ("mm/page_alloc: fix memmap_init_zone pageblock alignment")
Cc: Daniel Vacek
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Paul Burton
Cc: Pavel Tatashin
Cc: Vlastimil Babka
Cc: Andrew Morton
Cc: Linus Torvalds
Signed-off-by: Ard Biesheuvel
---
 mm/page_alloc.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

-- 
2.15.1

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3d974cb2a1a1..635d7dd29d7f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1910,7 +1910,9 @@ static int move_freepages(struct zone *zone,
 	 * Remove at a later date when no bug reports exist related to
 	 * grouping pages by mobility
 	 */
-	VM_BUG_ON(page_zone(start_page) != page_zone(end_page));
+	VM_BUG_ON(pfn_valid(page_to_pfn(start_page)) &&
+		  pfn_valid(page_to_pfn(end_page)) &&
+		  page_zone(start_page) != page_zone(end_page));
 #endif
 
 	if (num_movable)
@@ -5359,14 +5361,9 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			/*
 			 * Skip to the pfn preceding the next valid one (or
 			 * end_pfn), such that we hit a valid pfn (or end_pfn)
-			 * on our next iteration of the loop. Note that it needs
-			 * to be pageblock aligned even when the region itself
-			 * is not. move_freepages_block() can shift ahead of
-			 * the valid region but still depends on correct page
-			 * metadata.
+			 * on our next iteration of the loop.
 			 */
-			pfn = (memblock_next_valid_pfn(pfn, end_pfn) &
-			       ~(pageblock_nr_pages-1)) - 1;
+			pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
 #endif
 			continue;
 		}
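
As a stand-alone illustration of the hang described in the commit message,
here is a small userspace sketch (not kernel code: the pageblock size, the
pfn hole and the helpers valid_pfn()/next_valid_pfn() are invented stand-ins
for pfn_valid() and memblock_next_valid_pfn()) showing how rounding the next
valid pfn down to a pageblock boundary can move the iterator back into the
very hole it tried to skip, so the loop never makes forward progress:

/*
 * Stand-alone model of the memmap_init_zone() skip logic, for
 * illustration only.  All values and helpers below are made up.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES	512UL	/* assumed pageblock size in pages */

/* Model: pfns 0x2000..0x217f are a hole in the memory map. */
static bool valid_pfn(unsigned long pfn)
{
	return pfn < 0x2000 || pfn >= 0x2180;
}

/* Model of memblock_next_valid_pfn(): first valid pfn strictly above @pfn. */
static unsigned long next_valid_pfn(unsigned long pfn, unsigned long end_pfn)
{
	do {
		pfn++;
	} while (pfn < end_pfn && !valid_pfn(pfn));
	return pfn;
}

int main(void)
{
	unsigned long start_pfn = 0x2100, end_pfn = 0x2400, pfn;
	int iter = 0;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		if (++iter > 8) {
			printf("stuck: pfn keeps rewinding to %#lx\n", pfn);
			return 1;
		}
		if (!valid_pfn(pfn)) {
			/*
			 * The reverted logic: skip to the next valid pfn, but
			 * round it down to a pageblock boundary first.  Here
			 * 0x2180 & ~0x1ff == 0x2000, which is *below* the pfn
			 * we started from, so after the loop's pfn++ we are
			 * back inside the same hole and repeat forever.
			 */
			pfn = (next_valid_pfn(pfn, end_pfn) &
			       ~(PAGEBLOCK_NR_PAGES - 1)) - 1;
			continue;
		}
		/* memmap_init_zone() would initialise the struct page here. */
	}
	printf("finished normally at pfn %#lx\n", pfn);
	return 0;
}

With the hole modelled above, the masked skip rewinds pfn to 0x2000 on every
pass, matching the early-boot hang described; dropping the alignment mask (as
this revert does) lands pfn on the first valid pfn instead, and the loop
terminates.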