From patchwork Thu Jul 5 09:28:27 2012
X-Patchwork-Submitter: Rabin Vincent
X-Patchwork-Id: 9853
In-Reply-To: <4FAD89DC.2090307@codeaurora.org>
References: <4FAC200D.2080306@codeaurora.org> <02fc01cd2f50$5d77e4c0$1867ae40$%szyprowski@samsung.com> <4FAD89DC.2090307@codeaurora.org>
From: Rabin Vincent
Date: Thu, 5 Jul 2012 14:58:27 +0530
To: Marek Szyprowski, Michal Nazarewicz
Cc: linux-arm-msm@vger.kernel.org, LKML, linaro-mm-sig@lists.linaro.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [Linaro-mm-sig] Bad use of highmem with buffer_migrate_page?
List-Id: "Unified memory management interest group."

On Sat, May 12, 2012 at 3:21 AM, Laura Abbott wrote:
> On 5/11/2012 1:30 AM, Marek Szyprowski wrote:
>> On Thursday, May 10, 2012 10:08 PM Laura Abbott wrote:
>>> I did a backport of the Contiguous Memory Allocator to a 3.0.8 tree. I
>>> wrote a fairly simple test case that, in 1MB chunks, allocs up to 40MB
>>> from a reserved area, maps, writes, unmaps and then frees in an infinite
>>> loop. When running this with another program in parallel to put some
>>> stress on the filesystem, I hit data aborts in the filesystem/journal
>>> layer, although not always with the same backtrace.
>>> As an example:
>>>
>>> [] (__ext4_check_dir_entry+0x20/0x184) from [] (add_dirent_to_buf+0x70/0x2ac)
>>> [] (add_dirent_to_buf+0x70/0x2ac) from [] (ext4_add_entry+0xd8/0x4bc)
>>> [] (ext4_add_entry+0xd8/0x4bc) from [] (ext4_add_nondir+0x14/0x64)
>>> [] (ext4_add_nondir+0x14/0x64) from [] (ext4_create+0xd8/0x120)
>>> [] (ext4_create+0xd8/0x120) from [] (vfs_create+0x74/0xa4)
>>> [] (vfs_create+0x74/0xa4) from [] (do_last+0x588/0x8d4)
>>> [] (do_last+0x588/0x8d4) from [] (path_openat+0xc4/0x394)
>>> [] (path_openat+0xc4/0x394) from [] (do_filp_open+0x30/0x7c)
>>> [] (do_filp_open+0x30/0x7c) from [] (do_sys_open+0xd8/0x174)
>>> [] (do_sys_open+0xd8/0x174) from [] (ret_fast_syscall+0x0/0x30)
>>>
>>> Every panic had the same issue: a struct buffer_head [1] had a b_data
>>> that was unexpectedly NULL.
>>>
>>> During the course of CMA, buffer_migrate_page could be called to migrate
>>> from a CMA page to a new page. buffer_migrate_page calls set_bh_page [2]
>>> to set the new page for the buffer_head. If the new page is a highmem
>>> page, though, bh->b_data ends up as NULL, which could produce the
>>> panics seen above.
>>>
>>> This seems to indicate that highmem pages are not appropriate for
>>> use as pages to migrate to. The following made the problem go away for
>>> me:
>>>
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -5753,7 +5753,7 @@ static struct page *
>>>  __alloc_contig_migrate_alloc(struct page *page, unsigned long private,
>>>  			     int **resultp)
>>>  {
>>> -	return alloc_page(GFP_HIGHUSER_MOVABLE);
>>> +	return alloc_page(GFP_USER | __GFP_MOVABLE);
>>>  }
>>>
>>> Does this seem like an actual issue or is this an artifact of my
>>> backport to 3.0? I'm not familiar enough with the filesystem layer to be
>>> able to tell where highmem can actually be used.
>>
>> I will need to investigate this further as this issue doesn't appear on
>> v3.3+ kernels, but I remember I saw something similar when I tried CMA
>> backported to v3.0.

The problem is still present on the latest mainline. The filesystem layer
expects that the pages in the block device's mapping are not in highmem
(the mapping's gfp mask is set in bdget()), but CMA replaces lowmem pages
with highmem pages, leading to the crashes.

The above fix should work, but perhaps the following is preferable, since
it should allow moving highmem pages to other highmem pages?

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4403009..4a4f921 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5635,7 +5635,12 @@ static struct page *
 __alloc_contig_migrate_alloc(struct page *page, unsigned long private,
 			     int **resultp)
 {
-	return alloc_page(GFP_HIGHUSER_MOVABLE);
+	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;
+
+	if (PageHighMem(page))
+		gfp_mask |= __GFP_HIGHMEM;
+
+	return alloc_page(gfp_mask);
 }
 
 /* [start, end) must belong to a single zone. */