From patchwork Tue Mar  5 06:57:58 2013
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 15227
From: Marek Szyprowski
To: linux-mm@kvack.org, linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Date: Tue, 05 Mar 2013 07:57:58 +0100
Message-id: <1362466679-17111-5-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1362466679-17111-1-git-send-email-m.szyprowski@samsung.com>
References: <1362466679-17111-1-git-send-email-m.szyprowski@samsung.com>
Cc: Arnd Bergmann, Bartlomiej Zolnierkiewicz, Mel Gorman, Michal Nazarewicz, Minchan Kim, Kyungmin Park, Andrew Morton
Subject: [Linaro-mm-sig] [RFC/PATCH 4/5] mm: get_user_pages: migrate out CMA pages when FOLL_DURABLE flag is set

When __get_user_pages() is called with the FOLL_DURABLE flag, ensure that no
page in a CMA pageblock gets pinned. This works around the permanent migration
failures caused by pages being held by a get_user_pages() call for a long
period of time.

Signed-off-by: Marek Szyprowski
Signed-off-by: Kyungmin Park
---
 mm/internal.h | 12 ++++++++++++
 mm/memory.c   | 43 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/mm/internal.h b/mm/internal.h
index 8562de0..a290d04 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -105,6 +105,18 @@ extern void prep_compound_page(struct page *page, unsigned long order);
 extern bool is_free_buddy_page(struct page *page);
 #endif
 
+#ifdef CONFIG_CMA
+static inline int is_cma_page(struct page *page)
+{
+	unsigned mt = get_pageblock_migratetype(page);
+	if (mt == MIGRATE_ISOLATE || mt == MIGRATE_CMA)
+		return true;
+	return false;
+}
+#else
+#define is_cma_page(page) 0
+#endif
+
 #if defined CONFIG_COMPACTION || defined CONFIG_CMA
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 2b9c2dd..f81b273 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1650,6 +1650,45 @@ static inline int stack_guard_page(struct vm_area_struct *vma, unsigned long addr)
 }
 
 /**
+ * migrate_replace_cma_page() - migrate a page out of CMA pageblocks
+ * @page: source page to be migrated
+ *
+ * Returns either the old page (if migration was not possible) or the pointer
+ * to the newly allocated page (with an additional reference taken).
+ *
+ * get_user_pages() might take a reference to a page for a long period of time,
+ * which prevents such a page from being migrated. This is fatal to the
+ * preferred usage pattern of CMA pageblocks. This function replaces the given
+ * user page with a new one allocated from a NON-MOVABLE pageblock, so pinning
+ * a CMA page can be avoided.
+ */
+static inline struct page *migrate_replace_cma_page(struct page *page)
+{
+	struct page *newpage = alloc_page(GFP_HIGHUSER);
+
+	if (!newpage)
+		goto out;
+
+	/*
+	 * Take an additional reference to the new page to ensure it won't
+	 * get freed after the migration procedure ends.
+	 */
+	get_page_foll(newpage);
+
+	if (migrate_replace_page(page, newpage) == 0)
+		return newpage;
+
+	put_page(newpage);
+	__free_page(newpage);
+out:
+	/*
+	 * Migration errors in the case of get_user_pages() might not
+	 * be fatal to CMA itself, so better not fail here.
+	 */
+	return page;
+}
+
+/**
  * __get_user_pages() - pin user pages in memory
  * @tsk: task_struct of target task
  * @mm: mm_struct of target mm
@@ -1884,6 +1923,10 @@ long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		}
 		if (IS_ERR(page))
 			return i ? i : PTR_ERR(page);
+
+		if ((gup_flags & FOLL_DURABLE) && is_cma_page(page))
+			page = migrate_replace_cma_page(page);
+
 		if (pages) {
 			pages[i] = page;
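
For illustration only, here is a minimal sketch (not part of the patch) of how
a caller could request durable pinning with the new behaviour. It assumes the
FOLL_DURABLE flag introduced earlier in this series, and it calls
__get_user_pages() directly, which is only visible to core mm code through
mm/internal.h; the helper name pin_user_buffer_durably() is made up here.

/*
 * Illustrative sketch: pin a user buffer for a long period and ask
 * __get_user_pages() to migrate any CMA pages out of the way first.
 * FOLL_DURABLE comes from this RFC series and is not an upstream flag;
 * pin_user_buffer_durably() is a hypothetical helper living in mm/.
 */
#include <linux/mm.h>
#include <linux/sched.h>
#include "internal.h"

static long pin_user_buffer_durably(unsigned long start, unsigned long nr_pages,
				    struct page **pages)
{
	long ret;

	/* __get_user_pages() requires mmap_sem held for read */
	down_read(&current->mm->mmap_sem);
	ret = __get_user_pages(current, current->mm, start, nr_pages,
			       FOLL_GET | FOLL_TOUCH | FOLL_WRITE | FOLL_DURABLE,
			       pages, NULL, NULL);
	up_read(&current->mm->mmap_sem);

	/*
	 * With FOLL_DURABLE set, any page that is_cma_page() reports as
	 * MIGRATE_CMA or MIGRATE_ISOLATE has been replaced by
	 * migrate_replace_cma_page() before being stored in pages[], so
	 * the long-lived references do not block future CMA allocations.
	 */
	return ret;
}

Note that the replacement page is allocated with plain GFP_HIGHUSER (without
__GFP_MOVABLE), which is what keeps it out of MOVABLE/CMA pageblocks and
matches the "NON-MOVABLE pageblock" wording in the kernel-doc above.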