From patchwork Tue Mar  5 06:57:55 2013
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 15224
From: Marek Szyprowski
To: linux-mm@kvack.org, linaro-mm-sig@lists.linaro.org,
 linux-kernel@vger.kernel.org
Date: Tue, 05 Mar 2013 07:57:55 +0100
Message-id: <1362466679-17111-2-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1362466679-17111-1-git-send-email-m.szyprowski@samsung.com>
References: <1362466679-17111-1-git-send-email-m.szyprowski@samsung.com>
Cc: Arnd Bergmann, Bartlomiej Zolnierkiewicz, Mel Gorman,
 Michal Nazarewicz, Minchan Kim, Kyungmin Park, Andrew Morton
Subject: [Linaro-mm-sig] [RFC/PATCH 1/5] mm: introduce migrate_replace_page()
 for migrating page to the given target

Introduce the migrate_replace_page() function for migrating a single page
to the given target page.
Signed-off-by: Marek Szyprowski
Signed-off-by: Kyungmin Park
---
 include/linux/migrate.h |  5 ++++
 mm/migrate.c            | 59 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index a405d3dc..3a8a6c1 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -35,6 +35,8 @@ enum migrate_reason {
 #ifdef CONFIG_MIGRATION
 
+extern int migrate_replace_page(struct page *oldpage, struct page *newpage);
+
 extern void putback_lru_pages(struct list_head *l);
 extern void putback_movable_pages(struct list_head *l);
 extern int migrate_page(struct address_space *,
@@ -57,6 +59,9 @@ extern int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct page *newpage, struct page *page);
 #else
 
+static inline int migrate_replace_page(struct page *oldpage,
+		struct page *newpage) { return -ENOSYS; }
+
 static inline void putback_lru_pages(struct list_head *l) {}
 static inline void putback_movable_pages(struct list_head *l) {}
 static inline int migrate_pages(struct list_head *l, new_page_t x,
diff --git a/mm/migrate.c b/mm/migrate.c
index 3bbaf5d..a2a6950 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1067,6 +1067,65 @@ out:
 	return rc;
 }
 
+/*
+ * migrate_replace_page
+ *
+ * The function takes one single page and a target page (newpage) and
+ * tries to migrate data to the target page. The caller must ensure that
+ * the source page is locked with one additional get_page() call, which
+ * will be freed during the migration. The caller also must release newpage
+ * if migration fails, otherwise the ownership of the newpage is taken.
+ * Source page is released if migration succeeds.
+ *
+ * Return: error code or 0 on success.
+ */
+int migrate_replace_page(struct page *page, struct page *newpage)
+{
+	struct zone *zone = page_zone(page);
+	unsigned long flags;
+	int ret = -EAGAIN;
+	int pass;
+
+	migrate_prep();
+
+	spin_lock_irqsave(&zone->lru_lock, flags);
+
+	if (PageLRU(page) &&
+	    __isolate_lru_page(page, ISOLATE_UNEVICTABLE) == 0) {
+		struct lruvec *lruvec = mem_cgroup_page_lruvec(page, zone);
+		del_page_from_lru_list(page, lruvec, page_lru(page));
+		spin_unlock_irqrestore(&zone->lru_lock, flags);
+	} else {
+		spin_unlock_irqrestore(&zone->lru_lock, flags);
+		return -EAGAIN;
+	}
+
+	/* page is now isolated, so release additional reference */
+	put_page(page);
+
+	for (pass = 0; pass < 10 && ret != 0; pass++) {
+		cond_resched();
+
+		if (page_count(page) == 1) {
+			/* page was freed from under us, so we are done */
+			ret = 0;
+			break;
+		}
+		ret = __unmap_and_move(page, newpage, 1, MIGRATE_SYNC);
+	}
+
+	if (ret == 0) {
+		/* take ownership of newpage and add it to lru */
+		putback_lru_page(newpage);
+	} else {
+		/* restore additional reference to the oldpage */
+		get_page(page);
+	}
+
+	putback_lru_page(page);
+	return ret;
+}
+
 #ifdef CONFIG_NUMA
 /*
  * Move a list of individual pages