From patchwork Mon Nov 12 08:59:42 2012
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 12809
From: Marek Szyprowski
To: linux-mm@kvack.org, linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Date: Mon, 12 Nov 2012 09:59:42 +0100
Message-id: <1352710782-25425-1-git-send-email-m.szyprowski@samsung.com>
Cc: Bartlomiej Zolnierkiewicz, Mel Gorman, Michal Nazarewicz, Minchan Kim, Kyungmin Park, Andrew Morton
Subject: [Linaro-mm-sig] [PATCH] mm: cma: allocate pages from CMA if NR_FREE_PAGES approaches low water mark

It has been observed that the system tends to keep a lot of free CMA pages even under very high memory pressure. The CMA fallback for movable pages is used very rarely, only when the system has completely run out of MOVABLE pages, which usually means that an out-of-memory event will be triggered very soon. To avoid this situation and make better use of CMA pages, a heuristic is introduced which turns on the CMA fallback for movable pages when the real number of free pages (excluding free CMA pages) approaches the low water mark.
Signed-off-by: Marek Szyprowski
Reviewed-by: Kyungmin Park
CC: Michal Nazarewicz
---
 mm/page_alloc.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fcb9719..90b51f3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1076,6 +1076,15 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
 {
 	struct page *page;
 
+#ifdef CONFIG_CMA
+	unsigned long nr_free = zone_page_state(zone, NR_FREE_PAGES);
+	unsigned long nr_cma_free = zone_page_state(zone, NR_FREE_CMA_PAGES);
+
+	if (migratetype == MIGRATE_MOVABLE && nr_cma_free &&
+	    nr_free - nr_cma_free < 2 * low_wmark_pages(zone))
+		migratetype = MIGRATE_CMA;
+#endif /* CONFIG_CMA */
+
 retry_reserve:
 	page = __rmqueue_smallest(zone, order, migratetype);