From patchwork Thu Nov 8 06:59:45 2012
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 12761
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: linux-mm@kvack.org, linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Date: Thu, 08 Nov 2012 07:59:45 +0100
Message-id: <1352357985-14869-1-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.9.5
Cc: Arnd Bergmann, Bartlomiej Zolnierkiewicz, Mel Gorman, Michal Nazarewicz,
 Minchan Kim, Kyungmin Park, Andrew Morton
Subject: [Linaro-mm-sig] [PATCH] mm: remove watermark hacks for CMA
Commits 2139cbe627b89 ("cma: fix counting of isolated pages") and
d95ea5d18e69951 ("cma: fix watermark checking") introduced a reliable
method of free page accounting when memory is being allocated from CMA
regions, so the workaround introduced earlier by commit 49f223a9cd96c72
("mm: trigger page reclaim in alloc_contig_range() to stabilise
watermarks") can finally be removed.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
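Note below the fold, not part of the commit message: the "reliable method
of free page accounting" referred to above is the CMA-aware watermark
check from d95ea5d18e69951, which discounts free CMA pages for
allocations that cannot use them. A minimal sketch of that idea follows;
watermark_ok, free_cma_pages and can_use_cma are illustrative stand-ins,
not the actual mm/page_alloc.c identifiers.

	/*
	 * Sketch only: illustrates the CMA-aware watermark check, not
	 * the exact mm/page_alloc.c code.  'can_use_cma' stands in for
	 * the ALLOC_CMA-style flag; both page counters are hypothetical
	 * parameters rather than real zone statistics.
	 */
	static bool watermark_ok(unsigned long free_pages,
				 unsigned long free_cma_pages,
				 unsigned long watermark,
				 bool can_use_cma)
	{
		/*
		 * An allocation that may not be satisfied from CMA
		 * pageblocks must not count free CMA pages against its
		 * watermark, otherwise it could be allowed to deplete
		 * all of the non-CMA free memory.
		 */
		if (!can_use_cma)
			free_pages -= free_cma_pages;

		return free_pages > watermark;
	}

With this in place the allocator never over-commits non-CMA memory, so
the artificial watermark bump removed by this patch is no longer needed.
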
 include/linux/mmzone.h |  9 --------
 mm/page_alloc.c        | 57 ------------------------------------------------
 2 files changed, 66 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c9fcd8f..f010b23 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -63,10 +63,8 @@ enum {
 
 #ifdef CONFIG_CMA
 # define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
-# define cma_wmark_pages(zone) zone->min_cma_pages
 #else
 # define is_migrate_cma(migratetype) false
-# define cma_wmark_pages(zone) 0
 #endif
 
 #define for_each_migratetype_order(order, type) \
@@ -372,13 +370,6 @@ struct zone {
 	/* see spanned/present_pages for more description */
 	seqlock_t		span_seqlock;
 #endif
-#ifdef CONFIG_CMA
-	/*
-	 * CMA needs to increase watermark levels during the allocation
-	 * process to make sure that the system is not starved.
-	 */
-	unsigned long		min_cma_pages;
-#endif
 	struct free_area	free_area[MAX_ORDER];
 
 #ifndef CONFIG_SPARSEMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 43ab09f..5028a18 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5217,10 +5217,6 @@ static void __setup_per_zone_wmarks(void)
 		zone->watermark[WMARK_LOW]  = min_wmark_pages(zone) + (tmp >> 2);
 		zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
 
-		zone->watermark[WMARK_MIN] += cma_wmark_pages(zone);
-		zone->watermark[WMARK_LOW] += cma_wmark_pages(zone);
-		zone->watermark[WMARK_HIGH] += cma_wmark_pages(zone);
-
 		setup_zone_migrate_reserve(zone);
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
@@ -5765,54 +5761,6 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	return ret > 0 ? 0 : ret;
 }
 
-/*
- * Update zone's cma pages counter used for watermark level calculation.
- */
-static inline void __update_cma_watermarks(struct zone *zone, int count)
-{
-	unsigned long flags;
-	spin_lock_irqsave(&zone->lock, flags);
-	zone->min_cma_pages += count;
-	spin_unlock_irqrestore(&zone->lock, flags);
-	setup_per_zone_wmarks();
-}
-
-/*
- * Trigger memory pressure bump to reclaim some pages in order to be able to
- * allocate 'count' pages in single page units. Does similar work as
- * __alloc_pages_slowpath() function.
- */
-static int __reclaim_pages(struct zone *zone, gfp_t gfp_mask, int count)
-{
-	enum zone_type high_zoneidx = gfp_zone(gfp_mask);
-	struct zonelist *zonelist = node_zonelist(0, gfp_mask);
-	int did_some_progress = 0;
-	int order = 1;
-
-	/*
-	 * Increase level of watermarks to force kswapd do his job
-	 * to stabilise at new watermark level.
-	 */
-	__update_cma_watermarks(zone, count);
-
-	/* Obey watermarks as if the page was being allocated */
-	while (!zone_watermark_ok(zone, 0, low_wmark_pages(zone), 0, 0)) {
-		wake_all_kswapd(order, zonelist, high_zoneidx, zone_idx(zone));
-
-		did_some_progress = __perform_reclaim(gfp_mask, order, zonelist,
-						      NULL);
-		if (!did_some_progress) {
-			/* Exhausted what can be done so it's blamo time */
-			out_of_memory(zonelist, gfp_mask, order, NULL, false);
-		}
-	}
-
-	/* Restore original watermark levels. */
-	__update_cma_watermarks(zone, -count);
-
-	return count;
-}
-
 /**
  * alloc_contig_range() -- tries to allocate given range of pages
  * @start:	start PFN to allocate
@@ -5921,11 +5869,6 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		goto done;
 	}
 
-	/*
-	 * Reclaim enough pages to make sure that contiguous allocation
-	 * will not starve the system.
-	 */
-	__reclaim_pages(zone, GFP_HIGHUSER_MOVABLE, end-start);
 
 	/* Grab isolated pages from freelists. */
 	outer_end = isolate_freepages_range(&cc, outer_start, end);
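
Further illustrative note, not part of the patch: with the reclaim bump
gone, alloc_contig_range() is driven exactly as before by its callers;
the CMA-aware watermark check alone keeps the system from being starved.
A hypothetical caller sketch (kernel context assumed; grab_contig_pages
is an invented helper name, while alloc_contig_range() and MIGRATE_CMA
are the real APIs):

	/*
	 * Hypothetical caller sketch: allocate 'count' contiguous pages
	 * from a CMA pageblock range starting at 'base_pfn'.  Error
	 * handling is simplified for illustration; a real caller would
	 * typically retry, since migration can fail transiently.
	 */
	static struct page *grab_contig_pages(unsigned long base_pfn,
					      unsigned long count)
	{
		int ret;

		ret = alloc_contig_range(base_pfn, base_pfn + count,
					 MIGRATE_CMA);
		if (ret)
			return NULL;	/* migration or isolation failed */

		return pfn_to_page(base_pfn);
	}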