From patchwork Thu Nov 3 08:58:19 2016
X-Patchwork-Submitter: Chen Feng
X-Patchwork-Id: 80619
X-Mailing-List: linux-kernel@vger.kernel.org
From: Chen Feng
Subject: [PATCH] mm: cma: improve utilization of cma pages
Date: Thu, 3 Nov 2016 16:58:19 +0800
Message-ID: <1478163499-110185-1-git-send-email-puck.chen@hisilicon.com>
X-Mailer: git-send-email 1.9.1

Currently, CMA pages can only be used as a fallback for movable
allocations: they are taken only once no movable pages are left, and the
same applies when the pcp lists are refilled. Instead, use the CMA
migrate type before the movable type, and let MIGRATE_CMA fall back to
MIGRATE_MOVABLE.

I have also seen Joonsoo Kim's approach of turning the CMA pages into a
zone of their own. It is a good idea, but while testing it the CMA zone
can be exhausted quickly; the zone then rebalances constantly, and the
slab scan and swap in/out rates become too high.
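To make the intended ordering concrete, below is a minimal user-space
sketch (illustrative only, not the mm/page_alloc.c code; the list names,
the helper and the free counts are made up for this example):

#include <stdio.h>
#include <stdbool.h>

/*
 * Same relative ordering as the patched enum: the CMA list sits below
 * the pcp types, and movable requests try it first.
 */
enum migrate_type { MT_UNMOVABLE, MT_MOVABLE, MT_RECLAIMABLE, MT_CMA, MT_TYPES };

/* Toy free-page counts standing in for the per-zone free lists. */
static unsigned long free_count[MT_TYPES] = {
	[MT_UNMOVABLE] = 10, [MT_MOVABLE] = 0, [MT_RECLAIMABLE] = 5, [MT_CMA] = 32,
};

/* Take one page from the given list; return the list used, or -1 if empty. */
static int take_from_list(enum migrate_type mt)
{
	if (free_count[mt] == 0)
		return -1;
	free_count[mt]--;
	return mt;
}

static int rmqueue_sketch(bool movable_request)
{
	/* Movable requests now start on the CMA list ... */
	enum migrate_type mt = movable_request ? MT_CMA : MT_UNMOVABLE;
	int served = take_from_list(mt);

	/* ... and the CMA type falls back to the movable type, not the
	 * other way around as before. */
	if (served < 0 && mt == MT_CMA)
		served = take_from_list(MT_MOVABLE);

	return served;
}

int main(void)
{
	/* Prints 3 (MT_CMA): the movable request is served from CMA first. */
	printf("movable request served from list %d\n", rmqueue_sketch(true));
	return 0;
}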
CC: Qiu xishi
Signed-off-by: Chen Feng
Reviewed-by: Fu Jun
---
 include/linux/gfp.h    |  3 +++
 include/linux/mmzone.h |  4 ++--
 mm/page_alloc.c        | 24 ++++++++----------------
 3 files changed, 13 insertions(+), 18 deletions(-)

-- 
1.9.1

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index f8041f9de..0bb8599 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -270,6 +270,9 @@ static inline int gfpflags_to_migratetype(const gfp_t gfp_flags)
 	BUILD_BUG_ON((1UL << GFP_MOVABLE_SHIFT) != ___GFP_MOVABLE);
 	BUILD_BUG_ON((___GFP_MOVABLE >> GFP_MOVABLE_SHIFT) != MIGRATE_MOVABLE);
 
+	if (IS_ENABLED(CONFIG_CMA) && gfp_flags & __GFP_MOVABLE)
+		return MIGRATE_CMA;
+
 	if (unlikely(page_group_by_mobility_disabled))
 		return MIGRATE_UNMOVABLE;
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0f088f3..c7875c1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -39,8 +39,6 @@ enum {
 	MIGRATE_UNMOVABLE,
 	MIGRATE_MOVABLE,
 	MIGRATE_RECLAIMABLE,
-	MIGRATE_PCPTYPES,	/* the number of types on the pcp lists */
-	MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
 #ifdef CONFIG_CMA
 	/*
 	 * MIGRATE_CMA migration type is designed to mimic the way
@@ -57,6 +55,8 @@ enum {
 	 */
 	MIGRATE_CMA,
 #endif
+	MIGRATE_PCPTYPES,	/* the number of types on the pcp lists */
+	MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
 #ifdef CONFIG_MEMORY_ISOLATION
 	MIGRATE_ISOLATE,	/* can't allocate from here */
 #endif
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8fd42aa..33ed6f3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1828,17 +1828,6 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 #endif
 };
 
-#ifdef CONFIG_CMA
-static struct page *__rmqueue_cma_fallback(struct zone *zone,
-					unsigned int order)
-{
-	return __rmqueue_smallest(zone, order, MIGRATE_CMA);
-}
-#else
-static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
-					unsigned int order) { return NULL; }
-#endif
-
 /*
  * Move the free pages in a range to the free lists of the requested type.
  * Note that start_page and end_pages are not aligned on a pageblock
@@ -2171,10 +2160,13 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
 	struct page *page;
 
 	page = __rmqueue_smallest(zone, order, migratetype);
-	if (unlikely(!page)) {
-		if (migratetype == MIGRATE_MOVABLE)
-			page = __rmqueue_cma_fallback(zone, order);
+	/* Fallback cma type to movable here */
+	if (!page && migratetype == MIGRATE_CMA) {
+		migratetype = MIGRATE_MOVABLE;
+		page = __rmqueue_smallest(zone, order, migratetype);
+	}
 
+	if (unlikely(!page)) {
 		if (!page)
 			page = __rmqueue_fallback(zone, order, migratetype);
 	}
@@ -2787,7 +2779,7 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 		if (alloc_harder)
 			return true;
 
-		for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
+		for (mt = 0; mt < MIGRATE_PCPTYPES - 1; mt++) {
 			if (!list_empty(&area->free_list[mt]))
 				return true;
 		}
@@ -4206,10 +4198,10 @@ static void show_migration_types(unsigned char type)
 		[MIGRATE_UNMOVABLE]	= 'U',
 		[MIGRATE_MOVABLE]	= 'M',
 		[MIGRATE_RECLAIMABLE]	= 'E',
-		[MIGRATE_HIGHATOMIC]	= 'H',
 #ifdef CONFIG_CMA
 		[MIGRATE_CMA]		= 'C',
 #endif
+		[MIGRATE_HIGHATOMIC]	= 'H',
 #ifdef CONFIG_MEMORY_ISOLATION
 		[MIGRATE_ISOLATE]	= 'I',
 #endif
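
For reference, with CONFIG_CMA enabled (and memory isolation left out) the
reordered enum from the mmzone.h hunk resolves to the values below. This is
a stand-alone sketch for illustration, not the kernel header itself:

#include <stdio.h>

/*
 * Mirrors the ordering after this patch: MIGRATE_CMA now sits below
 * MIGRATE_PCPTYPES, so CMA pages get per-cpu (pcp) lists of their own.
 */
enum {
	MIGRATE_UNMOVABLE,	/* 0 */
	MIGRATE_MOVABLE,	/* 1 */
	MIGRATE_RECLAIMABLE,	/* 2 */
	MIGRATE_CMA,		/* 3 */
	MIGRATE_PCPTYPES,	/* 4: the number of types on the pcp lists */
	MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
	MIGRATE_TYPES
};

int main(void)
{
	printf("MIGRATE_CMA=%d MIGRATE_PCPTYPES=%d MIGRATE_HIGHATOMIC=%d\n",
	       MIGRATE_CMA, MIGRATE_PCPTYPES, MIGRATE_HIGHATOMIC);
	return 0;
}

Since the table in show_migration_types() uses designated initializers,
moving the 'H' entry below the CMA entry only keeps the table in enum
order; its behaviour does not change.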