From patchwork Thu Jun 23 10:05:17 2016
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 102113
From: Arnd Bergmann
To: Mel Gorman
Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arnd Bergmann
Subject: [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state
Date: Thu, 23 Jun 2016 12:05:17 +0200
Message-Id: <20160623100518.156662-1-arnd@arndb.de>

I see some new warnings from a recent mm change:

mm/filemap.c: In function '__delete_from_page_cache':
include/linux/vmstat.h:116:2: error: array subscript is above array bounds [-Werror=array-bounds]
  atomic_long_add(x, &zone->vm_stat[item]);
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
  atomic_long_add(x, &zone->vm_stat[item]);
                     ~~~~~~~~~~~~~^~~~~~
include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
include/linux/vmstat.h:117:2: error: array subscript is above array bounds [-Werror=array-bounds]

Looking deeper into it, I found that we pass the wrong enum into some of
the statistics functions after the type of the symbol changed. This patch
changes those call sites to use the function that matches the enum type.
I did this blindly, going only by the warnings from a debug patch (sent
as 2/2), so it is likely that some cases are more subtle and need a
different change; please treat this as a bug report rather than a patch
for applying.

Signed-off-by: Arnd Bergmann
Fixes: e426f7b4ade5 ("mm: move most file-based accounting to the node")
---
 mm/filemap.c    |  4 ++--
 mm/page_alloc.c | 15 ++++++++-------
 mm/rmap.c       |  4 ++--
 mm/shmem.c      |  4 ++--
 mm/vmscan.c     |  2 +-
 5 files changed, 15 insertions(+), 14 deletions(-)

-- 
2.9.0

diff --git a/mm/filemap.c b/mm/filemap.c
index 6cb19e012887..77e902bf04f4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -218,9 +218,9 @@ void __delete_from_page_cache(struct page *page, void *shadow)
 
 	/* hugetlb pages do not participate in page cache accounting.
	 */
 	if (!PageHuge(page))
-		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, -nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_PAGES, -nr);
 	if (PageSwapBacked(page)) {
-		__mod_zone_page_state(page_zone(page), NR_SHMEM, -nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_SHMEM, -nr);
 		if (PageTransHuge(page))
 			__dec_zone_page_state(page, NR_SHMEM_THPS);
 	} else {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23b5044f5ced..d5287011ed27 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3484,9 +3484,10 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 			unsigned long writeback;
 			unsigned long dirty;
 
-			writeback = zone_page_state_snapshot(zone,
+			writeback = node_page_state_snapshot(zone->zone_pgdat,
 							     NR_WRITEBACK);
-			dirty = zone_page_state_snapshot(zone, NR_FILE_DIRTY);
+			dirty = node_page_state_snapshot(zone->zone_pgdat,
+							 NR_FILE_DIRTY);
 
 			if (2*(writeback + dirty) > reclaimable) {
 				congestion_wait(BLK_RW_ASYNC, HZ/10);
@@ -4396,9 +4397,9 @@ void show_free_areas(unsigned int filter)
 			K(zone->present_pages),
 			K(zone->managed_pages),
 			K(zone_page_state(zone, NR_MLOCK)),
-			K(zone_page_state(zone, NR_FILE_DIRTY)),
-			K(zone_page_state(zone, NR_WRITEBACK)),
-			K(zone_page_state(zone, NR_SHMEM)),
+			K(node_page_state(zone->zone_pgdat, NR_FILE_DIRTY)),
+			K(node_page_state(zone->zone_pgdat, NR_WRITEBACK)),
+			K(node_page_state(zone->zone_pgdat, NR_SHMEM)),
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			K(zone_page_state(zone, NR_SHMEM_THPS) * HPAGE_PMD_NR),
 			K(zone_page_state(zone, NR_SHMEM_PMDMAPPED)
@@ -4410,12 +4411,12 @@ void show_free_areas(unsigned int filter)
 			zone_page_state(zone, NR_KERNEL_STACK) *
 				THREAD_SIZE / 1024,
 			K(zone_page_state(zone, NR_PAGETABLE)),
-			K(zone_page_state(zone, NR_UNSTABLE_NFS)),
+			K(node_page_state(zone->zone_pgdat, NR_UNSTABLE_NFS)),
 			K(zone_page_state(zone, NR_BOUNCE)),
 			K(free_pcp),
 			K(this_cpu_read(zone->pageset->pcp.count)),
 			K(zone_page_state(zone, NR_FREE_CMA_PAGES)),
-			K(zone_page_state(zone, NR_WRITEBACK_TEMP)),
+			K(node_page_state(zone->zone_pgdat, NR_WRITEBACK_TEMP)),
 			K(node_page_state(zone->zone_pgdat, NR_PAGES_SCANNED)));
 		printk("lowmem_reserve[]:");
 		for (i = 0; i < MAX_NR_ZONES; i++)
diff --git a/mm/rmap.c b/mm/rmap.c
index 4deff963ea8a..898b2b7806ca 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1296,7 +1296,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 		if (!atomic_inc_and_test(&page->_mapcount))
 			goto out;
 	}
-	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, nr);
+	__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_MAPPED, nr);
 	mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 out:
 	unlock_page_memcg(page);
@@ -1336,7 +1336,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 	 * these counters are not modified in interrupt context, and
 	 * pte lock(a spinlock) is held, which implies preemption disabled.
 	 */
-	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, -nr);
+	__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_MAPPED, -nr);
 	mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 
 	if (unlikely(PageMlocked(page)))
diff --git a/mm/shmem.c b/mm/shmem.c
index e5c50fb0d4a4..99dcb8e5642d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -576,8 +576,8 @@ static int shmem_add_to_page_cache(struct page *page,
 		mapping->nrpages += nr;
 		if (PageTransHuge(page))
 			__inc_zone_page_state(page, NR_SHMEM_THPS);
-		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, nr);
-		__mod_zone_page_state(page_zone(page), NR_SHMEM, nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_PAGES, nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_SHMEM, nr);
 		spin_unlock_irq(&mapping->tree_lock);
 	} else {
 		page->mapping = NULL;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 07e17dac1793..4702069cc80b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2079,7 +2079,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 		int z;
 		unsigned long total_high_wmark = 0;
 
-		pgdatfree = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+		pgdatfree = global_page_state(NR_FREE_PAGES);
 		pgdatfile = node_page_state(pgdat, NR_ACTIVE_FILE) +
 			   node_page_state(pgdat, NR_INACTIVE_FILE);

From patchwork Thu Jun 23 10:05:18 2016
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 102112
From: Arnd Bergmann
To: Mel Gorman
Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arnd Bergmann
Subject: [RFC, DEBUGGING 2/2] mm: add type checking for page state functions
Date: Thu, 23 Jun 2016 12:05:18 +0200
Message-Id: <20160623100518.156662-2-arnd@arndb.de>
In-Reply-To: <20160623100518.156662-1-arnd@arndb.de>
References: <20160623100518.156662-1-arnd@arndb.de>

We had a couple of bugs where we passed the incorrect 'enum' into one of
the statistics functions. Unfortunately, gcc can only warn about comparing
distinct enum types, not about passing an enum of the
wrong type into a function. This wraps all the stats calls in macros that
add the type checking using a comparison. It is a fairly crude method, but
it helped me uncover some issues.

Signed-off-by: Arnd Bergmann
---
 include/linux/vmstat.h | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

-- 
2.9.0

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index c799073fe1c4..0328858894a5 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -116,6 +116,8 @@ static inline void zone_page_state_add(long x, struct zone *zone,
 	atomic_long_add(x, &zone->vm_stat[item]);
 	atomic_long_add(x, &vm_zone_stat[item]);
 }
+#define zone_page_state_add(x, zone, item) \
+	zone_page_state_add(x, zone, ((item) == (enum zone_stat_item)0) ? (item) : (item))
 
 static inline void node_page_state_add(long x, struct pglist_data *pgdat,
 				enum node_stat_item item)
@@ -123,6 +125,8 @@ static inline void node_page_state_add(long x, struct pglist_data *pgdat,
 	atomic_long_add(x, &pgdat->vm_stat[item]);
 	atomic_long_add(x, &vm_node_stat[item]);
 }
+#define node_page_state_add(x, node, item) \
+	node_page_state_add(x, node, ((item) == (enum node_stat_item)0) ? (item) : (item))
 
 static inline unsigned long global_page_state(enum zone_stat_item item)
 {
@@ -133,6 +137,8 @@ static inline unsigned long global_page_state(enum zone_stat_item item)
 #endif
 	return x;
 }
+#define global_page_state(item) \
+	global_page_state(((item) == (enum zone_stat_item)0) ? (item) : (item))
 
 static inline unsigned long global_node_page_state(enum node_stat_item item)
 {
@@ -143,6 +149,8 @@ static inline unsigned long global_node_page_state(enum node_stat_item item)
 #endif
 	return x;
 }
+#define global_node_page_state(item) \
+	global_node_page_state(((item) == (enum node_stat_item)0) ? (item) : (item))
 
 static inline unsigned long zone_page_state(struct zone *zone,
 					enum zone_stat_item item)
@@ -154,6 +162,8 @@ static inline unsigned long zone_page_state(struct zone *zone,
 #endif
 	return x;
 }
+#define zone_page_state(zone, item) \
+	zone_page_state(zone, ((item) == (enum zone_stat_item)0) ? (item) : (item))
 
 /*
  * More accurate version that also considers the currently pending
@@ -176,6 +186,8 @@ static inline unsigned long zone_page_state_snapshot(struct zone *zone,
 #endif
 	return x;
 }
+#define zone_page_state_snapshot(zone, item) \
+	zone_page_state_snapshot(zone, ((item) == (enum zone_stat_item)0) ? (item) : (item))
 
 static inline unsigned long node_page_state_snapshot(pg_data_t *pgdat,
 					enum zone_stat_item item)
@@ -192,7 +204,8 @@ static inline unsigned long node_page_state_snapshot(pg_data_t *pgdat,
 #endif
 	return x;
 }
-
+#define node_page_state_snapshot(zone, item) \
+	node_page_state_snapshot(zone, ((item) == (enum node_stat_item)0) ? (item) : (item))
 
 #ifdef CONFIG_NUMA
 extern unsigned long sum_zone_node_page_state(int node,
@@ -341,6 +354,27 @@ static inline void drain_zonestat(struct zone *zone,
 			struct per_cpu_pageset *pset) { }
 #endif		/* CONFIG_SMP */
 
+#define __mod_zone_page_state(zone, item, delta) \
+	__mod_zone_page_state(zone, ((item) == (enum zone_stat_item)0) ? (item) : (item), delta)
+#define __mod_node_page_state(pgdat, item, delta) \
+	__mod_node_page_state(pgdat, ((item) == (enum node_stat_item)0) ? (item) : (item), delta)
+#define __inc_zone_state(zone, item) \
+	__inc_zone_state(zone, ((item) == (enum zone_stat_item)0) ? (item) : (item))
+#define __inc_node_state(pgdat, item) \
+	__inc_node_state(pgdat, ((item) == (enum node_stat_item)0) ? (item) : (item))
+#define __dec_zone_state(zone, item) \
+	__dec_zone_state(zone, ((item) == (enum zone_stat_item)0) ? (item) : (item))
+#define __dec_node_state(pgdat, item) \
+	__dec_node_state(pgdat, ((item) == (enum node_stat_item)0) ? (item) : (item))
+#define __inc_zone_page_state(page, item) \
+	__inc_zone_page_state(page, ((item) == (enum zone_stat_item)0) ? (item) : (item))
+#define __inc_node_page_state(page, item) \
+	__inc_node_page_state(page, ((item) == (enum node_stat_item)0) ? (item) : (item))
+#define __dec_zone_page_state(page, item) \
+	__dec_zone_page_state(page, ((item) == (enum zone_stat_item)0) ? (item) : (item))
+#define __dec_node_page_state(page, item) \
+	__dec_node_page_state(page, ((item) == (enum node_stat_item)0) ? (item) : (item))
+
 static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
 					     int migratetype)
 {