From patchwork Tue Mar 31 08:58:48 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 228659
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Roman Gushchin, Andrew Morton,
 Shakeel Butt, Johannes Weiner, Michal Hocko, Bharata B Rao, Linus Torvalds
Subject: [PATCH 5.5 114/170] mm: fork: fix kernel_stack memcg stats for various stack implementations
Date: Tue, 31 Mar 2020 10:58:48 +0200
Message-Id: <20200331085436.270978792@linuxfoundation.org>
In-Reply-To: <20200331085423.990189598@linuxfoundation.org>
References: <20200331085423.990189598@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Roman Gushchin

commit 8380ce479010f2f779587b462a9b4681934297c3 upstream.

Depending on CONFIG_VMAP_STACK and the THREAD_SIZE / PAGE_SIZE ratio, the
space for task stacks can be allocated using __vmalloc_node_range(),
alloc_pages_node() or kmem_cache_alloc_node().  In the first and second
cases the page->mem_cgroup pointer is set, but in the third it is not:
memcg membership of a slab page has to be determined using the
memcg_from_slab_page() function, which looks at
page->slab_cache->memcg_params.memcg.
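For illustration only (not part of this patch), a simplified sketch of how
the stack allocator is selected; it paraphrases kernel/fork.c, and the
helper name alloc_stack_sketch() as well as the exact GFP flags and the
thread_stack_cache identifier are assumptions of the sketch rather than
quotes from the patch:

/*
 * Simplified, illustrative sketch of the three allocation paths described
 * above; error handling and bookkeeping are elided.
 */
static void *alloc_stack_sketch(int node)
{
#ifdef CONFIG_VMAP_STACK
	/* virtually mapped stack: backing pages get page->mem_cgroup set */
	return __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN,
				    VMALLOC_START, VMALLOC_END,
				    THREADINFO_GFP, PAGE_KERNEL, 0, node,
				    __builtin_return_address(0));
#elif THREAD_SIZE >= PAGE_SIZE
	/* whole pages: page->mem_cgroup is set as well */
	struct page *page = alloc_pages_node(node, THREADINFO_GFP,
					     THREAD_SIZE_ORDER);

	return page ? page_address(page) : NULL;
#else
	/* slab-backed stack: slab pages do NOT have page->mem_cgroup set */
	return kmem_cache_alloc_node(thread_stack_cache, THREADINFO_GFP, node);
#endif
}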
In this case, using mod_memcg_page_state() (as in account_kernel_stack())
is incorrect: the page->mem_cgroup pointer is NULL even for pages charged
to a non-root memory cgroup.  It can lead to kernel_stack per-memcg
counters permanently showing 0 on some architectures (depending on the
configuration).

In order to fix it, let's introduce a mod_memcg_obj_state() helper, which
takes a pointer to a kernel object as its first argument, uses
mem_cgroup_from_obj() to get an RCU-protected memcg pointer and calls
mod_memcg_state().  It makes it possible to handle all configurations
(CONFIG_VMAP_STACK and various THREAD_SIZE/PAGE_SIZE values) without
spilling any memcg/kmem specifics into fork.c.

Note: this is a special version of the patch created for stable backports.
It contains code from the following two patches:
  - mm: memcg/slab: introduce mem_cgroup_from_obj()
  - mm: fork: fix kernel_stack memcg stats for various stack implementations

[guro@fb.com: introduce mem_cgroup_from_obj()]
  Link: http://lkml.kernel.org/r/20200324004221.GA36662@carbon.dhcp.thefacebook.com
Fixes: 4d96ba353075 ("mm: memcg/slab: stop setting page->mem_cgroup pointer for slab pages")
Signed-off-by: Roman Gushchin
Signed-off-by: Andrew Morton
Reviewed-by: Shakeel Butt
Acked-by: Johannes Weiner
Cc: Michal Hocko
Cc: Bharata B Rao
Cc: Shakeel Butt
Cc:
Link: http://lkml.kernel.org/r/20200303233550.251375-1-guro@fb.com
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/memcontrol.h |   12 ++++++++++++
 kernel/fork.c              |    4 ++--
 mm/memcontrol.c            |   38 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 52 insertions(+), 2 deletions(-)

--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -695,6 +695,7 @@ static inline unsigned long lruvec_page_
 void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			int val);
 void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val);
+void mod_memcg_obj_state(void *p, int idx, int val);
 
 static inline void mod_lruvec_state(struct lruvec *lruvec,
 				    enum node_stat_item idx, int val)
@@ -1123,6 +1124,10 @@ static inline void __mod_lruvec_slab_sta
 	__mod_node_page_state(page_pgdat(page), idx, val);
 }
 
+static inline void mod_memcg_obj_state(void *p, int idx, int val)
+{
+}
+
 static inline
 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 					    gfp_t gfp_mask,
@@ -1427,6 +1432,8 @@ static inline int memcg_cache_id(struct
 	return memcg ? memcg->kmemcg_id : -1;
 }
 
+struct mem_cgroup *mem_cgroup_from_obj(void *p);
+
 #else
 
 static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
@@ -1468,6 +1475,11 @@ static inline void memcg_put_cache_ids(v
 {
 }
 
+static inline struct mem_cgroup *mem_cgroup_from_obj(void *p)
+{
+	return NULL;
+}
+
 #endif /* CONFIG_MEMCG_KMEM */
 
 #endif /* _LINUX_MEMCONTROL_H */
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -397,8 +397,8 @@ static void account_kernel_stack(struct
 		mod_zone_page_state(page_zone(first_page), NR_KERNEL_STACK_KB,
 				    THREAD_SIZE / 1024 * account);
 
-		mod_memcg_page_state(first_page, MEMCG_KERNEL_STACK_KB,
-				     account * (THREAD_SIZE / 1024));
+		mod_memcg_obj_state(stack, MEMCG_KERNEL_STACK_KB,
+				    account * (THREAD_SIZE / 1024));
 	}
 }
 
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -777,6 +777,17 @@ void __mod_lruvec_slab_state(void *p, en
 	rcu_read_unlock();
 }
 
+void mod_memcg_obj_state(void *p, int idx, int val)
+{
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+	memcg = mem_cgroup_from_obj(p);
+	if (memcg)
+		mod_memcg_state(memcg, idx, val);
+	rcu_read_unlock();
+}
+
 /**
  * __count_memcg_events - account VM events in a cgroup
  * @memcg: the memory cgroup
@@ -2661,6 +2672,33 @@ static void commit_charge(struct page *p
 }
 
 #ifdef CONFIG_MEMCG_KMEM
+/*
+ * Returns a pointer to the memory cgroup to which the kernel object is charged.
+ *
+ * The caller must ensure the memcg lifetime, e.g. by taking rcu_read_lock(),
+ * cgroup_mutex, etc.
+ */
+struct mem_cgroup *mem_cgroup_from_obj(void *p)
+{
+	struct page *page;
+
+	if (mem_cgroup_disabled())
+		return NULL;
+
+	page = virt_to_head_page(p);
+
+	/*
+	 * Slab pages don't have page->mem_cgroup set because corresponding
+	 * kmem caches can be reparented during the lifetime.  That's why
+	 * memcg_from_slab_page() should be used instead.
+	 */
+	if (PageSlab(page))
+		return memcg_from_slab_page(page);
+
+	/* All other pages use page->mem_cgroup */
+	return page->mem_cgroup;
+}
+
 static int memcg_alloc_cache_id(void)
 {
 	int id, size;
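As a usage illustration only (not part of the patch), the sketch below shows
how a caller could resolve the memcg an accounted kernel object is charged
to, whichever allocator backs it; the function name example_lookup() and the
kmalloc()-based setup are assumptions made for the example:

/*
 * Illustrative sketch: look up the memcg of an accounted slab object.
 * mem_cgroup_from_obj() handles slab and non-slab pages alike; the memcg
 * pointer is only stable while rcu_read_lock() is held.
 */
static void example_lookup(void)
{
	struct mem_cgroup *memcg;
	void *obj = kmalloc(64, GFP_KERNEL_ACCOUNT);

	if (!obj)
		return;

	rcu_read_lock();
	memcg = mem_cgroup_from_obj(obj);	/* NULL if not accounted */
	if (memcg)
		pr_debug("%p is charged to a non-root memcg\n", obj);
	rcu_read_unlock();

	kfree(obj);
}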