From patchwork Thu Aug 20 09:20:22 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 265413
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Michal Koutný,
    Michal Hocko, Andrew Morton, Roman Gushchin, Johannes Weiner,
    Tejun Heo, Linus Torvalds
Subject: [PATCH 5.4 055/152] mm/page_counter.c: fix protection usage propagation
Date: Thu, 20 Aug 2020 11:20:22 +0200
Message-Id: <20200820091556.543146115@linuxfoundation.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200820091553.615456912@linuxfoundation.org>
References: <20200820091553.615456912@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Michal Koutný

commit a6f23d14ec7d7d02220ad8bb2774be3322b9aeec upstream.

When a workload runs in cgroups that are not directly below the root
cgroup and their parent specifies reclaim protection, that protection
may end up ineffective.

The reason is that propagate_protected_usage() is not called all the
way up the hierarchy.  All of the protected usage is incorrectly
accumulated in the workload's parent.  This means that
siblings_low_usage is overestimated and the effective protection
underestimated.
Even though this is a transitional phenomenon (the uncharge path does the
correct propagation and fixes the wrong children_low_usage), it can
unexpectedly undermine the intended protection.

We noticed this problem when we saw swap-out in a descendant of a
protected memcg (an intermediate node) while the parent was comfortably
under its protection limit and the memory pressure was external to that
hierarchy.  Michal pinpointed this to the wrong siblings_low_usage, which
led to the unwanted reclaim.

The fix is simply to update children_low_usage in the respective
ancestors in the charging path as well.

Fixes: 230671533d64 ("mm: memory.low hierarchical behavior")
Signed-off-by: Michal Koutný
Signed-off-by: Michal Hocko
Signed-off-by: Andrew Morton
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc: Johannes Weiner
Cc: Tejun Heo
Cc: [4.18+]
Link: http://lkml.kernel.org/r/20200803153231.15477-1-mhocko@kernel.org
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/page_counter.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -77,7 +77,7 @@ void page_counter_charge(struct page_cou
 		long new;
 
 		new = atomic_long_add_return(nr_pages, &c->usage);
-		propagate_protected_usage(counter, new);
+		propagate_protected_usage(c, new);
 		/*
 		 * This is indeed racy, but we can live with some
 		 * inaccuracy in the watermark.
@@ -121,7 +121,7 @@ bool page_counter_try_charge(struct page
 		new = atomic_long_add_return(nr_pages, &c->usage);
 		if (new > c->max) {
 			atomic_long_sub(nr_pages, &c->usage);
-			propagate_protected_usage(counter, new);
+			propagate_protected_usage(c, new);
 			/*
 			 * This is racy, but we can live with some
 			 * inaccuracy in the failcnt.
@@ -130,7 +130,7 @@ bool page_counter_try_charge(struct page
 			*fail = c;
 			goto failed;
 		}
-		propagate_protected_usage(counter, new);
+		propagate_protected_usage(c, new);
 		/*
 		 * Just like with failcnt, we can live with some
 		 * inaccuracy in the watermark.
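
For reference, here is a minimal sketch of what page_counter_charge() looks
like with this change applied, assembled from the hunk context above and
abbreviated for illustration; the watermark update and other details are
omitted, so consult mm/page_counter.c in the 5.4 tree for the real function.
The point is that the loop walks c from the charged counter up through every
ancestor, so each iteration must pass its own c, not the leaf counter, to
propagate_protected_usage().

void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
{
	struct page_counter *c;

	/* Walk from the charged counter up through all of its ancestors. */
	for (c = counter; c; c = c->parent) {
		long new;

		new = atomic_long_add_return(nr_pages, &c->usage);
		/*
		 * Propagate this level's protected usage into its parent's
		 * children_min_usage/children_low_usage.  Before the fix,
		 * "counter" (the leaf) was passed here, so every ancestor's
		 * usage was accounted against the workload's immediate
		 * parent only and never propagated further up.
		 */
		propagate_protected_usage(c, new);
		/* ... watermark update omitted in this sketch ... */
	}
}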