From patchwork Mon Aug 8 12:55:56 2022
X-Patchwork-Submitter: David Oberhollenzer
X-Patchwork-Id: 596178
From: David Oberhollenzer
To: linux-rt-users@vger.kernel.org
Cc: williams@redhat.com, bigeasy@linutronix.de, richard@nod.at,
    joseph.salisbury@canonical.com, Johannes Weiner, Shakeel Butt,
    Roman Gushchin, Michal Hocko, David Oberhollenzer
Subject: [PATCH v2 04/10] mm/memcg: Opencode the inner part of
 obj_cgroup_uncharge_pages() in drain_obj_stock()
Date: Mon, 8 Aug 2022 14:55:56 +0200
Message-Id: <20220808125602.97747-5-goliath@infraroot.at>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808125602.97747-1-goliath@infraroot.at>
References: <20220808125602.97747-1-goliath@infraroot.at>
X-Mailing-List: linux-rt-users@vger.kernel.org

From: Johannes Weiner

Provide the inner part of refill_stock() as __refill_stock() without
disabling interrupts. This eases the integration of local_lock_t, where
recursive locking must be avoided.

Open code obj_cgroup_uncharge_pages() in drain_obj_stock() and use
__refill_stock(). The caller of drain_obj_stock() already disables
interrupts.

[bigeasy: Patch body around Johannes' diff ]

Signed-off-by: Johannes Weiner
Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Shakeel Butt
Reviewed-by: Roman Gushchin
Acked-by: Michal Hocko
[do: backported to v5.15]
Signed-off-by: David Oberhollenzer
---
 mm/memcontrol.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e566c34466a1..2cf55b3ab4b0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2264,12 +2264,9 @@ static void drain_local_stock(struct work_struct *dummy)
  * Cache charges(val) to local per_cpu area.
  * This will be consumed by consume_stock() function, later.
  */
-static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
+static void __refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 {
 	struct memcg_stock_pcp *stock;
-	unsigned long flags;
-
-	local_irq_save(flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	if (stock->cached != memcg) { /* reset if necessary */
@@ -2281,7 +2278,14 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 
 	if (stock->nr_pages > MEMCG_CHARGE_BATCH)
 		drain_stock(stock);
+}
 
+static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	__refill_stock(memcg, nr_pages);
 	local_irq_restore(flags);
 }
 
@@ -3180,8 +3184,18 @@ static void drain_obj_stock(struct memcg_stock_pcp *stock)
 	unsigned int nr_pages = stock->nr_bytes >> PAGE_SHIFT;
 	unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
 
-	if (nr_pages)
-		obj_cgroup_uncharge_pages(old, nr_pages);
+	if (nr_pages) {
+		struct mem_cgroup *memcg;
+
+		memcg = get_mem_cgroup_from_objcg(old);
+
+		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+			page_counter_uncharge(&memcg->kmem, nr_pages);
+
+		__refill_stock(memcg, nr_pages);
+
+		css_put(&memcg->css);
+	}
 
 	/*
 	 * The leftover is flushed to the centralized per-memcg value.
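
The inner/outer split above can be reduced to a small standalone C
sketch. All names below (counter_lock, counter, __counter_add,
counter_add, drain) are hypothetical stand-ins for memcg_stock,
__refill_stock(), refill_stock() and drain_obj_stock(), and a pthread
mutex models the local_irq_save()/local_irq_restore() pair; this is a
minimal illustration of the pattern, not kernel code:

#include <pthread.h>
#include <stdio.h>

/* Stand-in for the IRQ-disabled region (local_irq_save() in the patch). */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long counter; /* stand-in for the per-CPU stock */

/* Inner part: touches the data but takes no lock; the caller must
 * already hold counter_lock (i.e. "interrupts disabled"). */
static void __counter_add(unsigned long n)
{
	counter += n;
}

/* Outer wrapper: acquires the lock itself, then reuses the inner part.
 * This mirrors refill_stock() calling __refill_stock(). */
static void counter_add(unsigned long n)
{
	pthread_mutex_lock(&counter_lock);
	__counter_add(n);
	pthread_mutex_unlock(&counter_lock);
}

/* A caller already inside the critical section, like drain_obj_stock()
 * running with interrupts disabled. It must use the inner variant:
 * calling counter_add() here would acquire counter_lock recursively,
 * which is exactly what the local_lock_t conversion has to avoid. */
static void drain(void)
{
	pthread_mutex_lock(&counter_lock);
	__counter_add(1);
	pthread_mutex_unlock(&counter_lock);
}

int main(void)
{
	counter_add(2);
	drain();
	printf("counter = %lu\n", counter); /* prints: counter = 3 */
	return 0;
}

The point of the double-underscore convention is the same as in the
patch: the lockless inner helper carries the work, and only the outer
wrapper owns the lock, so a context that already holds it has a safe
entry point.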