From patchwork Wed Oct 18 19:48:25 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joseph Salisbury
X-Patchwork-Id: 735400
X-Mailing-List: linux-rt-users@vger.kernel.org
From: Joseph Salisbury
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde,
 John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi,
 Clark Williams, Pavel Machek, Joseph Salisbury
Cc: Ido Schimmel
Subject: [PATCH RT 04/12] debugobject: Ensure pool refill (again)
Date: Wed, 18 Oct 2023 15:48:25 -0400
Message-Id: <20231018194833.651674-5-joseph.salisbury@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231018194833.651674-1-joseph.salisbury@canonical.com>
References: <20231018194833.651674-1-joseph.salisbury@canonical.com>

From: Thomas Gleixner

v5.15.133-rt70-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------

The recent fix to ensure atomicity of lookup and allocation
inadvertently broke the pool refill mechanism.

Prior to that change debug_object_activate() and
debug_object_assert_init() invoked debug_object_init() to set up the
tracking object for statically initialized objects. That's no longer
the case and debug_object_init() is now the only place which does pool
refills.

Depending on the number of statically initialized objects this can be
enough to actually deplete the pool, which was observed by Ido via a
debugobjects OOM warning.

Restore the old behaviour by adding explicit refill opportunities to
debug_object_activate() and debug_object_assert_init().

Fixes: 63a759694eed ("debugobject: Prevent init race with static objects")
Reported-by: Ido Schimmel
Signed-off-by: Thomas Gleixner
Tested-by: Ido Schimmel
Link: https://lore.kernel.org/r/871qk05a9d.ffs@tglx
(cherry picked from commit 0af462f19e635ad522f28981238334620881badc)
Signed-off-by: Joseph Salisbury
---
 lib/debugobjects.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 579406c1e9ed..4c39678c03ee 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -590,6 +590,16 @@ static struct debug_obj *lookup_object_or_alloc(void *addr, struct debug_bucket
 	return NULL;
 }
 
+static void debug_objects_fill_pool(void)
+{
+	/*
+	 * On RT enabled kernels the pool refill must happen in preemptible
+	 * context:
+	 */
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible())
+		fill_pool();
+}
+
 static void
 __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack)
 {
@@ -598,12 +608,7 @@ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack
 	struct debug_obj *obj;
 	unsigned long flags;
 
-	/*
-	 * On RT enabled kernels the pool refill must happen in preemptible
-	 * context:
-	 */
-	if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible())
-		fill_pool();
+	debug_objects_fill_pool();
 
 	db = get_bucket((unsigned long) addr);
 
@@ -688,6 +693,8 @@ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
 	if (!debug_objects_enabled)
 		return 0;
 
+	debug_objects_fill_pool();
+
 	db = get_bucket((unsigned long) addr);
 
 	raw_spin_lock_irqsave(&db->lock, flags);
@@ -897,6 +904,8 @@ void debug_object_assert_init(void *addr, const struct debug_obj_descr *descr)
 	if (!debug_objects_enabled)
 		return;
 
+	debug_objects_fill_pool();
+
 	db = get_bucket((unsigned long) addr);
 
 	raw_spin_lock_irqsave(&db->lock, flags);
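
For illustration only, a minimal sketch of the scenario the changelog
describes (not part of this patch; the module, timer and all names below
are made up for the example): a statically initialized object never passes
through debug_object_init(), so its first contact with debugobjects is
debug_object_activate(), which before this fix did no pool refill.

/*
 * Illustrative only -- not part of this patch. Module and symbol names
 * are hypothetical.
 */
#include <linux/module.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

static void example_timer_fn(struct timer_list *t)
{
	/* nothing to do; the timer exists only to trigger activation */
}

/*
 * Statically initialized: debug_object_init() is never called for this
 * timer, so its tracking object must be created lazily on first use.
 */
static DEFINE_TIMER(example_timer, example_timer_fn);

static int __init example_init(void)
{
	/*
	 * mod_timer() ends up in debug_object_activate(); with enough
	 * statically initialized objects like this one and no refill on
	 * this path, the tracking object pool can run dry.
	 */
	mod_timer(&example_timer, jiffies + HZ);
	return 0;
}

static void __exit example_exit(void)
{
	del_timer_sync(&example_timer);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");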