From patchwork Fri Apr 29 20:46:31 2022
X-Patchwork-Submitter: Chao Yu
X-Patchwork-Id: 568507
From: Chao Yu
To: jaegeuk@kernel.org
Cc: linux-f2fs-devel@lists.sourceforge.net, linux-kernel@vger.kernel.org,
	Chao Yu, stable@vger.kernel.org, Ming Yan, Chao Yu
Subject: [PATCH] f2fs: fix deadloop in foreground GC
Date: Sat, 30 Apr 2022 04:46:31 +0800
Message-Id: <20220429204631.7241-1-chao@kernel.org>
X-Mailer: git-send-email 2.32.0
X-Mailing-List: stable@vger.kernel.org

As Yanming reported in bugzilla:

https://bugzilla.kernel.org/show_bug.cgi?id=215914

The root cause is: in a very small image it is very easy to hit the
threshold that triggers foreground GC, and if free space and dirty data
are accounted at section granularity, then in this corner case
has_not_enough_free_secs() will always return true, resulting in a dead
loop in f2fs_gc().

So this patch refactors has_not_enough_free_secs() as below to fix the
issue:
1. Calculate the needed space at block granularity and split all dirty
   blocks into two parts: a whole-section part and a remaining-block
   part; compare the section part against the free sections, and the
   block part against the free space in the opened logs (see the sketch
   below, after the "---" line).
2. Account F2FS_DIRTY_NODES, F2FS_DIRTY_IMETA and F2FS_DIRTY_DENTS as
   node block consumers;
3. Account F2FS_DIRTY_DENTS as a data block consumer;

Cc: stable@vger.kernel.org
Reported-by: Ming Yan
Signed-off-by: Chao Yu
---
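To make the accounting change concrete, here is a small userspace sketch
with made-up numbers (BLKS_PER_SEC, the dirty-page counts and the free and
reserved section counts are hypothetical, not taken from the report; the
old behaviour is modelled as a round-up to whole sections, the new one as
the divide/remainder split used in the diff below):

#include <stdio.h>

/* Hypothetical geometry and state for a very small image; these numbers
 * only illustrate the two formulas. */
#define BLKS_PER_SEC	512U	/* blocks per section (assumed) */

int main(void)
{
	unsigned int dirty_nodes = 10, dirty_dents = 5, dirty_imeta = 2;
	unsigned int free_secs = 4, reserved_secs = 2, freed = 0, needed = 1;

	/* Old accounting: every dirty type is rounded up to whole sections. */
	unsigned int node_secs = (dirty_nodes + BLKS_PER_SEC - 1) / BLKS_PER_SEC;
	unsigned int dent_secs = (dirty_dents + BLKS_PER_SEC - 1) / BLKS_PER_SEC;
	unsigned int imeta_secs = (dirty_imeta + BLKS_PER_SEC - 1) / BLKS_PER_SEC;
	int old_not_enough = (free_secs + freed) <=
		(node_secs + 2 * dent_secs + imeta_secs + reserved_secs + needed);

	/* New accounting: only whole sections of dirty blocks count here;
	 * the remainders are left to the opened-log check. */
	unsigned int total_node_blks = dirty_nodes + dirty_dents + dirty_imeta;
	unsigned int total_dent_blks = dirty_dents;
	int new_not_enough = (free_secs + freed) <=
		(total_node_blks / BLKS_PER_SEC + total_dent_blks / BLKS_PER_SEC +
		 reserved_secs + needed);

	/* Prints "old=1 new=0": the old check keeps demanding foreground GC,
	 * while the new section-level check passes and the 17-block and
	 * 5-block remainders go on to the opened-log check. */
	printf("old=%d new=%d\n", old_not_enough, new_not_enough);
	return 0;
}

With these numbers the old formula inflates 17 dirty blocks into four
sections' worth of demand (node + 2 * dent + imeta), so the check can stay
true no matter how long foreground GC runs on a tiny image; the new formula
only charges whole sections once a type actually dirties one, and leaves
the small remainders to the opened-log check.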
 fs/f2fs/segment.h | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
index 8a591455d796..28f7aa9b40bf 100644
--- a/fs/f2fs/segment.h
+++ b/fs/f2fs/segment.h
@@ -575,11 +575,10 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi)
 	return GET_SEC_FROM_SEG(sbi, reserved_segments(sbi));
 }
 
-static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi)
+static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+			unsigned int node_blocks, unsigned int dent_blocks)
 {
-	unsigned int node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) +
-					get_pages(sbi, F2FS_DIRTY_DENTS);
-	unsigned int dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
+
 	unsigned int segno, left_blocks;
 	int i;
 
@@ -605,19 +604,24 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi)
 static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
 					int freed, int needed)
 {
-	int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
-	int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
-	int imeta_secs = get_blocktype_secs(sbi, F2FS_DIRTY_IMETA);
+	unsigned int total_node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) +
+					get_pages(sbi, F2FS_DIRTY_DENTS) +
+					get_pages(sbi, F2FS_DIRTY_IMETA);
+	unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
+	unsigned int node_secs = total_node_blocks / BLKS_PER_SEC(sbi);
+	unsigned int dent_secs = total_dent_blocks / BLKS_PER_SEC(sbi);
+	unsigned int node_blocks = total_node_blocks % BLKS_PER_SEC(sbi);
+	unsigned int dent_blocks = total_dent_blocks % BLKS_PER_SEC(sbi);
 
 	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
 		return false;
 
-	if (free_sections(sbi) + freed == reserved_sections(sbi) + needed &&
-			has_curseg_enough_space(sbi))
-		return false;
-	return (free_sections(sbi) + freed) <=
-		(node_secs + 2 * dent_secs + imeta_secs +
-		reserved_sections(sbi) + needed);
+	if (free_sections(sbi) + freed <=
+		node_secs + dent_secs + reserved_sections(sbi) + needed)
+		return true;
+	if (!has_curseg_enough_space(sbi, node_blocks, dent_blocks))
+		return true;
+	return false;
 }
 
 static inline bool f2fs_is_checkpoint_ready(struct f2fs_sb_info *sbi)
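Reading the patched function end to end, the decision is now a two-stage
check. The fragment below restates it as a free-standing userspace
function, with the sbi state replaced by plain parameters; left_node_blks
and left_dent_blks are hypothetical stand-ins for the room that
has_curseg_enough_space() finds in the opened node and data logs:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the patched two-stage decision; all inputs are
 * plain parameters instead of the real struct f2fs_sb_info state. */
static bool not_enough_free_secs(unsigned int free_secs, unsigned int freed,
				 unsigned int needed, unsigned int reserved_secs,
				 unsigned int node_secs, unsigned int dent_secs,
				 unsigned int node_blocks, unsigned int dent_blocks,
				 unsigned int left_node_blks,
				 unsigned int left_dent_blks)
{
	/* Stage 1: whole sections of dirty blocks vs. free sections. */
	if (free_secs + freed <= node_secs + dent_secs + reserved_secs + needed)
		return true;
	/* Stage 2: the block-granularity remainders must fit in the opened
	 * logs (what has_curseg_enough_space() checks in the real code). */
	if (node_blocks > left_node_blks || dent_blocks > left_dent_blks)
		return true;
	return false;
}

int main(void)
{
	/* Continuing the hypothetical numbers from the note above: 17 node
	 * and 5 dentry blocks remain, and the opened logs still have room. */
	printf("%d\n", not_enough_free_secs(4, 0, 1, 2, 0, 0, 17, 5, 200, 300));
	return 0;
}

With the numbers carried over from the earlier note it prints 0, i.e.
foreground GC is no longer requested once the handful of dirty blocks fits
into the already-opened logs.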