From patchwork Tue Jun  8 18:27:38 2021
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 457531
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ding Hui, Naoya Horiguchi,
 Oscar Salvador, David Hildenbrand, Andrew Morton, Linus Torvalds
Subject: [PATCH 5.12 128/161] mm/page_alloc: fix counting of free pages after
 take off from buddy
Date: Tue, 8 Jun 2021 20:27:38 +0200
Message-Id: <20210608175949.771631072@linuxfoundation.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210608175945.476074951@linuxfoundation.org>
References: <20210608175945.476074951@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Ding Hui <dinghui@sangfor.com.cn>

commit bac9c6fa1f929213bbd0ac9cdf21e8e2f0916828 upstream.

Recently we found that a lot of MemFree is left in /proc/meminfo after
soft offlining a large number of pages, and the value is not correct.

Before Oscar's rework of soft offline for free pages [1], soft-offlined
free pages were left in the buddy allocator with the HWPoison flag set,
and NR_FREE_PAGES was not updated immediately.  So the difference
between NR_FREE_PAGES and the real number of available free pages could
also be large right after the offline.  However, as the workload keeps
running, whenever a HWPoison page is subsequently caught in one of the
allocation paths it is removed from buddy, NR_FREE_PAGES is updated at
the same time, and the allocation is retried, so NR_FREE_PAGES gets
closer and closer to the real number of available free pages (leaving
unpoison_memory() aside).

Now, for an offlined free page, after a successful call to
take_page_off_buddy(), the page no longer belongs to the buddy allocator
and will never be used again, but we miss the NR_FREE_PAGES accounting
in this situation, and there is no later chance for the counter to be
corrected.
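To make the accounting drift concrete, here is a small stand-alone toy
model (illustrative userspace C only, not kernel code; every identifier
in it is made up): a free-page counter that is decremented on the
allocation path but forgotten on the offline path ends up overstating
free memory, which is exactly the NR_FREE_PAGES vs. real-free-pages gap
described above.

#include <stdio.h>
#include <stdbool.h>

#define NPAGES 8

static bool page_free[NPAGES];	/* toy "buddy free list" */
static long nr_free_pages;	/* toy counterpart of NR_FREE_PAGES */

static void init_pages(void)
{
	for (int i = 0; i < NPAGES; i++)
		page_free[i] = true;
	nr_free_pages = NPAGES;
}

/* Allocation path: removes a page and keeps the counter in sync. */
static void toy_alloc_page(int pfn)
{
	page_free[pfn] = false;
	nr_free_pages--;
}

/* Buggy offline path: removes the page but forgets the counter. */
static void toy_offline_buggy(int pfn)
{
	page_free[pfn] = false;
	/* missing: nr_free_pages--; */
}

/* Fixed offline path: also updates the counter. */
static void toy_offline_fixed(int pfn)
{
	page_free[pfn] = false;
	nr_free_pages--;
}

int main(void)
{
	long really_free = 0;

	init_pages();
	toy_alloc_page(0);
	toy_offline_buggy(1);	/* counter now overstates free memory */
	toy_offline_fixed(2);

	for (int i = 0; i < NPAGES; i++)
		really_free += page_free[i];

	/* prints "counter says 6 free, really 5 free": the drift */
	printf("counter says %ld free, really %ld free\n",
	       nr_free_pages, really_free);
	return 0;
}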
Do the update in take_page_off_buddy() the same way rmqueue() does, but
avoid double counting if someone has already called
set_migratetype_isolate() on the page.

[1]: commit 06be6ff3d2ec ("mm,hwpoison: rework soft offline for free pages")

Link: https://lkml.kernel.org/r/20210526075247.11130-1-dinghui@sangfor.com.cn
Fixes: 06be6ff3d2ec ("mm,hwpoison: rework soft offline for free pages")
Signed-off-by: Ding Hui <dinghui@sangfor.com.cn>
Suggested-by: Naoya Horiguchi
Reviewed-by: Oscar Salvador
Acked-by: David Hildenbrand
Acked-by: Naoya Horiguchi
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/page_alloc.c |    2 ++
 1 file changed, 2 insertions(+)

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8951,6 +8951,8 @@ bool take_page_off_buddy(struct page *pa
 			del_page_from_free_list(page_head, zone, page_order);
 			break_down_buddy_pages(zone, page_head, page, 0,
 						page_order, migratetype);
+			if (!is_migrate_isolate(migratetype))
+				__mod_zone_freepage_state(zone, -1, migratetype);
 			ret = true;
 			break;
 		}
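For reference, the helper called by the two added lines is the zone
freepage accounting wrapper.  A paraphrased sketch of it (based on
include/linux/vmstat.h in kernels of this era; possibly not a verbatim
copy) looks like this:

static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
					     int migratetype)
{
	/* NR_FREE_PAGES is what /proc/meminfo's MemFree is summed from */
	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
	/* CMA pages are additionally tracked in NR_FREE_CMA_PAGES */
	if (is_migrate_cma(migratetype))
		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
}

The is_migrate_isolate() check in the hunk is there because
set_migratetype_isolate() already subtracts isolated free pages from
NR_FREE_PAGES when it isolates a pageblock, so subtracting again here
would double count, as the changelog notes.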