From patchwork Sat Sep 19 04:20:31 2020
X-Patchwork-Submitter: Andrew Morton <akpm@linux-foundation.org>
X-Patchwork-Id: 263777
Date: Fri, 18 Sep 2020 21:20:31 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, david@redhat.com, linux-mm@kvack.org,
 mhocko@suse.com, mm-commits@vger.kernel.org, osalvador@suse.de,
 pasha.tatashin@soleen.com, richard.weiyang@gmail.com, rientjes@google.com,
 stable@vger.kernel.org, torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 11/15] mm/memory_hotplug: drain per-cpu pages again during memory offline
Message-ID: <20200919042031.EJlTFXy4s%akpm@linux-foundation.org>
In-Reply-To: <20200918211925.7e97f0ef63d92f5cfe5ccbc5@linux-foundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: mm/memory_hotplug: drain per-cpu pages again during memory offline

There is a race during page offline that can lead to an infinite loop: a
page never ends up on a buddy list and __offline_pages() keeps retrying
infinitely, or until a termination signal is received.
Thread#1 - a new process:

load_elf_binary
 begin_new_exec
  exec_mmap
   mmput
    exit_mmap
     tlb_finish_mmu
      tlb_flush_mmu
       release_pages
        free_unref_page_list
         free_unref_page_prepare
          set_pcppage_migratetype(page, migratetype);
            // Set page->index migration type below MIGRATE_PCPTYPES

Thread#2 - hot-removes memory
__offline_pages
  start_isolate_page_range
    set_migratetype_isolate
      set_pageblock_migratetype(page, MIGRATE_ISOLATE);
        // set pageblock migration type to MIGRATE_ISOLATE
      drain_all_pages(zone);
        // drain per-cpu page lists to buddy allocator.

Thread#1 - continue
         free_unref_page_commit
           migratetype = get_pcppage_migratetype(page);
             // get old migration type
           list_add(&page->lru, &pcp->lists[migratetype]);
             // add new page to already drained pcp list

Thread#2
Never drains pcp again, and therefore gets stuck in the loop.

The fix is to try to drain per-cpu lists again after
check_pages_isolated_cb() fails.

Link: https://lkml.kernel.org/r/20200903140032.380431-1-pasha.tatashin@soleen.com
Link: https://lkml.kernel.org/r/20200904151448.100489-2-pasha.tatashin@soleen.com
Link: http://lkml.kernel.org/r/20200904070235.GA15277@dhcp22.suse.cz
Fixes: c52e75935f8d ("mm: remove extra drain pages on pcp list")
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memory_hotplug.c |   14 ++++++++++++++
 mm/page_isolation.c |    8 ++++++++
 2 files changed, 22 insertions(+)

--- a/mm/memory_hotplug.c~mm-memory_hotplug-drain-per-cpu-pages-again-during-memory-offline
+++ a/mm/memory_hotplug.c
@@ -1575,6 +1575,20 @@ static int __ref __offline_pages(unsigne
 		/* check again */
 		ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
 					    NULL, check_pages_isolated_cb);
+		/*
+		 * per-cpu pages are drained in start_isolate_page_range, but
+		 * if there are still pages that are not free, make sure that
+		 * we drain again, because when we isolated the range we might
+		 * have raced with another thread that was adding pages to the
+		 * pcp list.
+		 *
+		 * Forward progress should still be guaranteed because
+		 * pages on the pcp list can only belong to MOVABLE_ZONE,
+		 * because has_unmovable_pages explicitly checks for
+		 * PageBuddy on freed pages on other zones.
+		 */
+		if (ret)
+			drain_all_pages(zone);
 	} while (ret);
 
 	/* Ok, all of our target is isolated.
--- a/mm/page_isolation.c~mm-memory_hotplug-drain-per-cpu-pages-again-during-memory-offline
+++ a/mm/page_isolation.c
@@ -170,6 +170,14 @@ __first_valid_page(unsigned long pfn, un
  * pageblocks we may have modified and return -EBUSY to caller. This
  * prevents two threads from simultaneously working on overlapping ranges.
  *
+ * Please note that there is no strong synchronization with the page allocator
+ * either. Pages might be freed while their page blocks are marked ISOLATED.
+ * In some cases pages might still end up on pcp lists and that would allow
+ * for their allocation even when they are in fact isolated already. Depending
+ * on how strong of a guarantee the caller needs, drain_all_pages might be
+ * needed (e.g. __offline_pages will need to call it after the check for an
+ * isolated range for a next retry).
+ *
  * Return: the number of isolated pageblocks on success and -EBUSY if any part
  * of range cannot be isolated.
  */
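
[Editor's illustration, not part of the patch.]  To make the interleaving
above concrete, here is a minimal userspace sketch of the race.  This is
not kernel code: pcp_pages, buddy_pages, drain() and freeing_thread() are
hypothetical stand-ins for the per-cpu lists, the buddy allocator,
drain_all_pages() and the free_unref_page_list() path.  It models Thread#1
sampling state before isolation and committing the page to the pcp list
after the one-time drain, so a retry loop that never re-drains would spin
forever.  Build with: gcc -pthread sketch.c

	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static int pcp_pages;   /* pages on the (model) per-cpu list */
	static int buddy_pages; /* pages back in the (model) buddy allocator */

	/*
	 * Thread#1: the freeing path.  The migratetype was sampled before
	 * isolation, so the commit step still places the page on the pcp
	 * list even though the pageblock is now "isolated".
	 */
	static void *freeing_thread(void *arg)
	{
		(void)arg;
		/* free_unref_page_prepare(): migratetype cached here */
		usleep(1000);   /* window in which Thread#2 isolates+drains */
		pthread_mutex_lock(&lock);
		pcp_pages++;    /* free_unref_page_commit(): page lands on an
				 * already-drained pcp list */
		pthread_mutex_unlock(&lock);
		return NULL;
	}

	/* stand-in for drain_all_pages(): flush pcp list to buddy */
	static void drain(void)
	{
		pthread_mutex_lock(&lock);
		buddy_pages += pcp_pages;
		pcp_pages = 0;
		pthread_mutex_unlock(&lock);
	}

	int main(void)
	{
		pthread_t t1;

		pthread_create(&t1, NULL, freeing_thread, NULL);

		/* Thread#2: isolate, then the single drain done in
		 * start_isolate_page_range(). */
		drain();

		pthread_join(t1, NULL);

		/* check_pages_isolated_cb() analogue: the range is not
		 * fully free while pcp_pages != 0.  Without the re-drain
		 * below, this loop spins forever -- the bug being fixed. */
		int retries = 0;
		while (pcp_pages != 0) {
			retries++;
			drain();  /* the "drain again" added by the patch */
		}
		printf("offline succeeded after %d extra drain(s)\n", retries);
		return 0;
	}

As in the patch itself, re-draining only when the isolation check fails
keeps the common path cheap while still guaranteeing forward progress.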