From patchwork Mon Jan 25 19:47:38 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 370367
From: Pavel Tatashin
Subject: [PATCH v8 01/14] mm/gup: don't pin migrated cma pages in movable zone
Date: Mon, 25 Jan 2021 14:47:38 -0500
Message-Id: <20210125194751.1275316-2-pasha.tatashin@soleen.com>
In-Reply-To: <20210125194751.1275316-1-pasha.tatashin@soleen.com>

In order not to fragment CMA, pinned pages are migrated. However, they
are currently migrated to ZONE_MOVABLE, which also must not contain
pinned pages. Remove __GFP_MOVABLE from the migration target mask, so
pages can be migrated to zones where pinning is allowed.

Signed-off-by: Pavel Tatashin
Reviewed-by: David Hildenbrand
Reviewed-by: John Hubbard
Acked-by: Michal Hocko
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index 3e086b073624..24f25b1e9103 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1563,7 +1563,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	long ret = nr_pages;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
-		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
+		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};
 
 check_again:
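To see why dropping __GFP_MOVABLE changes where migration targets land, here is a minimal user-space model of the zone choice. This is illustrative only: the flag values and gfp_zone_model() are stand-ins, not the kernel's definitions; in the kernel, gfp_zone() maps a mask containing __GFP_MOVABLE to ZONE_MOVABLE, and without the flag the allocation is capped at a pinnable zone.

#include <stdio.h>

/* Stand-in flag values; NOT the kernel's definitions. */
#define __GFP_MOVABLE 0x08u
#define __GFP_NOWARN  0x200u
#define GFP_USER      0x100u

enum zone_type { ZONE_NORMAL, ZONE_MOVABLE };

/* Simplified rule: __GFP_MOVABLE widens eligibility to ZONE_MOVABLE. */
static enum zone_type gfp_zone_model(unsigned int gfp_mask)
{
        return (gfp_mask & __GFP_MOVABLE) ? ZONE_MOVABLE : ZONE_NORMAL;
}

int main(void)
{
        unsigned int before = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN;
        unsigned int after  = GFP_USER | __GFP_NOWARN;

        printf("before: %s\n", gfp_zone_model(before) == ZONE_MOVABLE
               ? "ZONE_MOVABLE (bad target for long-term pins)"
               : "ZONE_NORMAL");
        printf("after:  %s\n", gfp_zone_model(after) == ZONE_MOVABLE
               ? "ZONE_MOVABLE"
               : "ZONE_NORMAL (pinning allowed)");
        return 0;
}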
From patchwork Mon Jan 25 19:47:41 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 370369
From: Pavel Tatashin
Subject: [PATCH v8 04/14] mm/gup: check for isolation errors
Date: Mon, 25 Jan 2021 14:47:41 -0500
Message-Id: <20210125194751.1275316-5-pasha.tatashin@soleen.com>
In-Reply-To: <20210125194751.1275316-1-pasha.tatashin@soleen.com>

It is still possible to pin movable CMA pages when there are isolation
errors: cma_page_list stays empty, so the re-check succeeds even though
some pages were never isolated. Check for isolation errors, and return
success only when there are no isolation errors and cma_page_list is
empty after the check. Because isolation errors are transient, retry
indefinitely.
Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region")
Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 60 ++++++++++++++++++++++++++++++++------------------------
 1 file changed, 34 insertions(+), 26 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 88ce41f41543..7ecca2d66dff 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1555,8 +1555,8 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					struct vm_area_struct **vmas,
 					unsigned int gup_flags)
 {
-	unsigned long i;
-	bool drain_allow = true;
+	unsigned long i, isolation_error_count;
+	bool drain_allow;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
 	struct page *prev_head, *head;
@@ -1567,6 +1567,8 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 
 check_again:
 	prev_head = NULL;
+	isolation_error_count = 0;
+	drain_allow = true;
 	for (i = 0; i < nr_pages; i++) {
 		head = compound_head(pages[i]);
 		if (head == prev_head)
@@ -1578,25 +1580,35 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		 * of the CMA zone if possible.
 		 */
 		if (is_migrate_cma_page(head)) {
-			if (PageHuge(head))
-				isolate_huge_page(head, &cma_page_list);
-			else {
+			if (PageHuge(head)) {
+				if (!isolate_huge_page(head, &cma_page_list))
+					isolation_error_count++;
+			} else {
 				if (!PageLRU(head) && drain_allow) {
 					lru_add_drain_all();
 					drain_allow = false;
 				}
 
-				if (!isolate_lru_page(head)) {
-					list_add_tail(&head->lru, &cma_page_list);
-					mod_node_page_state(page_pgdat(head),
-							    NR_ISOLATED_ANON +
-							    page_is_file_lru(head),
-							    thp_nr_pages(head));
+				if (isolate_lru_page(head)) {
+					isolation_error_count++;
+					continue;
 				}
+				list_add_tail(&head->lru, &cma_page_list);
+				mod_node_page_state(page_pgdat(head),
+						    NR_ISOLATED_ANON +
+						    page_is_file_lru(head),
+						    thp_nr_pages(head));
 			}
 		}
 	}
 
+	/*
+	 * If list is empty, and no isolation errors, means that all pages are
+	 * in the correct zone.
+	 */
+	if (list_empty(&cma_page_list) && !isolation_error_count)
+		return ret;
+
 	if (!list_empty(&cma_page_list)) {
 		/*
 		 * drop the above get_user_pages reference.
 		 */
@@ -1616,23 +1628,19 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			return ret > 0 ? -ENOMEM : ret;
 		}
 
-		/*
-		 * We did migrate all the pages, Try to get the page references
-		 * again migrating any new CMA pages which we failed to isolate
-		 * earlier.
-		 */
-		ret = __get_user_pages_locked(mm, start, nr_pages,
-					      pages, vmas, NULL,
-					      gup_flags);
-
-		if (ret > 0) {
-			nr_pages = ret;
-			drain_allow = true;
-			goto check_again;
-		}
+		/* We unpinned pages before migration, pin them again */
+		ret = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+					      NULL, gup_flags);
+		if (ret <= 0)
+			return ret;
+		nr_pages = ret;
 	}
 
-	return ret;
+	/*
+	 * check again because pages were unpinned, and we also might have
+	 * had isolation errors and need more pages to migrate.
+	 */
+	goto check_again;
 }
 #else
 static long check_and_migrate_cma_pages(struct mm_struct *mm,
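A minimal stand-alone sketch of the retry structure this patch establishes: success is returned only when no pages needed migration AND no isolation attempt failed; otherwise another pass runs. The page model and the injected transient failures below are stand-ins, not kernel code.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_PAGES 8

static bool needs_migration[NR_PAGES]; /* stand-in for !is_pinnable_page() */

/* Isolation can fail transiently (e.g. page briefly off the LRU). */
static bool isolate_page(void) { return rand() % 4 != 0; }

static int check_and_migrate(void)
{
        int pass = 0;

check_again:
        pass++;
        int isolation_error_count = 0;
        int on_migration_list = 0;

        for (int i = 0; i < NR_PAGES; i++) {
                if (!needs_migration[i])
                        continue;
                if (isolate_page()) {
                        on_migration_list++;        /* queued for migration */
                        needs_migration[i] = false; /* lands in pinnable zone */
                } else {
                        isolation_error_count++;    /* transient: retry pass */
                }
        }

        /* Success only when nothing was queued AND nothing failed isolation. */
        if (on_migration_list == 0 && isolation_error_count == 0)
                return 0;

        printf("pass %d: migrated %d page(s), %d isolation error(s)\n",
               pass, on_migration_list, isolation_error_count);
        goto check_again;
}

int main(void)
{
        for (int i = 0; i < NR_PAGES; i++)
                needs_migration[i] = (i % 3 == 0);
        return check_and_migrate();
}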
From patchwork Mon Jan 25 19:47:42 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 370368
From: Pavel Tatashin
Subject: [PATCH v8 05/14] mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN
Date: Mon, 25 Jan 2021 14:47:42 -0500
Message-Id: <20210125194751.1275316-6-pasha.tatashin@soleen.com>
In-Reply-To: <20210125194751.1275316-1-pasha.tatashin@soleen.com>

PF_MEMALLOC_NOCMA is used to guarantee that the allocator will not
return pages that might belong to the CMA region. This is currently used
by long-term gup to make sure such pins are not going to be done on any
CMA pages.

When PF_MEMALLOC_NOCMA was introduced we had not realized that it
focuses too narrowly on CMA pages: there is a larger class of pages that
needs the same treatment. ZONE_MOVABLE must not contain any long-term
pins either, so it makes sense to reuse and redefine this flag for that
use case as well. Rename the flag to PF_MEMALLOC_PIN, which defines an
allocation context that can only get pages suitable for long-term pins.

Also rename memalloc_nocma_save()/memalloc_nocma_restore() to
memalloc_pin_save()/memalloc_pin_restore() and make the new functions
common.

Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
Acked-by: Michal Hocko
---
 include/linux/sched.h    |  2 +-
 include/linux/sched/mm.h | 21 +++++----------------
 mm/gup.c                 |  4 ++--
 mm/hugetlb.c             |  4 ++--
 mm/page_alloc.c          |  4 ++--
 5 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e47a3823f408..f90a27611926 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1576,7 +1576,7 @@ extern struct pid *cad_pid;
 #define PF_SWAPWRITE		0x00800000	/* Allowed to write to swap */
 #define PF_NO_SETAFFINITY	0x04000000	/* Userland is not allowed to meddle with cpus_mask */
 #define PF_MCE_EARLY		0x08000000	/* Early kill for mce process policy */
-#define PF_MEMALLOC_NOCMA	0x10000000	/* All allocation request will have _GFP_MOVABLE cleared */
+#define PF_MEMALLOC_PIN		0x10000000	/* Allocation context constrained to zones which allow long term pinning. */
 #define PF_FREEZER_SKIP		0x40000000	/* Freezer should not count it as freezable */
 #define PF_SUSPEND_TASK		0x80000000	/* This thread called freeze_processes() and should not be frozen */

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 1ae08b8462a4..5f4dd3274734 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -270,29 +270,18 @@ static inline void memalloc_noreclaim_restore(unsigned int flags)
 	current->flags = (current->flags & ~PF_MEMALLOC) | flags;
 }
 
-#ifdef CONFIG_CMA
-static inline unsigned int memalloc_nocma_save(void)
+static inline unsigned int memalloc_pin_save(void)
 {
-	unsigned int flags = current->flags & PF_MEMALLOC_NOCMA;
+	unsigned int flags = current->flags & PF_MEMALLOC_PIN;
 
-	current->flags |= PF_MEMALLOC_NOCMA;
+	current->flags |= PF_MEMALLOC_PIN;
 	return flags;
 }
 
-static inline void memalloc_nocma_restore(unsigned int flags)
+static inline void memalloc_pin_restore(unsigned int flags)
 {
-	current->flags = (current->flags & ~PF_MEMALLOC_NOCMA) | flags;
+	current->flags = (current->flags & ~PF_MEMALLOC_PIN) | flags;
 }
-#else
-static inline unsigned int memalloc_nocma_save(void)
-{
-	return 0;
-}
-
-static inline void memalloc_nocma_restore(unsigned int flags)
-{
-}
-#endif
 
 #ifdef CONFIG_MEMCG
 DECLARE_PER_CPU(struct mem_cgroup *, int_active_memcg);

diff --git a/mm/gup.c b/mm/gup.c
index 7ecca2d66dff..857b273e32ac 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1669,7 +1669,7 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 	long rc;
 
 	if (gup_flags & FOLL_LONGTERM)
-		flags = memalloc_nocma_save();
+		flags = memalloc_pin_save();
 
 	rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 				     NULL, gup_flags);
@@ -1678,7 +1678,7 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 		if (rc > 0)
 			rc = check_and_migrate_cma_pages(mm, start, rc, pages,
 							 vmas, gup_flags);
-		memalloc_nocma_restore(flags);
+		memalloc_pin_restore(flags);
 	}
 	return rc;
 }

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a6bad1f686c5..2d79e515a7a3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1049,10 +1049,10 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 {
 	struct page *page;
-	bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);
+	bool pin = !!(current->flags & PF_MEMALLOC_PIN);
 
 	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
-		if (nocma && is_migrate_cma_page(page))
+		if (pin && is_migrate_cma_page(page))
 			continue;
 
 		if (PageHWPoison(page))

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b031a5ae0bd5..f92d7c810953 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3813,8 +3813,8 @@ static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
 #ifdef CONFIG_CMA
 	unsigned int pflags = current->flags;
 
-	if (!(pflags & PF_MEMALLOC_NOCMA) &&
-	    gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+	if (!(pflags & PF_MEMALLOC_PIN) &&
+	    gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
 		alloc_flags |= ALLOC_CMA;
 #endif
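The renamed helpers keep the usual scoped-flag save/restore pattern. Below is the same pattern modeled in user space (current_flags stands in for current->flags; everything else mirrors the patch): the previous flag value is saved on entry and restored on exit, which is what makes nesting safe.

#include <stdio.h>

#define PF_MEMALLOC_PIN 0x10000000u

static unsigned int current_flags; /* stand-in for current->flags */

static unsigned int memalloc_pin_save(void)
{
        unsigned int flags = current_flags & PF_MEMALLOC_PIN;

        current_flags |= PF_MEMALLOC_PIN;
        return flags;
}

static void memalloc_pin_restore(unsigned int flags)
{
        current_flags = (current_flags & ~PF_MEMALLOC_PIN) | flags;
}

int main(void)
{
        unsigned int outer = memalloc_pin_save(); /* enter pinned context */
        unsigned int inner = memalloc_pin_save(); /* nested: bit already set */

        memalloc_pin_restore(inner);              /* outer scope still pinned */
        printf("after inner restore: %s\n",
               current_flags & PF_MEMALLOC_PIN ? "pinned" : "not pinned");
        memalloc_pin_restore(outer);              /* fully restored */
        printf("after outer restore: %s\n",
               current_flags & PF_MEMALLOC_PIN ? "pinned" : "not pinned");
        return 0;
}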
From patchwork Mon Jan 25 19:47:43 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 370370
From: Pavel Tatashin
Subject: [PATCH v8 06/14] mm: apply per-task gfp constraints in fast path
Date: Mon, 25 Jan 2021 14:47:43 -0500
Message-Id: <20210125194751.1275316-7-pasha.tatashin@soleen.com>
In-Reply-To: <20210125194751.1275316-1-pasha.tatashin@soleen.com>

Function current_gfp_context() is called after the fast path. However,
soon we will add more constraints, which will also limit zones based on
context. Move this call into the fast path, and apply the correct
constraints for all allocations.

Also update .reclaim_idx based on the value returned by
current_gfp_context(), because it will soon modify the allowed zones.

Note: with this patch we do one extra current->flags load during the
fast path, but we already load current->flags in the fast path:

  __alloc_pages_nodemask()
    prepare_alloc_pages()
      current_alloc_flags(gfp_mask, *alloc_flags);

Later, when we add the zone constraint logic to current_gfp_context(),
we will be able to remove the current->flags load from
current_alloc_flags() and therefore return the fast path to its current
performance level.

Suggested-by: Michal Hocko
Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
 mm/page_alloc.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f92d7c810953..c93e801a45e9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4981,6 +4981,13 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	}
 
 	gfp_mask &= gfp_allowed_mask;
+	/*
+	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
+	 * resp. GFP_NOIO which has to be inherited for all allocation requests
+	 * from a particular context which has been marked by
+	 * memalloc_no{fs,io}_{save,restore}.
+	 */
+	gfp_mask = current_gfp_context(gfp_mask);
 	alloc_mask = gfp_mask;
 	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
 		return NULL;
@@ -4996,13 +5003,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	if (likely(page))
 		goto out;
 
-	/*
-	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
-	 * resp. GFP_NOIO which has to be inherited for all allocation requests
-	 * from a particular context which has been marked by
-	 * memalloc_no{fs,io}_{save,restore}.
-	 */
-	alloc_mask = current_gfp_context(gfp_mask);
+	alloc_mask = gfp_mask;
 	ac.spread_dirty_pages = false;
 
 	/*
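A small user-space model of the ordering change: current_gfp_context() now filters the mask once, before the fast path, so scoped constraints are honored everywhere. The types, flag values, and alloc_fastpath() below are illustrative stand-ins, not the kernel's.

#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_FS         0x1u /* stand-in flag values */
#define PF_MEMALLOC_NOFS 0x2u

static unsigned int task_flags = PF_MEMALLOC_NOFS; /* scoped NOFS context */

static gfp_t current_gfp_context(gfp_t gfp)
{
        /* Inherited for every allocation from the marked context. */
        if (task_flags & PF_MEMALLOC_NOFS)
                gfp &= ~__GFP_FS;
        return gfp;
}

static void alloc_fastpath(gfp_t gfp)
{
        printf("fast path sees __GFP_FS: %s\n", gfp & __GFP_FS ? "yes" : "no");
}

int main(void)
{
        gfp_t gfp_mask = __GFP_FS;

        /* After the patch: constraints applied before the fast path. */
        gfp_mask = current_gfp_context(gfp_mask);
        alloc_fastpath(gfp_mask); /* prints "no" */
        return 0;
}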
From patchwork Mon Jan 25 19:47:46 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 370371
From: Pavel Tatashin
Subject: [PATCH v8 09/14] mm/gup: migrate pinned pages out of movable zone
Date: Mon, 25 Jan 2021 14:47:46 -0500
Message-Id: <20210125194751.1275316-10-pasha.tatashin@soleen.com>
In-Reply-To: <20210125194751.1275316-1-pasha.tatashin@soleen.com>

We should not pin pages in ZONE_MOVABLE. Currently, only movable CMA
pages are excluded from pinning. Generalize the function that migrates
CMA pages to migrate all movable pages. Use is_pinnable_page() to check
which pages need to be migrated.

Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
 include/linux/migrate.h        |  1 +
 include/linux/mmzone.h         |  9 ++++-
 include/trace/events/migrate.h |  3 +-
 mm/gup.c                       | 67 +++++++++++++++++-----------------
 4 files changed, 44 insertions(+), 36 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3a389633b68f..fdf65f23acec 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -27,6 +27,7 @@ enum migrate_reason {
 	MR_MEMPOLICY_MBIND,
 	MR_NUMA_MISPLACED,
 	MR_CONTIG_RANGE,
+	MR_LONGTERM_PIN,
 	MR_TYPES
 };

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 72b0b6eba854..300eb31439cb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -407,8 +407,13 @@ enum zone_type {
 	 * to increase the number of THP/huge pages. Notable special cases are:
 	 *
 	 * 1. Pinned pages: (long-term) pinning of movable pages might
-	 *    essentially turn such pages unmovable. Memory offlining might
-	 *    retry a long time.
+	 *    essentially turn such pages unmovable. Therefore, we do not allow
+	 *    pinning long-term pages in ZONE_MOVABLE. When pages are pinned and
+	 *    faulted, they come from the right zone right away. However, it is
+	 *    still possible that the address space already has pages in
+	 *    ZONE_MOVABLE at the time when pages are pinned (i.e. the user has
+	 *    touched that memory before pinning). In such a case we migrate
+	 *    them to a different zone. When migration fails - pinning fails.
 	 * 2. memblock allocations: kernelcore/movablecore setups might create
 	 *    situations where ZONE_MOVABLE contains unmovable allocations
 	 *    after boot. Memory offlining and allocations fail early.
diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h
index 4d434398d64d..363b54ce104c 100644
--- a/include/trace/events/migrate.h
+++ b/include/trace/events/migrate.h
@@ -20,7 +20,8 @@
 	EM( MR_SYSCALL,		"syscall_or_cpuset")		\
 	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")		\
 	EM( MR_NUMA_MISPLACED,	"numa_misplaced")		\
-	EMe(MR_CONTIG_RANGE,	"contig_range")
+	EM( MR_CONTIG_RANGE,	"contig_range")			\
+	EMe(MR_LONGTERM_PIN,	"longterm_pin")
 
 /*
  * First define the enums in the above macros to be exported to userspace

diff --git a/mm/gup.c b/mm/gup.c
index 857b273e32ac..df29825305f8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -89,11 +89,12 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
 		int orig_refs = refs;
 
 		/*
-		 * Can't do FOLL_LONGTERM + FOLL_PIN with CMA in the gup fast
-		 * path, so fail and let the caller fall back to the slow path.
+		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
+		 * right zone, so fail and let the caller fall back to the slow
+		 * path.
 		 */
-		if (unlikely(flags & FOLL_LONGTERM) &&
-		    is_migrate_cma_page(page))
+		if (unlikely((flags & FOLL_LONGTERM) &&
+			     !is_pinnable_page(page)))
 			return NULL;
 
 		/*
@@ -1547,17 +1548,17 @@ struct page *get_dump_page(unsigned long addr)
 }
 #endif /* CONFIG_ELF_CORE */
 
-#ifdef CONFIG_CMA
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
+#ifdef CONFIG_MIGRATION
+static long check_and_migrate_movable_pages(struct mm_struct *mm,
+					    unsigned long start,
+					    unsigned long nr_pages,
+					    struct page **pages,
+					    struct vm_area_struct **vmas,
+					    unsigned int gup_flags)
 {
 	unsigned long i, isolation_error_count;
 	bool drain_allow;
-	LIST_HEAD(cma_page_list);
+	LIST_HEAD(movable_page_list);
 	long ret = nr_pages;
 	struct page *prev_head, *head;
 	struct migration_target_control mtc = {
@@ -1575,13 +1576,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			continue;
 		prev_head = head;
 		/*
-		 * If we get a page from the CMA zone, since we are going to
-		 * be pinning these entries, we might as well move them out
-		 * of the CMA zone if possible.
+		 * If we get a movable page, since we are going to be pinning
+		 * these entries, try to move them out if possible.
 		 */
-		if (is_migrate_cma_page(head)) {
+		if (!is_pinnable_page(head)) {
 			if (PageHuge(head)) {
-				if (!isolate_huge_page(head, &cma_page_list))
+				if (!isolate_huge_page(head, &movable_page_list))
 					isolation_error_count++;
 			} else {
 				if (!PageLRU(head) && drain_allow) {
@@ -1593,7 +1593,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					isolation_error_count++;
 					continue;
 				}
-				list_add_tail(&head->lru, &cma_page_list);
+				list_add_tail(&head->lru, &movable_page_list);
 				mod_node_page_state(page_pgdat(head),
 						    NR_ISOLATED_ANON +
 						    page_is_file_lru(head),
@@ -1606,10 +1606,10 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	 * If list is empty, and no isolation errors, means that all pages are
 	 * in the correct zone.
 	 */
-	if (list_empty(&cma_page_list) && !isolation_error_count)
+	if (list_empty(&movable_page_list) && !isolation_error_count)
 		return ret;
 
-	if (!list_empty(&cma_page_list)) {
+	if (!list_empty(&movable_page_list)) {
 		/*
 		 * drop the above get_user_pages reference.
 		 */
@@ -1619,12 +1619,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);
 
-		ret = migrate_pages(&cma_page_list, alloc_migration_target,
+		ret = migrate_pages(&movable_page_list, alloc_migration_target,
 				    NULL, (unsigned long)&mtc, MIGRATE_SYNC,
-				    MR_CONTIG_RANGE);
+				    MR_LONGTERM_PIN);
 		if (ret) {
-			if (!list_empty(&cma_page_list))
-				putback_movable_pages(&cma_page_list);
+			if (!list_empty(&movable_page_list))
+				putback_movable_pages(&movable_page_list);
 			return ret > 0 ? -ENOMEM : ret;
 		}
 
@@ -1643,16 +1643,16 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	goto check_again;
 }
 #else
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
+static long check_and_migrate_movable_pages(struct mm_struct *mm,
+					    unsigned long start,
+					    unsigned long nr_pages,
+					    struct page **pages,
+					    struct vm_area_struct **vmas,
+					    unsigned int gup_flags)
 {
 	return nr_pages;
 }
-#endif /* CONFIG_CMA */
+#endif /* CONFIG_MIGRATION */
 
 /*
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
@@ -1676,8 +1676,9 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 
 	if (gup_flags & FOLL_LONGTERM) {
 		if (rc > 0)
-			rc = check_and_migrate_cma_pages(mm, start, rc, pages,
-							 vmas, gup_flags);
+			rc = check_and_migrate_movable_pages(mm, start, rc,
+							     pages, vmas,
+							     gup_flags);
 		memalloc_pin_restore(flags);
 	}
 	return rc;
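is_pinnable_page() is introduced earlier in this series (not shown in this excerpt). Conceptually it rejects pages in ZONE_MOVABLE and pages on CMA pageblocks; here is a stand-alone model with stand-in types that captures that rule.

#include <assert.h>
#include <stdbool.h>

enum zone_type { ZONE_NORMAL, ZONE_MOVABLE };

struct page_model {
        enum zone_type zone;
        bool is_cma;
};

/* Model: pinnable means neither in ZONE_MOVABLE nor on a CMA pageblock. */
static bool is_pinnable_page_model(const struct page_model *page)
{
        return page->zone != ZONE_MOVABLE && !page->is_cma;
}

int main(void)
{
        struct page_model normal = { ZONE_NORMAL,  false };
        struct page_model cma    = { ZONE_NORMAL,  true  };
        struct page_model mov    = { ZONE_MOVABLE, false };

        assert(is_pinnable_page_model(&normal));
        assert(!is_pinnable_page_model(&cma)); /* must migrate before pin */
        assert(!is_pinnable_page_model(&mov)); /* must migrate before pin */
        return 0;
}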
From patchwork Mon Jan 25 19:47:48 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 370372
From: Pavel Tatashin
Subject: [PATCH v8 11/14] mm/gup: change index type to long as it counts pages
Date: Mon, 25 Jan 2021 14:47:48 -0500
Message-Id: <20210125194751.1275316-12-pasha.tatashin@soleen.com>
In-Reply-To: <20210125194751.1275316-1-pasha.tatashin@soleen.com>

In __get_user_pages_locked(), the index i counts pages and should
therefore be long: long is used everywhere else to hold page counts, and
a 32-bit int is becoming increasingly small for values proportional to
page counts.

Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index df29825305f8..f98af75dab0f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1479,7 +1479,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 {
 	struct vm_area_struct *vma;
 	unsigned long vm_flags;
-	int i;
+	long i;
 
 	/* calculate required read or write permissions.
 	 * If FOLL_FORCE is set, we only require the "MAY" flags.
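To see why a 32-bit index becomes too small, consider the page count of a large mapping. A short sketch; it assumes an LP64 target (64-bit long), and the truncated int value is what a typical implementation produces.

#include <stdio.h>

#define PAGE_SHIFT 12 /* 4 KiB pages */

int main(void)
{
        unsigned long bytes = 16UL << 40;            /* 16 TiB mapping      */
        long nr_pages = (long)(bytes >> PAGE_SHIFT); /* 2^32 pages          */
        int  as_int   = (int)nr_pages;               /* typically wraps to 0 */

        printf("nr_pages as long: %ld\n", nr_pages);
        printf("nr_pages as int : %d (overflowed)\n", as_int);
        return 0;
}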
From patchwork Mon Jan 25 19:47:51 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 370373
From: Pavel Tatashin
Subject: [PATCH v8 14/14] selftests/vm: gup_test: test faulting in kernel, and verify pinnable pages
Date: Mon, 25 Jan 2021 14:47:51 -0500
Message-Id: <20210125194751.1275316-15-pasha.tatashin@soleen.com>
In-Reply-To: <20210125194751.1275316-1-pasha.tatashin@soleen.com>

When pages are pinned they can be faulted in userland and migrated, and
they can be faulted right in the kernel without migration. In either
case, the pinned pages must end up being pinnable (not movable).

Add a new test to gup_test to help verify that the gup/pup
(get_user_pages() / pin_user_pages()) behavior with respect to pinnable
and movable pages is reasonable and correct. Specifically, provide a way
to:

1) Verify that only "pinnable" pages are pinned. This is checked
   automatically for you.

2) Verify that gup/pup performance is reasonable. This requires
   comparing benchmarks between doing gup/pup on pages that have been
   pre-faulted in from user space, vs. doing gup/pup on pages that are
   not faulted in until gup/pup time (via FOLL_TOUCH). This decision is
   controlled with the new -z command line option.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
 mm/gup_test.c                         |  6 ++++++
 tools/testing/selftests/vm/gup_test.c | 23 +++++++++++++++++++----
 2 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/mm/gup_test.c b/mm/gup_test.c
index a6ed1c877679..d974dec19e1c 100644
--- a/mm/gup_test.c
+++ b/mm/gup_test.c
@@ -52,6 +52,12 @@ static void verify_dma_pinned(unsigned int cmd, struct page **pages,
 
 				dump_page(page, "gup_test failure");
 				break;
+			} else if (cmd == PIN_LONGTERM_BENCHMARK &&
+				   WARN(!is_pinnable_page(page),
+					"pages[%lu] is NOT pinnable but pinned\n",
+					i)) {
+				dump_page(page, "gup_test failure");
+				break;
 			}
 		}
 		break;

diff --git a/tools/testing/selftests/vm/gup_test.c b/tools/testing/selftests/vm/gup_test.c
index 943cc2608dc2..1e662d59c502 100644
--- a/tools/testing/selftests/vm/gup_test.c
+++ b/tools/testing/selftests/vm/gup_test.c
@@ -13,6 +13,7 @@
 
 /* Just the flags we need, copied from mm.h: */
 #define FOLL_WRITE	0x01	/* check pte is writable */
+#define FOLL_TOUCH	0x02	/* mark page accessed */
 
 static char *cmd_to_str(unsigned long cmd)
 {
@@ -39,11 +40,11 @@ int main(int argc, char **argv)
 	unsigned long size = 128 * MB;
 	int i, fd, filed, opt, nr_pages = 1, thp = -1, repeats = 1, write = 1;
 	unsigned long cmd = GUP_FAST_BENCHMARK;
-	int flags = MAP_PRIVATE;
+	int flags = MAP_PRIVATE, touch = 0;
 	char *file = "/dev/zero";
 	char *p;
 
-	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwWSHp")) != -1) {
+	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwWSHpz")) != -1) {
 		switch (opt) {
 		case 'a':
 			cmd = PIN_FAST_BENCHMARK;
@@ -110,6 +111,10 @@ int main(int argc, char **argv)
 		case 'H':
 			flags |= (MAP_HUGETLB | MAP_ANONYMOUS);
 			break;
+		case 'z':
+			/* fault pages in gup, do not fault in userland */
+			touch = 1;
+			break;
 		default:
 			return -1;
 		}
@@ -167,8 +172,18 @@ int main(int argc, char **argv)
 	else if (thp == 0)
 		madvise(p, size, MADV_NOHUGEPAGE);
 
-	for (; (unsigned long)p < gup.addr + size; p += PAGE_SIZE)
-		p[0] = 0;
+	/*
+	 * FOLL_TOUCH, in gup_test, is used as an either/or case: either
+	 * fault pages in from the kernel via FOLL_TOUCH, or fault them
+	 * in here, from user space. This allows comparison of performance
+	 * between those two cases.
+	 */
+	if (touch) {
+		gup.gup_flags |= FOLL_TOUCH;
+	} else {
+		for (; (unsigned long)p < gup.addr + size; p += PAGE_SIZE)
+			p[0] = 0;
+	}
 
 	/* Only report timing information on the *_BENCHMARK commands: */
 	if ((cmd == PIN_FAST_BENCHMARK) || (cmd == GUP_FAST_BENCHMARK) ||