From patchwork Fri Oct 9 19:49:37 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 286583
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC PKS/PMEM 02/58]
 x86/pks/test: Add testing for global option
Date: Fri, 9 Oct 2020 12:49:37 -0700
Message-Id: <20201009195033.3208459-3-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

Now that PKS can be enabled globally (for all threads), add a test which spawns a thread and tests the same PKS functionality. The test enables/disables PKS in one thread while attempting to access the page in another thread.

We use the same test array as in the 'local' PKS testing.

Signed-off-by: Ira Weiny
---
 arch/x86/mm/fault.c |   4 ++
 lib/pks/pks_test.c  | 128 +++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 124 insertions(+), 8 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 4b4ff9efa298..4c74f52fbc23 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1108,6 +1108,10 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte,
 	if (global_pkey_is_enabled(pte, is_write, irq_state))
 		return 1;

+	/*
+	 * NOTE: This must be after the global_pkey_is_enabled() call
+	 * to allow the fixup code to be tested.
+	 */
 	if (handle_pks_testing(error_code, irq_state))
 		return 1;

diff --git a/lib/pks/pks_test.c b/lib/pks/pks_test.c
index 286c8b8457da..dfddccbe4cb6 100644
--- a/lib/pks/pks_test.c
+++ b/lib/pks/pks_test.c
@@ -154,7 +154,8 @@ static void check_exception(irqentry_state_t *irq_state)
 	}

 	/* Check the exception state */
-	if (!check_pkrs(test_armed_key, PKEY_DISABLE_ACCESS)) {
+	if (!check_pkrs(test_armed_key,
+			PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)) {
 		pr_err(" FAIL: PKRS cache and MSR\n");
 		test_exception_ctx->pass = false;
 	}
@@ -308,24 +309,29 @@ static int test_it(struct pks_test_ctx *ctx, struct pks_access_test *test, void
 	return ret;
 }

-static int run_access_test(struct pks_test_ctx *ctx,
-			   struct pks_access_test *test,
-			   void *ptr)
+static void set_protection(int pkey, enum pks_access_mode mode, bool global)
 {
-	switch (test->mode) {
+	switch (mode) {
 	case PKS_TEST_NO_ACCESS:
-		pks_mknoaccess(ctx->pkey, false);
+		pks_mknoaccess(pkey, global);
 		break;
 	case PKS_TEST_RDWR:
-		pks_mkrdwr(ctx->pkey, false);
+		pks_mkrdwr(pkey, global);
 		break;
 	case PKS_TEST_RDONLY:
-		pks_mkread(ctx->pkey, false);
+		pks_mkread(pkey, global);
 		break;
 	default:
 		pr_err("BUG in test invalid mode\n");
 		break;
 	}
+}
+
+static int run_access_test(struct pks_test_ctx *ctx,
+			   struct pks_access_test *test,
+			   void *ptr)
+{
+	set_protection(ctx->pkey, test->mode, false);
 	return test_it(ctx, test, ptr);
 }

@@ -516,6 +522,110 @@ static void run_exception_test(void)
 		pass ? "PASS" : "FAIL");
 }

+struct shared_data {
+	struct mutex lock;
+	struct pks_test_ctx *ctx;
+	void *kmap_addr;
+	struct pks_access_test *test;
+};
+
+static int thread_main(void *d)
+{
+	struct shared_data *data = d;
+	struct pks_test_ctx *ctx = data->ctx;
+
+	while (!kthread_should_stop()) {
+		mutex_lock(&data->lock);
+		/*
+		 * Wait for the main thread to hand us the page.
+		 * We should be spinning, so hopefully we will not have
+		 * picked up the global value from a schedule in.
+		 */
+		if (data->kmap_addr) {
+			if (test_it(ctx, data->test, data->kmap_addr))
+				ctx->pass = false;
+			data->kmap_addr = NULL;
+		}
+		mutex_unlock(&data->lock);
+	}
+
+	return 0;
+}
+
+static void run_thread_access_test(struct shared_data *data,
+				   struct pks_test_ctx *ctx,
+				   struct pks_access_test *test,
+				   void *ptr)
+{
+	set_protection(ctx->pkey, test->mode, true);
+
+	pr_info("checking... mode %s; write %s\n",
+		get_mode_str(test->mode), test->write ? "TRUE" : "FALSE");
+
+	mutex_lock(&data->lock);
+	data->test = test;
+	data->kmap_addr = ptr;
+	mutex_unlock(&data->lock);
+
+	while (data->kmap_addr) {
+		msleep(10);
+	}
+}
+
+static void run_global_test(void)
+{
+	struct task_struct *other_task;
+	struct pks_test_ctx *ctx;
+	struct shared_data data;
+	bool pass = true;
+	void *ptr;
+	int i;
+
+	pr_info(" ***** BEGIN: global pkey checking\n");
+
+	/* Set up context, data page, and thread */
+	ctx = alloc_ctx("global pkey test");
+	if (IS_ERR(ctx)) {
+		pr_err(" FAIL: no context\n");
+		pass = false;
+		goto result;
+	}
+	ptr = alloc_test_page(ctx->pkey);
+	if (!ptr) {
+		pr_err(" FAIL: no vmalloc page\n");
+		pass = false;
+		goto free_context;
+	}
+	memset(&data, 0, sizeof(data));
+	mutex_init(&data.lock);
+	data.ctx = ctx;
+
+	other_task = kthread_run(thread_main, &data, "PKRS global test");
+	if (IS_ERR(other_task)) {
+		pr_err(" FAIL: Failed to start thread\n");
+		pass = false;
+		goto free_page;
+	}
+
+	/* Start testing */
+	ctx->pass = true;
+
+	for (i = 0; i < ARRAY_SIZE(pkey_test_ary); i++) {
+		run_thread_access_test(&data, ctx, &pkey_test_ary[i], ptr);
+	}
+
+	kthread_stop(other_task);
+	pass = ctx->pass;
+
+free_page:
+	vfree(ptr);
+free_context:
+	free_ctx(ctx);
+result:
+	pr_info(" ***** END: global pkey checking : %s\n",
+		pass ? "PASS" : "FAIL");
+}
+
 static void run_all(void)
 {
 	struct pks_test_ctx *ctx[PKS_NUM_KEYS];
@@ -538,6 +648,8 @@ static void run_all(void)
 	}

 	run_exception_test();
+
+	run_global_test();
 }

 static void crash_it(void)

From patchwork Fri Oct 9 19:49:39 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 286555
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC PKS/PMEM 04/58] kmap: Add stray access protection for device pages
Date: Fri, 9 Oct 2020 12:49:39 -0700
Message-Id: <20201009195033.3208459-5-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

Device managed pages may have additional protections. These protections need to be removed prior to valid use by kernel users.

Check for special treatment of device managed pages in kmap and take action if needed. We use kmap as an interface for generic kernel code because, under normal circumstances, it would be a bug for general kernel code to not use kmap prior to accessing kernel memory. Therefore, this should allow any valid kernel user to use these pages seamlessly and without issues.

Because of the critical nature of kmap, it must be pointed out that the overhead on regular DRAM is carefully kept as low as possible. Furthermore, the underlying MSR write required on device pages, when they are protected, is less expensive than a typical MSR write: WRMSR(MSR_IA32_PKRS) is not serializing, but it still maintains ordering properties similar to WRPKRU. The current SDM section on PKRS needs updating, but the behavior should be the same as that of WRPKRU. So, to quote from the WRPKRU text: WRPKRU will never execute speculatively.
Memory accesses affected by PKRU register will not execute (even speculatively) until all prior executions of WRPKRU have completed execution and updated the PKRU register. Still this will make accessing pmem more expensive from the kernel but the overhead is minimized and many pmem users access this memory through user page mappings which are not affected at all. Cc: Randy Dunlap Signed-off-by: Ira Weiny --- include/linux/highmem.h | 32 +++++++++++++++++++++++++++++++- 1 file changed, 31 insertions(+), 1 deletion(-) diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 14e6202ce47f..2a9806e3b8d2 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -8,6 +8,7 @@ #include #include #include +#include #include @@ -31,6 +32,20 @@ static inline void invalidate_kernel_vmap_range(void *vaddr, int size) #include +static inline void dev_page_enable_access(struct page *page, bool global) +{ + if (!page_is_access_protected(page)) + return; + dev_access_enable(global); +} + +static inline void dev_page_disable_access(struct page *page, bool global) +{ + if (!page_is_access_protected(page)) + return; + dev_access_disable(global); +} + #ifdef CONFIG_HIGHMEM extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot); extern void kunmap_atomic_high(void *kvaddr); @@ -55,6 +70,11 @@ static inline void *kmap(struct page *page) else addr = kmap_high(page); kmap_flush_tlb((unsigned long)addr); + /* + * Even non-highmem pages may have additional access protections which + * need to be checked and potentially enabled. + */ + dev_page_enable_access(page, true); return addr; } @@ -63,6 +83,11 @@ void kunmap_high(struct page *page); static inline void kunmap(struct page *page) { might_sleep(); + /* + * Even non-highmem pages may have additional access protections which + * need to be checked and potentially disabled. 
+ */ + dev_page_disable_access(page, true); if (!PageHighMem(page)) return; kunmap_high(page); @@ -85,6 +110,7 @@ static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) { preempt_disable(); pagefault_disable(); + dev_page_enable_access(page, false); if (!PageHighMem(page)) return page_address(page); return kmap_atomic_high_prot(page, prot); @@ -137,6 +163,7 @@ static inline unsigned long totalhigh_pages(void) { return 0UL; } static inline void *kmap(struct page *page) { might_sleep(); + dev_page_enable_access(page, true); return page_address(page); } @@ -146,6 +173,7 @@ static inline void kunmap_high(struct page *page) static inline void kunmap(struct page *page) { + dev_page_disable_access(page, true); #ifdef ARCH_HAS_FLUSH_ON_KUNMAP kunmap_flush_on_unmap(page_address(page)); #endif @@ -155,6 +183,7 @@ static inline void *kmap_atomic(struct page *page) { preempt_disable(); pagefault_disable(); + dev_page_enable_access(page, false); return page_address(page); } #define kmap_atomic_prot(page, prot) kmap_atomic(page) @@ -216,7 +245,8 @@ static inline void kmap_atomic_idx_pop(void) #define kunmap_atomic(addr) \ do { \ BUILD_BUG_ON(__same_type((addr), struct page *)); \ - kunmap_atomic_high(addr); \ + dev_page_disable_access(kmap_to_page(addr), false); \ + kunmap_atomic_high(addr); \ pagefault_enable(); \ preempt_enable(); \ } while (0) From patchwork Fri Oct 9 19:49:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286556 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A1D5C433DF for ; Fri, 9 Oct 2020 20:10:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 120682053B for ; Fri, 9 Oct 2020 20:10:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391537AbgJIUKI (ORCPT ); Fri, 9 Oct 2020 16:10:08 -0400 Received: from mga11.intel.com ([192.55.52.93]:40406 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403832AbgJITvG (ORCPT ); Fri, 9 Oct 2020 15:51:06 -0400 IronPort-SDR: SD9nz5fTmWZyBOKnNyNhFLCkSRCkJ1a/XNJy/OvFfOQBQwdny4dR7EruCRRxmv68e4U8o8tfxx MKHZNHFKQmKA== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162067781" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162067781" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:03 -0700 IronPort-SDR: Zj7B8Ym6Opm+j7NgzILIxHvFMHLpJ+KeG5qMdRlGsqJzYlMhgHY+wciO+/VIwDrpca7gdal3tS LaUxL8Xa1x8w== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="343971995" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:02 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira 
Weiny, Randy Dunlap
Subject: [PATCH RFC PKS/PMEM 05/58] kmap: Introduce k[un]map_thread
Date: Fri, 9 Oct 2020 12:49:40 -0700
Message-Id: <20201009195033.3208459-6-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

To correctly support the semantics of kmap() with kernel protection keys (PKS), kmap() may be required to set the protections on multiple processors (globally). Enabling PKS globally can be very expensive depending on the requested operation. Furthermore, enabling a domain globally reduces the protection afforded by PKS.

Most kmap() callers (approximately 209 of 229) use the mapping within a single thread and have no need for the protection domain to be enabled globally. However, the remaining callers do not follow this pattern and, as best I can tell, expect the mapping to be 'global' and available to any thread that may access it.[1]

We do not anticipate global mappings to pmem; however, in general there is a danger in changing the semantics of kmap(). Effectively, this would cause an unresolved page fault with little to no information about why the failure occurred.

To resolve this, a number of options were considered.

1) Attempt to change all the thread-local kmap() calls to kmap_atomic()[2]
2) Introduce a flags parameter to kmap() to indicate if the mapping should be global or not
3) Change ~20 call sites to 'kmap_global()' to indicate that they require a global enablement of the pages.
4) Change ~209 call sites to 'kmap_thread()' to indicate that the mapping is to be used within that thread of execution only

Option 1 is simply not feasible. Option 2 would require all of the call sites of kmap() to change. Option 3 seems like a good minimal change, but there is a danger that new code may miss the semantic change of kmap() and not get the behavior the developer intended. Therefore, #4 was chosen.
Subsequent patches will convert roughly 90% of the kmap() callers to this new call, leaving about 10% of the existing kmap() callers to enable PKS globally.

Cc: Randy Dunlap
Signed-off-by: Ira Weiny
---
 include/linux/highmem.h | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 2a9806e3b8d2..ef7813544719 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -60,7 +60,7 @@ static inline void kmap_flush_tlb(unsigned long addr) { }
 #endif

 void *kmap_high(struct page *page);
-static inline void *kmap(struct page *page)
+static inline void *__kmap(struct page *page, bool global)
 {
 	void *addr;

@@ -74,20 +74,20 @@ static inline void *kmap(struct page *page)
 	 * Even non-highmem pages may have additional access protections which
 	 * need to be checked and potentially enabled.
 	 */
-	dev_page_enable_access(page, true);
+	dev_page_enable_access(page, global);
 	return addr;
 }

 void kunmap_high(struct page *page);

-static inline void kunmap(struct page *page)
+static inline void __kunmap(struct page *page, bool global)
 {
 	might_sleep();
 	/*
 	 * Even non-highmem pages may have additional access protections which
 	 * need to be checked and potentially disabled.
 	 */
-	dev_page_disable_access(page, true);
+	dev_page_disable_access(page, global);
 	if (!PageHighMem(page))
 		return;
 	kunmap_high(page);
@@ -160,10 +160,10 @@ static inline struct page *kmap_to_page(void *addr)

 static inline unsigned long totalhigh_pages(void) { return 0UL; }

-static inline void *kmap(struct page *page)
+static inline void *__kmap(struct page *page, bool global)
 {
 	might_sleep();
-	dev_page_enable_access(page, true);
+	dev_page_enable_access(page, global);
 	return page_address(page);
 }

@@ -171,9 +171,9 @@ static inline void kunmap_high(struct page *page)
 {
 }

-static inline void kunmap(struct page *page)
+static inline void __kunmap(struct page *page, bool global)
 {
-	dev_page_disable_access(page, true);
+	dev_page_disable_access(page, global);
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
@@ -238,6 +238,24 @@ static inline void kmap_atomic_idx_pop(void)

 #endif

+static inline void *kmap(struct page *page)
+{
+	return __kmap(page, true);
+}
+static inline void kunmap(struct page *page)
+{
+	__kunmap(page, true);
+}
+
+static inline void *kmap_thread(struct page *page)
+{
+	return __kmap(page, false);
+}
+static inline void kunmap_thread(struct page *page)
+{
+	__kunmap(page, false);
+}
+
 /*
  * Prevent people trying to call kunmap_atomic() as if it were kunmap()
  * kunmap_atomic() should get the return value of kmap_atomic, not the page.
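For reference, the conversion pattern that the follow-on patches in this series apply looks roughly like the sketch below. The helper copy_page_to_buf() and its parameters are hypothetical and only illustrate the intent: a mapping that is created, used, and torn down entirely within one thread of execution switches to the thread-local variant and thereby avoids a global PKRS update on every CPU.

#include <linux/errno.h>
#include <linux/highmem.h>
#include <linux/string.h>
#include <linux/types.h>

/*
 * Illustrative sketch only (not part of this series): copy the contents
 * of a page into a kernel buffer.  The mapping is never handed to
 * another thread, so kmap_thread()/kunmap_thread() is sufficient.
 */
static int copy_page_to_buf(struct page *page, void *buf, size_t len)
{
	void *vaddr;

	if (len > PAGE_SIZE)
		return -EINVAL;

	vaddr = kmap_thread(page);	/* was: kmap(page) */
	memcpy(buf, vaddr, len);
	kunmap_thread(page);		/* was: kunmap(page) */

	return 0;
}

A mapping that must remain usable by other threads keeps plain kmap()/kunmap(), which now enables the device-page protection globally.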
From patchwork Fri Oct 9 19:49:42 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 286557
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC
PKS/PMEM 07/58] drivers/drbd: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:42 -0700 Message-Id: <20201009195033.3208459-8-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this driver are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Jens Axboe Signed-off-by: Ira Weiny --- drivers/block/drbd/drbd_main.c | 4 ++-- drivers/block/drbd/drbd_receiver.c | 12 ++++++------ 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c index 573dbf6f0c31..f0d0c6b0745e 100644 --- a/drivers/block/drbd/drbd_main.c +++ b/drivers/block/drbd/drbd_main.c @@ -1532,9 +1532,9 @@ static int _drbd_no_send_page(struct drbd_peer_device *peer_device, struct page int err; socket = peer_device->connection->data.socket; - addr = kmap(page) + offset; + addr = kmap_thread(page) + offset; err = drbd_send_all(peer_device->connection, socket, addr, size, msg_flags); - kunmap(page); + kunmap_thread(page); if (!err) peer_device->device->send_cnt += size >> 9; return err; diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c index 422363daa618..4704bc0564e2 100644 --- a/drivers/block/drbd/drbd_receiver.c +++ b/drivers/block/drbd/drbd_receiver.c @@ -1951,13 +1951,13 @@ read_in_block(struct drbd_peer_device *peer_device, u64 id, sector_t sector, page = peer_req->pages; page_chain_for_each(page) { unsigned len = min_t(int, ds, PAGE_SIZE); - data = kmap(page); + data = kmap_thread(page); err = drbd_recv_all_warn(peer_device->connection, data, len); if (drbd_insert_fault(device, DRBD_FAULT_RECEIVE)) { drbd_err(device, "Fault injection: Corrupting data on receive\n"); data[0] = data[0] ^ (unsigned long)-1; } - kunmap(page); + kunmap_thread(page); if (err) { drbd_free_peer_req(device, peer_req); return NULL; @@ -1992,7 +1992,7 @@ static int drbd_drain_block(struct drbd_peer_device *peer_device, int data_size) page = drbd_alloc_pages(peer_device, 1, 1); - data = kmap(page); + data = kmap_thread(page); while (data_size) { unsigned int len = min_t(int, data_size, PAGE_SIZE); @@ -2001,7 +2001,7 @@ static int drbd_drain_block(struct drbd_peer_device *peer_device, int data_size) break; data_size -= len; } - kunmap(page); + kunmap_thread(page); drbd_free_pages(peer_device->device, page, 0); return err; } @@ -2033,10 +2033,10 @@ static int recv_dless_read(struct drbd_peer_device *peer_device, struct drbd_req D_ASSERT(peer_device->device, sector == bio->bi_iter.bi_sector); bio_for_each_segment(bvec, bio, iter) { - void *mapped = kmap(bvec.bv_page) + bvec.bv_offset; + void *mapped = kmap_thread(bvec.bv_page) + bvec.bv_offset; expect = min_t(int, data_size, bvec.bv_len); err = drbd_recv_all_warn(peer_device->connection, mapped, expect); - kunmap(bvec.bv_page); + kunmap_thread(bvec.bv_page); if (err) return err; data_size -= expect; From patchwork Fri Oct 9 19:49:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286582 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, 
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC PKS/PMEM 10/58] drivers/rdma: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:45 -0700
Message-Id: <20201009195033.3208459-11-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in these drivers are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Mike Marciniszyn Cc: Dennis Dalessandro Cc: Doug Ledford Cc: Jason Gunthorpe Cc: Faisal Latif Cc: Shiraz Saleem Cc: Bernard Metzler Signed-off-by: Ira Weiny --- drivers/infiniband/hw/hfi1/sdma.c | 4 ++-- drivers/infiniband/hw/i40iw/i40iw_cm.c | 10 +++++----- drivers/infiniband/sw/siw/siw_qp_tx.c | 14 +++++++------- 3 files changed, 14 insertions(+), 14 deletions(-) diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c index 04575c9afd61..09d206e3229a 100644 --- a/drivers/infiniband/hw/hfi1/sdma.c +++ b/drivers/infiniband/hw/hfi1/sdma.c @@ -3130,7 +3130,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx, } if (type == SDMA_MAP_PAGE) { - kvaddr = kmap(page); + kvaddr = kmap_thread(page); kvaddr += offset; } else if (WARN_ON(!kvaddr)) { __sdma_txclean(dd, tx); @@ -3140,7 +3140,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx, memcpy(tx->coalesce_buf + tx->coalesce_idx, kvaddr, len); tx->coalesce_idx += len; if (type == SDMA_MAP_PAGE) - kunmap(page); + kunmap_thread(page); /* If there is more data, return */ if (tx->tlen - tx->coalesce_idx) diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c index a3b95805c154..122d7a5642a1 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_cm.c +++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c @@ -3721,7 +3721,7 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) ibmr->device = iwpd->ibpd.device; iwqp->lsmm_mr = ibmr; if (iwqp->page) - iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page); + iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page); dev->iw_priv_qp_ops->qp_send_lsmm(&iwqp->sc_qp, iwqp->ietf_mem.va, (accept.size + conn_param->private_data_len), @@ -3729,12 +3729,12 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) } else { if (iwqp->page) - iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page); + iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page); dev->iw_priv_qp_ops->qp_send_lsmm(&iwqp->sc_qp, NULL, 0, 0); } if (iwqp->page) - kunmap(iwqp->page); + kunmap_thread(iwqp->page); iwqp->cm_id = cm_id; cm_node->cm_id = cm_id; @@ -4102,10 +4102,10 @@ static void i40iw_cm_event_connected(struct i40iw_cm_event *event) i40iw_cm_init_tsa_conn(iwqp, cm_node); read0 = (cm_node->send_rdma0_op == SEND_RDMA_READ_ZERO); if (iwqp->page) - iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page); + iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page); dev->iw_priv_qp_ops->qp_send_rtt(&iwqp->sc_qp, read0); if (iwqp->page) - kunmap(iwqp->page); + kunmap_thread(iwqp->page); memset(&attr, 0, sizeof(attr)); attr.qp_state = IB_QPS_RTS; diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c index d19d8325588b..4ed37c328d02 100644 --- a/drivers/infiniband/sw/siw/siw_qp_tx.c +++ b/drivers/infiniband/sw/siw/siw_qp_tx.c @@ -76,7 +76,7 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr) if (unlikely(!p)) return -EFAULT; - buffer = kmap(p); + buffer = kmap_thread(p); if (likely(PAGE_SIZE - off >= bytes)) { memcpy(paddr, buffer + off, bytes); @@ -84,7 +84,7 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr) unsigned long part = bytes - (PAGE_SIZE - off); 
memcpy(paddr, buffer + off, part); - kunmap(p); + kunmap_thread(p); if (!mem->is_pbl) p = siw_get_upage(mem->umem, @@ -96,10 +96,10 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr) if (unlikely(!p)) return -EFAULT; - buffer = kmap(p); + buffer = kmap_thread(p); memcpy(paddr + part, buffer, bytes - part); } - kunmap(p); + kunmap_thread(p); } } return (int)bytes; @@ -505,7 +505,7 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s) page_array[seg] = p; if (!c_tx->use_sendpage) { - iov[seg].iov_base = kmap(p) + fp_off; + iov[seg].iov_base = kmap_thread(p) + fp_off; iov[seg].iov_len = plen; /* Remember for later kunmap() */ @@ -518,9 +518,9 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s) plen); } else if (do_crc) { crypto_shash_update(c_tx->mpa_crc_hd, - kmap(p) + fp_off, + kmap_thread(p) + fp_off, plen); - kunmap(p); + kunmap_thread(p); } } else { u64 va = sge->laddr + sge_off; From patchwork Fri Oct 9 19:49:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286558 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 55D11C4363A for ; Fri, 9 Oct 2020 20:08:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 16AC420659 for ; Fri, 9 Oct 2020 20:08:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391707AbgJIUHl (ORCPT ); Fri, 9 Oct 2020 16:07:41 -0400 Received: from mga09.intel.com ([134.134.136.24]:28486 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403855AbgJITve (ORCPT ); Fri, 9 Oct 2020 15:51:34 -0400 IronPort-SDR: xxLTFcykYmc5KgZD1HbS1g7i4OYmICVIRRif8W7Jpite0DKStiZN2ggNBumfLr6hCgxjqqigjG UdsgKzTVcxow== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165642918" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="165642918" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:29 -0700 IronPort-SDR: 0+lGONiHFSeyE/0kQGUe/v/Z0vqstKsQGCedA4TvP63fkNzDWGjPYFj9hsR1yECUQTFrwG697Z McTBAsxZaLwQ== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="298419323" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:28 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , David Howells , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, 
linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 12/58] fs/afs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:47 -0700 Message-Id: <20201009195033.3208459-13-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: David Howells Signed-off-by: Ira Weiny --- fs/afs/dir.c | 16 ++++++++-------- fs/afs/dir_edit.c | 16 ++++++++-------- fs/afs/mntpt.c | 4 ++-- fs/afs/write.c | 4 ++-- 4 files changed, 20 insertions(+), 20 deletions(-) diff --git a/fs/afs/dir.c b/fs/afs/dir.c index 1d2e61e0ab04..5d01cdb590de 100644 --- a/fs/afs/dir.c +++ b/fs/afs/dir.c @@ -127,14 +127,14 @@ static bool afs_dir_check_page(struct afs_vnode *dvnode, struct page *page, qty /= sizeof(union afs_xdr_dir_block); /* check them */ - dbuf = kmap(page); + dbuf = kmap_thread(page); for (tmp = 0; tmp < qty; tmp++) { if (dbuf->blocks[tmp].hdr.magic != AFS_DIR_MAGIC) { printk("kAFS: %s(%lx): bad magic %d/%d is %04hx\n", __func__, dvnode->vfs_inode.i_ino, tmp, qty, ntohs(dbuf->blocks[tmp].hdr.magic)); trace_afs_dir_check_failed(dvnode, off, i_size); - kunmap(page); + kunmap_thread(page); trace_afs_file_error(dvnode, -EIO, afs_file_error_dir_bad_magic); goto error; } @@ -146,7 +146,7 @@ static bool afs_dir_check_page(struct afs_vnode *dvnode, struct page *page, ((u8 *)&dbuf->blocks[tmp])[AFS_DIR_BLOCK_SIZE - 1] = 0; } - kunmap(page); + kunmap_thread(page); checked: afs_stat_v(dvnode, n_read_dir); @@ -177,13 +177,13 @@ static bool afs_dir_check_pages(struct afs_vnode *dvnode, struct afs_read *req) req->pos, req->index, req->nr_pages, req->offset); for (i = 0; i < req->nr_pages; i++) { - dbuf = kmap(req->pages[i]); + dbuf = kmap_thread(req->pages[i]); for (j = 0; j < qty; j++) { union afs_xdr_dir_block *block = &dbuf->blocks[j]; pr_warn("[%02x] %32phN\n", i * qty + j, block); } - kunmap(req->pages[i]); + kunmap_thread(req->pages[i]); } return false; } @@ -481,7 +481,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx, limit = blkoff & ~(PAGE_SIZE - 1); - dbuf = kmap(page); + dbuf = kmap_thread(page); /* deal with the individual blocks stashed on this page */ do { @@ -489,7 +489,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx, sizeof(union afs_xdr_dir_block)]; ret = afs_dir_iterate_block(dvnode, 
ctx, dblock, blkoff); if (ret != 1) { - kunmap(page); + kunmap_thread(page); goto out; } @@ -497,7 +497,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx, } while (ctx->pos < dir->i_size && blkoff < limit); - kunmap(page); + kunmap_thread(page); ret = 0; } diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c index b108528bf010..35ed6828e205 100644 --- a/fs/afs/dir_edit.c +++ b/fs/afs/dir_edit.c @@ -218,7 +218,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, need_slots = round_up(12 + name->len + 1 + 4, AFS_DIR_DIRENT_SIZE); need_slots /= AFS_DIR_DIRENT_SIZE; - meta_page = kmap(page0); + meta_page = kmap_thread(page0); meta = &meta_page->blocks[0]; if (i_size == 0) goto new_directory; @@ -247,7 +247,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, set_page_private(page, 1); SetPagePrivate(page); } - dir_page = kmap(page); + dir_page = kmap_thread(page); } /* Abandon the edit if we got a callback break. */ @@ -284,7 +284,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, if (page != page0) { unlock_page(page); - kunmap(page); + kunmap_thread(page); put_page(page); } } @@ -323,7 +323,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, afs_set_contig_bits(block, slot, need_slots); if (page != page0) { unlock_page(page); - kunmap(page); + kunmap_thread(page); put_page(page); } @@ -337,7 +337,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, out_unmap: unlock_page(page0); - kunmap(page0); + kunmap_thread(page0); put_page(page0); _leave(""); return; @@ -346,7 +346,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, trace_afs_edit_dir(vnode, why, afs_edit_dir_create_inval, 0, 0, 0, 0, name->name); clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); if (page != page0) { - kunmap(page); + kunmap_thread(page); put_page(page); } goto out_unmap; @@ -398,7 +398,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, need_slots = round_up(12 + name->len + 1 + 4, AFS_DIR_DIRENT_SIZE); need_slots /= AFS_DIR_DIRENT_SIZE; - meta_page = kmap(page0); + meta_page = kmap_thread(page0); meta = &meta_page->blocks[0]; /* Find a page that has sufficient slots available. 
Each VM page @@ -410,7 +410,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, page = find_lock_page(vnode->vfs_inode.i_mapping, index); if (!page) goto error; - dir_page = kmap(page); + dir_page = kmap_thread(page); } else { page = page0; dir_page = meta_page; diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c index 79bc5f1338ed..562454e2fd5c 100644 --- a/fs/afs/mntpt.c +++ b/fs/afs/mntpt.c @@ -139,11 +139,11 @@ static int afs_mntpt_set_params(struct fs_context *fc, struct dentry *mntpt) return ret; } - buf = kmap(page); + buf = kmap_thread(page); ret = -EINVAL; if (buf[size - 1] == '.') ret = vfs_parse_fs_string(fc, "source", buf, size - 1); - kunmap(page); + kunmap_thread(page); put_page(page); if (ret < 0) return ret; diff --git a/fs/afs/write.c b/fs/afs/write.c index 4b2265cb1891..c56e5b4db4ae 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -38,9 +38,9 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key, if (pos >= vnode->vfs_inode.i_size) { p = pos & ~PAGE_MASK; ASSERTCMP(p + len, <=, PAGE_SIZE); - data = kmap(page); + data = kmap_thread(page); memset(data + p, 0, len); - kunmap(page); + kunmap_thread(page); return 0; } From patchwork Fri Oct 9 19:49:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286559 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EBC1AC433DF for ; Fri, 9 Oct 2020 20:08:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B8F7F2067D for ; Fri, 9 Oct 2020 20:08:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391716AbgJIUHn (ORCPT ); Fri, 9 Oct 2020 16:07:43 -0400 Received: from mga06.intel.com ([134.134.136.31]:1621 "EHLO mga06.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403959AbgJITvm (ORCPT ); Fri, 9 Oct 2020 15:51:42 -0400 IronPort-SDR: OvUdnTIo53H2AyyTnYh4FWnAgUT0vb6jIYBsFTvfZM4VsCqvxXdHL/PCZfQ+hdONhRPcaXWgsk GWtbV+Ff7Znw== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="227178845" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="227178845" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga008.jf.intel.com ([10.7.209.65]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:39 -0700 IronPort-SDR: 6nDyG/rNIU8VCp7PRxt8N7VM6Ts7FaR6Cq3BMAOPdzxAbIieQhQY1hWISzmf8TSYIGXE2cP/wS 1DDas2UbMNkw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="345147432" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:37 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Steve French , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, 
linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 14/58] fs/cifs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:49 -0700 Message-Id: <20201009195033.3208459-15-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Steve French Signed-off-by: Ira Weiny --- fs/cifs/cifsencrypt.c | 6 +++--- fs/cifs/file.c | 16 ++++++++-------- fs/cifs/smb2ops.c | 8 ++++---- 3 files changed, 15 insertions(+), 15 deletions(-) diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c index 9daa256f69d4..2f8232d01a56 100644 --- a/fs/cifs/cifsencrypt.c +++ b/fs/cifs/cifsencrypt.c @@ -82,17 +82,17 @@ int __cifs_calc_signature(struct smb_rqst *rqst, rqst_page_get_length(rqst, i, &len, &offset); - kaddr = (char *) kmap(rqst->rq_pages[i]) + offset; + kaddr = (char *) kmap_thread(rqst->rq_pages[i]) + offset; rc = crypto_shash_update(shash, kaddr, len); if (rc) { cifs_dbg(VFS, "%s: Could not update with payload\n", __func__); - kunmap(rqst->rq_pages[i]); + kunmap_thread(rqst->rq_pages[i]); return rc; } - kunmap(rqst->rq_pages[i]); + kunmap_thread(rqst->rq_pages[i]); } rc = crypto_shash_final(shash, signature); diff --git a/fs/cifs/file.c b/fs/cifs/file.c index be46fab4c96d..6db2caab8852 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -2145,17 +2145,17 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to) inode = page->mapping->host; offset += (loff_t)from; - write_data = kmap(page); + write_data = kmap_thread(page); write_data += from; if ((to > PAGE_SIZE) || (from > to)) { - kunmap(page); + kunmap_thread(page); return -EIO; } /* racing with truncate? 
*/ if (offset > mapping->host->i_size) { - kunmap(page); + kunmap_thread(page); return 0; /* don't care */ } @@ -2183,7 +2183,7 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to) rc = -EIO; } - kunmap(page); + kunmap_thread(page); return rc; } @@ -2559,10 +2559,10 @@ static int cifs_write_end(struct file *file, struct address_space *mapping, known which we might as well leverage */ /* BB check if anything else missing out of ppw such as updating last write time */ - page_data = kmap(page); + page_data = kmap_thread(page); rc = cifs_write(cfile, pid, page_data + offset, copied, &pos); /* if (rc < 0) should we set writebehind rc? */ - kunmap(page); + kunmap_thread(page); free_xid(xid); } else { @@ -4511,7 +4511,7 @@ static int cifs_readpage_worker(struct file *file, struct page *page, if (rc == 0) goto read_complete; - read_data = kmap(page); + read_data = kmap_thread(page); /* for reads over a certain size could initiate async read ahead */ rc = cifs_read(file, read_data, PAGE_SIZE, poffset); @@ -4540,7 +4540,7 @@ static int cifs_readpage_worker(struct file *file, struct page *page, rc = 0; io_error: - kunmap(page); + kunmap_thread(page); unlock_page(page); read_complete: diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c index 32f90dc82c84..a3e7ebab38b6 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -4068,12 +4068,12 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst, rqst_page_get_length(&new_rq[i], j, &len, &offset); - dst = (char *) kmap(new_rq[i].rq_pages[j]) + offset; - src = (char *) kmap(old_rq[i - 1].rq_pages[j]) + offset; + dst = (char *) kmap_thread(new_rq[i].rq_pages[j]) + offset; + src = (char *) kmap_thread(old_rq[i - 1].rq_pages[j]) + offset; memcpy(dst, src, len); - kunmap(new_rq[i].rq_pages[j]); - kunmap(old_rq[i - 1].rq_pages[j]); + kunmap_thread(new_rq[i].rq_pages[j]); + kunmap_thread(old_rq[i - 1].rq_pages[j]); } } From patchwork Fri Oct 9 19:49:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286581 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E91C4C5517A for ; Fri, 9 Oct 2020 19:52:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BE4692240B for ; Fri, 9 Oct 2020 19:52:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390918AbgJITwT (ORCPT ); Fri, 9 Oct 2020 15:52:19 -0400 Received: from mga14.intel.com ([192.55.52.115]:15463 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403988AbgJITvr (ORCPT ); Fri, 9 Oct 2020 15:51:47 -0400 IronPort-SDR: ZkAdcgi4swrVMUBVctT1IGj49yKvAen433ACK6SntQoH6405vFPb4w9B8cF6kgYV0SvPIiBLpC Gj7D0CTpxGCw== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="164743769" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="164743769" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga103.fm.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:44 -0700 IronPort-SDR: /XLKsCzQipLZYQ1rW+ETY2grttZNamy0/VnWHXeIGpFKMMZaq8ytLhObxugEuPiPsSx/1cqrId IdubLVsw8k0A== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="419536913" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:44 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Bob Peterson , Andreas Gruenbacher , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 16/58] fs/gfs2: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:51 -0700 Message-Id: <20201009195033.3208459-17-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
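The pattern behind these conversions is mechanical: a kmap() whose mapping is created and torn down on the same thread becomes kmap_thread(), paired with kunmap_thread() on that same thread. A minimal usage sketch, assuming only the kmap_thread()/kunmap_thread() prototypes introduced earlier in this series; copy_from_page() is a hypothetical helper for illustration, not code from the posted patch:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Usage sketch (not part of the posted patch): copy one page of data
 * into a caller-supplied buffer.  The mapping never leaves this
 * thread, so the thread-local variant is sufficient and avoids the
 * global PKRS update that kmap()/kunmap() would incur under PKS.
 */
static void copy_from_page(struct page *page, void *dst)
{
	void *kaddr = kmap_thread(page);

	memcpy(dst, kaddr, PAGE_SIZE);
	kunmap_thread(page);
}

The gfs2 hunks below follow exactly this shape: map, copy or fill, unmap, all within one function on one thread.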
Cc: Bob Peterson Cc: Andreas Gruenbacher Signed-off-by: Ira Weiny --- fs/gfs2/bmap.c | 4 ++-- fs/gfs2/ops_fstype.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c index 0f69fbd4af66..375af4528411 100644 --- a/fs/gfs2/bmap.c +++ b/fs/gfs2/bmap.c @@ -67,7 +67,7 @@ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh, } if (!PageUptodate(page)) { - void *kaddr = kmap(page); + void *kaddr = kmap_thread(page); u64 dsize = i_size_read(inode); if (dsize > gfs2_max_stuffed_size(ip)) @@ -75,7 +75,7 @@ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh, memcpy(kaddr, dibh->b_data + sizeof(struct gfs2_dinode), dsize); memset(kaddr + dsize, 0, PAGE_SIZE - dsize); - kunmap(page); + kunmap_thread(page); SetPageUptodate(page); } diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c index 6d18d2c91add..a5d20d9b504a 100644 --- a/fs/gfs2/ops_fstype.c +++ b/fs/gfs2/ops_fstype.c @@ -263,9 +263,9 @@ static int gfs2_read_super(struct gfs2_sbd *sdp, sector_t sector, int silent) __free_page(page); return -EIO; } - p = kmap(page); + p = kmap_thread(page); gfs2_sb_in(sdp, p); - kunmap(page); + kunmap_thread(page); __free_page(page); return gfs2_check_sb(sdp, silent); } From patchwork Fri Oct 9 19:49:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286580 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24C38C84639 for ; Fri, 9 Oct 2020 19:53:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EC8212245B for ; Fri, 9 Oct 2020 19:53:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391031AbgJITw7 (ORCPT ); Fri, 9 Oct 2020 15:52:59 -0400 Received: from mga06.intel.com ([134.134.136.31]:1664 "EHLO mga06.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403996AbgJITvy (ORCPT ); Fri, 9 Oct 2020 15:51:54 -0400 IronPort-SDR: dx2C651xObHLbB3ebXZq7O3vvDLCkCp7YyEriat/+Fn1vQtkkOifEieLP9BqvQO5TVmymYi7WM okOGmreQGHCQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="227178891" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="227178891" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:47 -0700 IronPort-SDR: vdHXKYZiEWUJ0fN4OExd2VqdiXE9aYdlaLF6p7mD2X3g24JwvvcNWIkVlfM1uW7/GA3kn+ap6+ Rex2Wyxl2AaA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="298531066" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:47 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Ryusuke Konishi , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , 
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 17/58] fs/nilfs2: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:52 -0700 Message-Id: <20201009195033.3208459-18-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
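The nilfs2 sites are a useful check on the pairing discipline because several of them hold two mappings at once (a group descriptor block and a bitmap block) and unwind them on error paths. A sketch of that shape, assuming the kmap_thread()/kunmap_thread() API from this series; update_group(), its bit parameter, and the le32 counter are illustrative stand-ins, not code from the posted patch:

#include <linux/highmem.h>
#include <linux/bitops.h>
#include <linux/kernel.h>

/*
 * Sketch only: both mappings are created and destroyed on this thread,
 * released in reverse order, which is exactly the property that makes
 * the thread-local API a one-for-one substitute here.
 */
static int update_group(struct page *desc_page, struct page *bitmap_page,
			unsigned long bit)
{
	void *desc_kaddr = kmap_thread(desc_page);
	void *bitmap_kaddr = kmap_thread(bitmap_page);

	__set_bit(bit, (unsigned long *)bitmap_kaddr);	/* mark the slot used */
	le32_add_cpu((__le32 *)desc_kaddr, -1);		/* e.g. drop a free-entries count */

	kunmap_thread(bitmap_page);
	kunmap_thread(desc_page);
	return 0;
}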
Cc: Ryusuke Konishi Signed-off-by: Ira Weiny --- fs/nilfs2/alloc.c | 34 +++++++++++++++++----------------- fs/nilfs2/cpfile.c | 4 ++-- 2 files changed, 19 insertions(+), 19 deletions(-) diff --git a/fs/nilfs2/alloc.c b/fs/nilfs2/alloc.c index adf3bb0a8048..2aa4c34094ef 100644 --- a/fs/nilfs2/alloc.c +++ b/fs/nilfs2/alloc.c @@ -524,7 +524,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, ret = nilfs_palloc_get_desc_block(inode, group, 1, &desc_bh); if (ret < 0) return ret; - desc_kaddr = kmap(desc_bh->b_page); + desc_kaddr = kmap_thread(desc_bh->b_page); desc = nilfs_palloc_block_get_group_desc( inode, group, desc_bh, desc_kaddr); n = nilfs_palloc_rest_groups_in_desc_block(inode, group, @@ -536,7 +536,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, inode, group, 1, &bitmap_bh); if (ret < 0) goto out_desc; - bitmap_kaddr = kmap(bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(bitmap_bh); pos = nilfs_palloc_find_available_slot( bitmap, group_offset, @@ -547,21 +547,21 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, desc, lock, -1); req->pr_entry_nr = entries_per_group * group + pos; - kunmap(desc_bh->b_page); - kunmap(bitmap_bh->b_page); + kunmap_thread(desc_bh->b_page); + kunmap_thread(bitmap_bh->b_page); req->pr_desc_bh = desc_bh; req->pr_bitmap_bh = bitmap_bh; return 0; } - kunmap(bitmap_bh->b_page); + kunmap_thread(bitmap_bh->b_page); brelse(bitmap_bh); } group_offset = 0; } - kunmap(desc_bh->b_page); + kunmap_thread(desc_bh->b_page); brelse(desc_bh); } @@ -569,7 +569,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, return -ENOSPC; out_desc: - kunmap(desc_bh->b_page); + kunmap_thread(desc_bh->b_page); brelse(desc_bh); return ret; } @@ -605,10 +605,10 @@ void nilfs_palloc_commit_free_entry(struct inode *inode, spinlock_t *lock; group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset); - desc_kaddr = kmap(req->pr_desc_bh->b_page); + desc_kaddr = kmap_thread(req->pr_desc_bh->b_page); desc = nilfs_palloc_block_get_group_desc(inode, group, req->pr_desc_bh, desc_kaddr); - bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(req->pr_bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh); lock = nilfs_mdt_bgl_lock(inode, group); @@ -620,8 +620,8 @@ void nilfs_palloc_commit_free_entry(struct inode *inode, else nilfs_palloc_group_desc_add_entries(desc, lock, 1); - kunmap(req->pr_bitmap_bh->b_page); - kunmap(req->pr_desc_bh->b_page); + kunmap_thread(req->pr_bitmap_bh->b_page); + kunmap_thread(req->pr_desc_bh->b_page); mark_buffer_dirty(req->pr_desc_bh); mark_buffer_dirty(req->pr_bitmap_bh); @@ -646,10 +646,10 @@ void nilfs_palloc_abort_alloc_entry(struct inode *inode, spinlock_t *lock; group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset); - desc_kaddr = kmap(req->pr_desc_bh->b_page); + desc_kaddr = kmap_thread(req->pr_desc_bh->b_page); desc = nilfs_palloc_block_get_group_desc(inode, group, req->pr_desc_bh, desc_kaddr); - bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(req->pr_bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh); lock = nilfs_mdt_bgl_lock(inode, group); @@ -661,8 +661,8 @@ void nilfs_palloc_abort_alloc_entry(struct inode *inode, else nilfs_palloc_group_desc_add_entries(desc, lock, 1); - kunmap(req->pr_bitmap_bh->b_page); - kunmap(req->pr_desc_bh->b_page); + kunmap_thread(req->pr_bitmap_bh->b_page); + kunmap_thread(req->pr_desc_bh->b_page); 
brelse(req->pr_bitmap_bh); brelse(req->pr_desc_bh); @@ -754,7 +754,7 @@ int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems) /* Get the first entry number of the group */ group_min_nr = (__u64)group * epg; - bitmap_kaddr = kmap(bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(bitmap_bh); lock = nilfs_mdt_bgl_lock(inode, group); @@ -800,7 +800,7 @@ int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems) entry_start = rounddown(group_offset, epb); } while (true); - kunmap(bitmap_bh->b_page); + kunmap_thread(bitmap_bh->b_page); mark_buffer_dirty(bitmap_bh); brelse(bitmap_bh); diff --git a/fs/nilfs2/cpfile.c b/fs/nilfs2/cpfile.c index 86d4d850d130..402ab8bfce29 100644 --- a/fs/nilfs2/cpfile.c +++ b/fs/nilfs2/cpfile.c @@ -235,11 +235,11 @@ int nilfs_cpfile_get_checkpoint(struct inode *cpfile, ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, create, &cp_bh); if (ret < 0) goto out_header; - kaddr = kmap(cp_bh->b_page); + kaddr = kmap_thread(cp_bh->b_page); cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr); if (nilfs_checkpoint_invalid(cp)) { if (!create) { - kunmap(cp_bh->b_page); + kunmap_thread(cp_bh->b_page); brelse(cp_bh); ret = -ENOENT; goto out_header; From patchwork Fri Oct 9 19:49:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286560 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9464DC433E7 for ; Fri, 9 Oct 2020 20:06:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 509962225B for ; Fri, 9 Oct 2020 20:06:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391624AbgJIUG5 (ORCPT ); Fri, 9 Oct 2020 16:06:57 -0400 Received: from mga02.intel.com ([134.134.136.20]:57645 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2390898AbgJITwA (ORCPT ); Fri, 9 Oct 2020 15:52:00 -0400 IronPort-SDR: 5+LUY2Awrr9j/tUQIAvjFtQA09wGVszIUmNQ5L2DFU9VGy9iI/QOZXZ6nS089EMR6yn+dOlt80 BHiurkWZK9Mw== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="152450896" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152450896" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:59 -0700 IronPort-SDR: eWSKFRksQmWHvS1j4lFouXjU/VQ78asIZFlodYo3aryNHFOArT/TLAMBqja15iBonRYEaVT+Tm vvSyZvytK3gg== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="462300719" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:57 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , David Woodhouse , Richard Weinberger , x86@kernel.org, Dave Hansen , Dan 
Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 20/58] fs/jffs2: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:55 -0700 Message-Id: <20201009195033.3208459-21-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: David Woodhouse Cc: Richard Weinberger Signed-off-by: Ira Weiny --- fs/jffs2/file.c | 4 ++-- fs/jffs2/gc.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c index f8fb89b10227..3e6d54f9b011 100644 --- a/fs/jffs2/file.c +++ b/fs/jffs2/file.c @@ -88,7 +88,7 @@ static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg) BUG_ON(!PageLocked(pg)); - pg_buf = kmap(pg); + pg_buf = kmap_thread(pg); /* FIXME: Can kmap fail? 
*/ ret = jffs2_read_inode_range(c, f, pg_buf, pg->index << PAGE_SHIFT, @@ -103,7 +103,7 @@ static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg) } flush_dcache_page(pg); - kunmap(pg); + kunmap_thread(pg); jffs2_dbg(2, "readpage finished\n"); return ret; diff --git a/fs/jffs2/gc.c b/fs/jffs2/gc.c index 373b3b7c9f44..a7259783ab84 100644 --- a/fs/jffs2/gc.c +++ b/fs/jffs2/gc.c @@ -1335,7 +1335,7 @@ static int jffs2_garbage_collect_dnode(struct jffs2_sb_info *c, struct jffs2_era return PTR_ERR(page); } - pg_ptr = kmap(page); + pg_ptr = kmap_thread(page); mutex_lock(&f->sem); offset = start; @@ -1400,7 +1400,7 @@ static int jffs2_garbage_collect_dnode(struct jffs2_sb_info *c, struct jffs2_era } } - kunmap(page); + kunmap_thread(page); put_page(page); return ret; } From patchwork Fri Oct 9 19:49:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286561 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB536C43467 for ; Fri, 9 Oct 2020 20:06:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 81EE52225B for ; Fri, 9 Oct 2020 20:06:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391434AbgJIUGZ (ORCPT ); Fri, 9 Oct 2020 16:06:25 -0400 Received: from mga03.intel.com ([134.134.136.65]:26030 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387978AbgJITwH (ORCPT ); Fri, 9 Oct 2020 15:52:07 -0400 IronPort-SDR: OTj9s3cvBtRyPQTZAybKfscTVxPDK48TJR7rMLCvUFwCTYBh25MUK6MTvmozzMmi3IFQlUR46z sA4Nku3xxaJQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165592224" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="165592224" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:05 -0700 IronPort-SDR: UirI0CQIu4U61+QlSzJhUSBnMwMbOjm+hcyTouEwvOwx6I0trp+50TKD6KEUa5cRPFVp+hPY69 eHlTO5RS+u4A== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="343972211" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:04 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Jaegeuk Kim , Chao Yu , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, 
target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:57 -0700 Message-Id: <20201009195033.3208459-23-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Jaegeuk Kim Cc: Chao Yu Signed-off-by: Ira Weiny --- fs/f2fs/f2fs.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index d9e52a7f3702..ff72a45a577e 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -2410,12 +2410,12 @@ static inline struct page *f2fs_pagecache_get_page( static inline void f2fs_copy_page(struct page *src, struct page *dst) { - char *src_kaddr = kmap(src); - char *dst_kaddr = kmap(dst); + char *src_kaddr = kmap_thread(src); + char *dst_kaddr = kmap_thread(dst); memcpy(dst_kaddr, src_kaddr, PAGE_SIZE); - kunmap(dst); - kunmap(src); + kunmap_thread(dst); + kunmap_thread(src); } static inline void f2fs_put_page(struct page *page, int unlock) From patchwork Fri Oct 9 19:50:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286562 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C0C5EC433E7 for ; Fri, 9 Oct 2020 20:05:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7F012222BA for ; Fri, 9 Oct 2020 20:05:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391006AbgJIUFl (ORCPT ); Fri, 9 Oct 2020 16:05:41 -0400 Received: from mga06.intel.com ([134.134.136.31]:1746 "EHLO mga06.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2390926AbgJITwV (ORCPT ); Fri, 9 Oct 2020 15:52:21 -0400 IronPort-SDR: y1JJh2sq7n7qvtBHhoeiZDBcJHqQXNvk9XowiHILd4LeNp+F4Dd7RSvV34lAiBY6eVvCkDMg+q 734l2x7TLtxQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="227178997" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
d="scan'208";a="227178997" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:18 -0700 IronPort-SDR: wr8yF8/sTlHJunnzFelUAfkdGIrCwMwbAv+fVYfwv9n4Q9ri2s7tZy/sCWglaZc+50HSVTr6HF aNXvhi/ktrcA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="518801285" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:18 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Jan Kara , "Theodore Ts'o" , Randy Dunlap , Alex Shi , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 25/58] fs/reiserfs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:00 -0700 Message-Id: <20201009195033.3208459-26-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
Cc: Jan Kara Cc: "Theodore Ts'o" Cc: Randy Dunlap Cc: Alex Shi Signed-off-by: Ira Weiny --- fs/reiserfs/journal.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c index e98f99338f8f..be8f56261e8c 100644 --- a/fs/reiserfs/journal.c +++ b/fs/reiserfs/journal.c @@ -4194,11 +4194,11 @@ static int do_journal_end(struct reiserfs_transaction_handle *th, int flags) SB_ONDISK_JOURNAL_SIZE(sb))); set_buffer_uptodate(tmp_bh); page = cn->bh->b_page; - addr = kmap(page); + addr = kmap_thread(page); memcpy(tmp_bh->b_data, addr + offset_in_page(cn->bh->b_data), cn->bh->b_size); - kunmap(page); + kunmap_thread(page); mark_buffer_dirty(tmp_bh); jindex++; set_buffer_journal_dirty(cn->bh); From patchwork Fri Oct 9 19:50:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286563 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, UNWANTED_LANGUAGE_BODY, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F1D2BC433DF for ; Fri, 9 Oct 2020 20:04:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AA4EF2225B for ; Fri, 9 Oct 2020 20:04:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391254AbgJIUEY (ORCPT ); Fri, 9 Oct 2020 16:04:24 -0400 Received: from mga11.intel.com ([192.55.52.93]:40547 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389150AbgJITw2 (ORCPT ); Fri, 9 Oct 2020 15:52:28 -0400 IronPort-SDR: JXvRbqfBSh4WcKKhlWmGAJjVVwBBqA0c3vVa62KiqhYKBoCAMWZ7xt97eKLwzIZS1eytWF28Tv 5/HLX+aDaNNA== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162067998" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162067998" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:26 -0700 IronPort-SDR: enOwdrUvox960YRmC5C0rYPH71vGkXRrRSf++MkJJnBOIwe1z1BLjEriaudfjb7SF8F3v1X02a epHq/DZLuLjw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="355863002" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:25 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Richard Weinberger , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, 
linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 27/58] fs/ubifs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:02 -0700 Message-Id: <20201009195033.3208459-28-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Richard Weinberger Signed-off-by: Ira Weiny --- fs/ubifs/file.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c index b77d1637bbbc..a3537447a885 100644 --- a/fs/ubifs/file.c +++ b/fs/ubifs/file.c @@ -111,7 +111,7 @@ static int do_readpage(struct page *page) ubifs_assert(c, !PageChecked(page)); ubifs_assert(c, !PagePrivate(page)); - addr = kmap(page); + addr = kmap_thread(page); block = page->index << UBIFS_BLOCKS_PER_PAGE_SHIFT; beyond = (i_size + UBIFS_BLOCK_SIZE - 1) >> UBIFS_BLOCK_SHIFT; @@ -174,7 +174,7 @@ static int do_readpage(struct page *page) SetPageUptodate(page); ClearPageError(page); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); return 0; error: @@ -182,7 +182,7 @@ static int do_readpage(struct page *page) ClearPageUptodate(page); SetPageError(page); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); return err; } @@ -616,7 +616,7 @@ static int populate_page(struct ubifs_info *c, struct page *page, dbg_gen("ino %lu, pg %lu, i_size %lld, flags %#lx", inode->i_ino, page->index, i_size, page->flags); - addr = zaddr = kmap(page); + addr = zaddr = kmap_thread(page); end_index = (i_size - 1) >> PAGE_SHIFT; if (!i_size || page->index > end_index) { @@ -692,7 +692,7 @@ static int populate_page(struct ubifs_info *c, struct page *page, SetPageUptodate(page); ClearPageError(page); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); *n = nn; return 0; @@ -700,7 +700,7 @@ static int populate_page(struct ubifs_info *c, struct page *page, ClearPageUptodate(page); SetPageError(page); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); ubifs_err(c, "bad data node (block %u, inode %lu)", page_block, inode->i_ino); return -EINVAL; @@ -918,7 +918,7 @@ static int do_writepage(struct page *page, int len) /* Update radix tree tags */ set_page_writeback(page); - addr = kmap(page); + addr = kmap_thread(page); block = page->index << UBIFS_BLOCKS_PER_PAGE_SHIFT; i = 0; while (len) { @@ -950,7 +950,7 @@ static int do_writepage(struct page *page, int len) ClearPagePrivate(page); ClearPageChecked(page); - kunmap(page); + kunmap_thread(page); 
unlock_page(page); end_page_writeback(page); return err; From patchwork Fri Oct 9 19:50:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286564 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54B98C433E7 for ; Fri, 9 Oct 2020 20:04:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0C99720732 for ; Fri, 9 Oct 2020 20:04:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391084AbgJIUDn (ORCPT ); Fri, 9 Oct 2020 16:03:43 -0400 Received: from mga07.intel.com ([134.134.136.100]:56781 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388726AbgJITwe (ORCPT ); Fri, 9 Oct 2020 15:52:34 -0400 IronPort-SDR: k5n5dJgad3PH5z1nKWRCqhUQsH9CIT69v4l8qL7SJaW8QquVIZ7+Rkvin4YtVTXz0/2gWsqL9u 0/qOsLBGzxnw== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="229715221" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="229715221" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:30 -0700 IronPort-SDR: R8dnd4Gi4VR6B2jqh++vX5H71gVtYpBk3ChyBwkq2ll0tg176OYljcvG78jHbXA7T9l0eBxuZS YLX/odwmLDNw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="329006590" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:29 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , David Howells , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, 
samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 28/58] fs/cachefiles: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:03 -0700 Message-Id: <20201009195033.3208459-29-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: David Howells Signed-off-by: Ira Weiny --- fs/cachefiles/rdwr.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c index 3080cda9e824..2468e5c067ba 100644 --- a/fs/cachefiles/rdwr.c +++ b/fs/cachefiles/rdwr.c @@ -936,9 +936,9 @@ int cachefiles_write_page(struct fscache_storage *op, struct page *page) } } - data = kmap(page); + data = kmap_thread(page); ret = kernel_write(file, data, len, &pos); - kunmap(page); + kunmap_thread(page); fput(file); if (ret != len) goto error_eio; From patchwork Fri Oct 9 19:50:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286569 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B034DC35272 for ; Fri, 9 Oct 2020 20:00:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 67548225A9 for ; Fri, 9 Oct 2020 20:00:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390440AbgJIUAS (ORCPT ); Fri, 9 Oct 2020 16:00:18 -0400 Received: from mga14.intel.com ([192.55.52.115]:15567 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391075AbgJITxV (ORCPT ); Fri, 9 Oct 2020 15:53:21 -0400 IronPort-SDR: CHG01MNstSpG17mDzGS4y9elLRbBJnVOXJxRUg+piAaYWUVMm+uwaS1rsw4JmTP4Aw9xRUutoh mCMDnkWd+amQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="164743944" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="164743944" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:41 -0700 IronPort-SDR: L0Vefok2A93xih84zuFNBu3LKLwMRamDsDkR1F4Gust5uwQPsEktlHxFcxfQt1U0vxS4VptD4d rIp50GoBaQKg== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="345148048" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:40 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Hans de Goede , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, 
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 31/58] fs/vboxsf: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:06 -0700 Message-Id: <20201009195033.3208459-32-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Hans de Goede Signed-off-by: Ira Weiny --- fs/vboxsf/file.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/fs/vboxsf/file.c b/fs/vboxsf/file.c index c4ab5996d97a..d9c7e6b7b4cc 100644 --- a/fs/vboxsf/file.c +++ b/fs/vboxsf/file.c @@ -216,7 +216,7 @@ static int vboxsf_readpage(struct file *file, struct page *page) u8 *buf; int err; - buf = kmap(page); + buf = kmap_thread(page); err = vboxsf_read(sf_handle->root, sf_handle->handle, off, &nread, buf); if (err == 0) { @@ -227,7 +227,7 @@ static int vboxsf_readpage(struct file *file, struct page *page) SetPageError(page); } - kunmap(page); + kunmap_thread(page); unlock_page(page); return err; } @@ -268,10 +268,10 @@ static int vboxsf_writepage(struct page *page, struct writeback_control *wbc) if (!sf_handle) return -EBADF; - buf = kmap(page); + buf = kmap_thread(page); err = vboxsf_write(sf_handle->root, sf_handle->handle, off, &nwrite, buf); - kunmap(page); + kunmap_thread(page); kref_put(&sf_handle->refcount, vboxsf_handle_release); @@ -302,10 +302,10 @@ static int vboxsf_write_end(struct file *file, struct address_space *mapping, if (!PageUptodate(page) && copied < len) zero_user(page, from + copied, len - copied); - buf = kmap(page); + buf = kmap_thread(page); err = vboxsf_write(sf_handle->root, sf_handle->handle, pos, &nwritten, buf + from); - kunmap(page); + kunmap_thread(page); if (err) { nwritten = 0; From patchwork Fri Oct 9 19:50:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286565 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 
tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE513C43467 for ; Fri, 9 Oct 2020 20:03:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7D7CD2225B for ; Fri, 9 Oct 2020 20:03:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390831AbgJIUDV (ORCPT ); Fri, 9 Oct 2020 16:03:21 -0400 Received: from mga17.intel.com ([192.55.52.151]:34046 "EHLO mga17.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389118AbgJITwr (ORCPT ); Fri, 9 Oct 2020 15:52:47 -0400 IronPort-SDR: HCJqdmCgpU6pXG1LsAwcTZbp9UGZICOvUpbuOIRNLSKkgIBl4bErNnLIrbSn24FVu1Q3zQC/2Q 2KazPDmPpjUQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="145397499" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="145397499" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:45 -0700 IronPort-SDR: A7QVfEYpWd+zukvvJSzSDlmpc6UtcI1VTl49mnWle7o9+shZWHPb4ayBrZUheEhKcyM4kHv/QX ai2WmfcpOoFA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="389237190" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:43 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Jeff Dike , Richard Weinberger , Anton Ivanov , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 32/58] fs/hostfs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:07 -0700 Message-Id: <20201009195033.3208459-33-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> 
MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Jeff Dike Cc: Richard Weinberger Cc: Anton Ivanov Signed-off-by: Ira Weiny --- fs/hostfs/hostfs_kern.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c index c070c0d8e3e9..608efd0f83cb 100644 --- a/fs/hostfs/hostfs_kern.c +++ b/fs/hostfs/hostfs_kern.c @@ -409,7 +409,7 @@ static int hostfs_writepage(struct page *page, struct writeback_control *wbc) if (page->index >= end_index) count = inode->i_size & (PAGE_SIZE-1); - buffer = kmap(page); + buffer = kmap_thread(page); err = write_file(HOSTFS_I(inode)->fd, &base, buffer, count); if (err != count) { @@ -425,7 +425,7 @@ static int hostfs_writepage(struct page *page, struct writeback_control *wbc) err = 0; out: - kunmap(page); + kunmap_thread(page); unlock_page(page); return err; @@ -437,7 +437,7 @@ static int hostfs_readpage(struct file *file, struct page *page) loff_t start = page_offset(page); int bytes_read, ret = 0; - buffer = kmap(page); + buffer = kmap_thread(page); bytes_read = read_file(FILE_HOSTFS_I(file)->fd, &start, buffer, PAGE_SIZE); if (bytes_read < 0) { @@ -454,7 +454,7 @@ static int hostfs_readpage(struct file *file, struct page *page) out: flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); unlock_page(page); return ret; } @@ -480,9 +480,9 @@ static int hostfs_write_end(struct file *file, struct address_space *mapping, unsigned from = pos & (PAGE_SIZE - 1); int err; - buffer = kmap(page); + buffer = kmap_thread(page); err = write_file(FILE_HOSTFS_I(file)->fd, &pos, buffer + from, copied); - kunmap(page); + kunmap_thread(page); if (!PageUptodate(page) && err == PAGE_SIZE) SetPageUptodate(page); From patchwork Fri Oct 9 19:50:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286566 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 461AFC9DC83 for ; Fri, 9 Oct 2020 20:02:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EB09C222C8 for ; Fri, 9 Oct 2020 20:02:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390930AbgJIUB7 (ORCPT ); Fri, 9 Oct 2020 16:01:59 -0400 Received: from mga07.intel.com ([134.134.136.100]:56812 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2390983AbgJITwz (ORCPT ); Fri, 9 Oct 2020 15:52:55 -0400 IronPort-SDR: Op9kc88PPdmQBPFTqx/+39WorDKjyG3ipU0g0nie1KwnkHl76GrMsKA5g1TH6J/GMtPSElTjiP VPk0GASkFqkA== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="229715275" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="229715275" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:51 -0700 IronPort-SDR: raa31ON9t7yf3LIgfzeAsEKPUCppaesVfvBVJ2ruufMJvWsy9JzU1jTdgl2++U0w8tR7AuyOxm Q4+3OjaqYbiw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="298531317" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:50 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Gao Xiang , Chao Yu , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 34/58] fs/erofs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:09 -0700 Message-Id: <20201009195033.3208459-35-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
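erofs is worth a closer look because its xattr iterator mixes sleeping and atomic mappings: only the non-atomic kmap() side is converted, while the kmap_atomic() path is left untouched. A sketch of the resulting teardown logic, assuming the kunmap_thread() API from this series; iter_end() is an illustrative reduction of xattr_iter_end(), not code from the posted patch:

#include <linux/highmem.h>
#include <linux/types.h>

/*
 * Sketch only: the atomic mapping keeps its existing teardown, the
 * sleeping mapping switches to the thread-local variant.
 */
static void iter_end(struct page *page, void *kaddr, bool atomic)
{
	if (atomic)
		kunmap_atomic(kaddr);
	else
		kunmap_thread(page);
}

The hunks below show the same split in the real iterator: kunmap() becomes kunmap_thread() while the kunmap_atomic() branch is preserved.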
Cc: Gao Xiang Cc: Chao Yu Signed-off-by: Ira Weiny --- fs/erofs/super.c | 4 ++-- fs/erofs/xattr.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/erofs/super.c b/fs/erofs/super.c index ddaa516c008a..41696b60f1b3 100644 --- a/fs/erofs/super.c +++ b/fs/erofs/super.c @@ -139,7 +139,7 @@ static int erofs_read_superblock(struct super_block *sb) sbi = EROFS_SB(sb); - data = kmap(page); + data = kmap_thread(page); dsb = (struct erofs_super_block *)(data + EROFS_SUPER_OFFSET); ret = -EINVAL; @@ -189,7 +189,7 @@ static int erofs_read_superblock(struct super_block *sb) } ret = 0; out: - kunmap(page); + kunmap_thread(page); put_page(page); return ret; } diff --git a/fs/erofs/xattr.c b/fs/erofs/xattr.c index c8c381eadcd6..1771baa99d77 100644 --- a/fs/erofs/xattr.c +++ b/fs/erofs/xattr.c @@ -20,7 +20,7 @@ static inline void xattr_iter_end(struct xattr_iter *it, bool atomic) { /* the only user of kunmap() is 'init_inode_xattrs' */ if (!atomic) - kunmap(it->page); + kunmap_thread(it->page); else kunmap_atomic(it->kaddr); @@ -96,7 +96,7 @@ static int init_inode_xattrs(struct inode *inode) } /* read in shared xattr array (non-atomic, see kmalloc below) */ - it.kaddr = kmap(it.page); + it.kaddr = kmap_thread(it.page); atomic_map = false; ih = (struct erofs_xattr_ibody_header *)(it.kaddr + it.ofs); From patchwork Fri Oct 9 19:50:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286567 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 151ADC9DCA3 for ; Fri, 9 Oct 2020 20:01:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BE09222403 for ; Fri, 9 Oct 2020 20:01:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390688AbgJIUBN (ORCPT ); Fri, 9 Oct 2020 16:01:13 -0400 Received: from mga18.intel.com ([134.134.136.126]:42454 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389721AbgJITxQ (ORCPT ); Fri, 9 Oct 2020 15:53:16 -0400 IronPort-SDR: /0PxmQQKh9INXhqYDaA/qd0UIC/Ocy9AjDBS2exom/XczYR4EeFFQ0eDp921T8r9Qs7rCQoehA KURfPAOckKjQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="153363728" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="153363728" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:59 -0700 IronPort-SDR: Wf0Bi1Gs1P3ZgILN9Dk8+ODZhbqXc7J3tJZabe+2AUx5fg/nQYQB2E9E/9NHNQgBhyfVAmIu+q D7ag8zQ3enrg== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="462300964" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:58 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Jan Kara , x86@kernel.org, Dave 
Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 36/58] fs/ext2: Use ext2_put_page Date: Fri, 9 Oct 2020 12:50:11 -0700 Message-Id: <20201009195033.3208459-37-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny There are 3 places in namei.c where the equivalent of ext2_put_page() is open coded. We want to use k[un]map_thread() instead of k[un]map() in ext2_[get|put]_page(). Move ext2_put_page() to ext2.h and use it in namei.c in prep for converting the k[un]map() code. Cc: Jan Kara Signed-off-by: Ira Weiny --- fs/ext2/dir.c | 6 ------ fs/ext2/ext2.h | 8 ++++++++ fs/ext2/namei.c | 15 +++++---------- 3 files changed, 13 insertions(+), 16 deletions(-) diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c index 70355ab6740e..f3194bf20733 100644 --- a/fs/ext2/dir.c +++ b/fs/ext2/dir.c @@ -66,12 +66,6 @@ static inline unsigned ext2_chunk_size(struct inode *inode) return inode->i_sb->s_blocksize; } -static inline void ext2_put_page(struct page *page) -{ - kunmap(page); - put_page(page); -} - /* * Return the offset into page `page_nr' of the last valid * byte in that page, plus one. diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h index 5136b7289e8d..021ec8b42ac3 100644 --- a/fs/ext2/ext2.h +++ b/fs/ext2/ext2.h @@ -16,6 +16,8 @@ #include #include #include +#include +#include /* XXX Here for now... 
not interested in restructing headers JUST now */ @@ -745,6 +747,12 @@ extern int ext2_delete_entry (struct ext2_dir_entry_2 *, struct page *); extern int ext2_empty_dir (struct inode *); extern struct ext2_dir_entry_2 * ext2_dotdot (struct inode *, struct page **); extern void ext2_set_link(struct inode *, struct ext2_dir_entry_2 *, struct page *, struct inode *, int); +static inline void ext2_put_page(struct page *page) +{ + kunmap(page); + put_page(page); +} + /* ialloc.c */ extern struct inode * ext2_new_inode (struct inode *, umode_t, const struct qstr *); diff --git a/fs/ext2/namei.c b/fs/ext2/namei.c index 5bf2c145643b..ea980f1e2e99 100644 --- a/fs/ext2/namei.c +++ b/fs/ext2/namei.c @@ -389,23 +389,18 @@ static int ext2_rename (struct inode * old_dir, struct dentry * old_dentry, if (dir_de) { if (old_dir != new_dir) ext2_set_link(old_inode, dir_de, dir_page, new_dir, 0); - else { - kunmap(dir_page); - put_page(dir_page); - } + else + ext2_put_page(dir_page); inode_dec_link_count(old_dir); } return 0; out_dir: - if (dir_de) { - kunmap(dir_page); - put_page(dir_page); - } + if (dir_de) + ext2_put_page(dir_page); out_old: - kunmap(old_page); - put_page(old_page); + ext2_put_page(old_page); out: return err; } From patchwork Fri Oct 9 19:50:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286568 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A4D2FC388F2 for ; Fri, 9 Oct 2020 20:00:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 589AE22267 for ; Fri, 9 Oct 2020 20:00:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390475AbgJIUAS (ORCPT ); Fri, 9 Oct 2020 16:00:18 -0400 Received: from mga11.intel.com ([192.55.52.93]:40547 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388887AbgJITxV (ORCPT ); Fri, 9 Oct 2020 15:53:21 -0400 IronPort-SDR: zq5dvuFd6jXpbgXHXDd3kanr7kGVYAspkSVx4KqGtzBddQtGcPnRKRjJwx5QrQv4TFmaka5zeS Gu84SEdAGvPQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068063" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162068063" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:06 -0700 IronPort-SDR: +9tLMQNrOc9ZmtQe+NKSSXZyhjRooFfxfongQijBj4blQjg4c6+fJ/ZHkl3bUeAx7HZ0qDTuwd uW08H1SVCXkg== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="343972363" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:05 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, 
linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 38/58] fs/isofs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:13 -0700 Message-Id: <20201009195033.3208459-39-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
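This call site qualifies because zisofs maps the destination pages, runs the decompression, and unmaps the pages all from the same thread. A sketch of that shape, assuming kmap_thread() keeps kmap()'s non-atomic, may-sleep semantics (the function name, page count, and workload below are illustrative only, not code from this patch):

#include <linux/highmem.h>
#include <linux/string.h>
#include <linux/errno.h>

#define EXAMPLE_MAX_PAGES	16

/* Sketch: map several pages, do work that may sleep, unmap -- all in
 * the same thread, so a thread-local protection update is sufficient.
 */
static int example_fill_pages(struct page **pages, int npages)
{
	void *addrs[EXAMPLE_MAX_PAGES];
	int i;

	if (npages > EXAMPLE_MAX_PAGES)
		return -EINVAL;

	for (i = 0; i < npages; i++)
		addrs[i] = kmap_thread(pages[i]);

	/* e.g. decompress into addrs[]; sleeping here is fine because,
	 * like kmap(), these mappings are not atomic.
	 */
	for (i = 0; i < npages; i++)
		memset(addrs[i], 0, PAGE_SIZE);

	for (i = 0; i < npages; i++)
		kunmap_thread(pages[i]);

	return 0;
}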
Signed-off-by: Ira Weiny --- fs/isofs/compress.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/isofs/compress.c b/fs/isofs/compress.c index bc12ac7e2312..ddd3fd99d2e1 100644 --- a/fs/isofs/compress.c +++ b/fs/isofs/compress.c @@ -344,7 +344,7 @@ static int zisofs_readpage(struct file *file, struct page *page) pages[i] = grab_cache_page_nowait(mapping, index); if (pages[i]) { ClearPageError(pages[i]); - kmap(pages[i]); + kmap_thread(pages[i]); } } @@ -356,7 +356,7 @@ static int zisofs_readpage(struct file *file, struct page *page) flush_dcache_page(pages[i]); if (i == full_page && err) SetPageError(pages[i]); - kunmap(pages[i]); + kunmap_thread(pages[i]); unlock_page(pages[i]); if (i != full_page) put_page(pages[i]); From patchwork Fri Oct 9 19:50:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286579 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 763C8C8A861 for ; Fri, 9 Oct 2020 19:53:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5005D2250F for ; Fri, 9 Oct 2020 19:53:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391120AbgJITxb (ORCPT ); Fri, 9 Oct 2020 15:53:31 -0400 Received: from mga14.intel.com ([192.55.52.115]:15570 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391073AbgJITxW (ORCPT ); Fri, 9 Oct 2020 15:53:22 -0400 IronPort-SDR: 7KkidwkDSmKI3kwWlMria/Sju91b/R21i0w1kLrwQYSn3K4W1EsDTLeZ7OuYero5ReiYzcFEEs JOGLzwSLgu9Q== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="164744016" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="164744016" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:09 -0700 IronPort-SDR: FciD93oXa6TN5PQF74JzeuCyRYkWBlT2lC/7ncntBELjkiuqMo1u2JSAhMViBtfJLu9YdFEYCB Wo/5PMHidIaw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="317147397" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:08 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, 
linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 39/58] fs/jffs2: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:14 -0700 Message-Id: <20201009195033.3208459-40-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Signed-off-by: Ira Weiny --- fs/jffs2/file.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c index 3e6d54f9b011..14dd2b18cc16 100644 --- a/fs/jffs2/file.c +++ b/fs/jffs2/file.c @@ -287,13 +287,13 @@ static int jffs2_write_end(struct file *filp, struct address_space *mapping, /* In 2.4, it was already kmapped by generic_file_write(). Doesn't hurt to do it again. The alternative is ifdefs, which are ugly. */ - kmap(pg); + kmap_thread(pg); ret = jffs2_write_inode_range(c, f, ri, page_address(pg) + aligned_start, (pg->index << PAGE_SHIFT) + aligned_start, end - aligned_start, &writtenlen); - kunmap(pg); + kunmap_thread(pg); if (ret) { /* There was an error writing. 
*/ From patchwork Fri Oct 9 19:50:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286572 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D5BDDC9DC83 for ; Fri, 9 Oct 2020 19:58:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A2F1F22403 for ; Fri, 9 Oct 2020 19:58:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391118AbgJITxa (ORCPT ); Fri, 9 Oct 2020 15:53:30 -0400 Received: from mga11.intel.com ([192.55.52.93]:40581 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391069AbgJITxV (ORCPT ); Fri, 9 Oct 2020 15:53:21 -0400 IronPort-SDR: fwO/I5DdQY0ZiXv5xzaxY5B2Hn9Yf4xSouziRDHGndFFdQXVy99ZUvdLs9AkGAZ8tn4i39Gq1m jIBrPxeeBsJg== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068081" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162068081" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:13 -0700 IronPort-SDR: kHLawvWt7hk0hVI10Fetwn7tOzYF5yIQvhqqrAQq3flxek1V3ypdWT/2gH8vwQndln/BZvc5kH kTRjULDVKoGw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="312652948" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:12 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , "David S. 
Miller" , Jakub Kicinski , Alexey Kuznetsov , Hideaki YOSHIFUJI , Trond Myklebust , Anna Schumaker , Boris Pismenny , Aviad Yehezkel , John Fastabend , Daniel Borkmann , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 40/58] net: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:15 -0700 Message-Id: <20201009195033.3208459-41-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny These kmap() calls in these drivers are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: "David S. 
Miller" Cc: Jakub Kicinski Cc: Alexey Kuznetsov Cc: Hideaki YOSHIFUJI Cc: Trond Myklebust Cc: Anna Schumaker Cc: Boris Pismenny Cc: Aviad Yehezkel Cc: John Fastabend Cc: Daniel Borkmann Signed-off-by: Ira Weiny --- net/ceph/messenger.c | 4 ++-- net/core/datagram.c | 4 ++-- net/core/sock.c | 8 ++++---- net/ipv4/ip_output.c | 4 ++-- net/sunrpc/cache.c | 4 ++-- net/sunrpc/xdr.c | 8 ++++---- net/tls/tls_device.c | 4 ++-- 7 files changed, 18 insertions(+), 18 deletions(-) diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c index d4d7a0e52491..0c49b8e333da 100644 --- a/net/ceph/messenger.c +++ b/net/ceph/messenger.c @@ -1535,10 +1535,10 @@ static u32 ceph_crc32c_page(u32 crc, struct page *page, { char *kaddr; - kaddr = kmap(page); + kaddr = kmap_thread(page); BUG_ON(kaddr == NULL); crc = crc32c(crc, kaddr + page_offset, length); - kunmap(page); + kunmap_thread(page); return crc; } diff --git a/net/core/datagram.c b/net/core/datagram.c index 639745d4f3b9..cbd0a343074a 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -441,14 +441,14 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset, end = start + skb_frag_size(frag); if ((copy = end - offset) > 0) { struct page *page = skb_frag_page(frag); - u8 *vaddr = kmap(page); + u8 *vaddr = kmap_thread(page); if (copy > len) copy = len; n = INDIRECT_CALL_1(cb, simple_copy_to_iter, vaddr + skb_frag_off(frag) + offset - start, copy, data, to); - kunmap(page); + kunmap_thread(page); offset += n; if (n != copy) goto short_copy; diff --git a/net/core/sock.c b/net/core/sock.c index 6c5c6b18eff4..9b46a75cd8c1 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -2846,11 +2846,11 @@ ssize_t sock_no_sendpage(struct socket *sock, struct page *page, int offset, siz ssize_t res; struct msghdr msg = {.msg_flags = flags}; struct kvec iov; - char *kaddr = kmap(page); + char *kaddr = kmap_thread(page); iov.iov_base = kaddr + offset; iov.iov_len = size; res = kernel_sendmsg(sock, &msg, &iov, 1, size); - kunmap(page); + kunmap_thread(page); return res; } EXPORT_SYMBOL(sock_no_sendpage); @@ -2861,12 +2861,12 @@ ssize_t sock_no_sendpage_locked(struct sock *sk, struct page *page, ssize_t res; struct msghdr msg = {.msg_flags = flags}; struct kvec iov; - char *kaddr = kmap(page); + char *kaddr = kmap_thread(page); iov.iov_base = kaddr + offset; iov.iov_len = size; res = kernel_sendmsg_locked(sk, &msg, &iov, 1, size); - kunmap(page); + kunmap_thread(page); return res; } EXPORT_SYMBOL(sock_no_sendpage_locked); diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index e6f2ada9e7d5..05304fb251a4 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -949,9 +949,9 @@ csum_page(struct page *page, int offset, int copy) { char *kaddr; __wsum csum; - kaddr = kmap(page); + kaddr = kmap_thread(page); csum = csum_partial(kaddr + offset, copy, 0); - kunmap(page); + kunmap_thread(page); return csum; } diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c index baef5ee43dbb..88193f2a8e6f 100644 --- a/net/sunrpc/cache.c +++ b/net/sunrpc/cache.c @@ -935,9 +935,9 @@ static ssize_t cache_downcall(struct address_space *mapping, if (!page) goto out_slow; - kaddr = kmap(page); + kaddr = kmap_thread(page); ret = cache_do_downcall(kaddr, buf, count, cd); - kunmap(page); + kunmap_thread(page); unlock_page(page); put_page(page); return ret; diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c index be11d672b5b9..00afbb48fb0a 100644 --- a/net/sunrpc/xdr.c +++ b/net/sunrpc/xdr.c @@ -1353,7 +1353,7 @@ xdr_xcode_array2(struct xdr_buf *buf, unsigned int base, 
base &= ~PAGE_MASK; avail_page = min_t(unsigned int, PAGE_SIZE - base, avail_here); - c = kmap(*ppages) + base; + c = kmap_thread(*ppages) + base; while (avail_here) { avail_here -= avail_page; @@ -1429,9 +1429,9 @@ xdr_xcode_array2(struct xdr_buf *buf, unsigned int base, } } if (avail_here) { - kunmap(*ppages); + kunmap_thread(*ppages); ppages++; - c = kmap(*ppages); + c = kmap_thread(*ppages); } avail_page = min(avail_here, @@ -1471,7 +1471,7 @@ xdr_xcode_array2(struct xdr_buf *buf, unsigned int base, out: kfree(elem); if (ppages) - kunmap(*ppages); + kunmap_thread(*ppages); return err; } diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c index b74e2741f74f..ead5b1c485f8 100644 --- a/net/tls/tls_device.c +++ b/net/tls/tls_device.c @@ -576,13 +576,13 @@ int tls_device_sendpage(struct sock *sk, struct page *page, goto out; } - kaddr = kmap(page); + kaddr = kmap_thread(page); iov.iov_base = kaddr + offset; iov.iov_len = size; iov_iter_kvec(&msg_iter, WRITE, &iov, 1, size); rc = tls_push_data(sk, &msg_iter, size, flags, TLS_RECORD_TYPE_DATA); - kunmap(page); + kunmap_thread(page); out: release_sock(sk); From patchwork Fri Oct 9 19:50:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286570 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DFC6DC388CF for ; Fri, 9 Oct 2020 19:59:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ABF5C22267 for ; Fri, 9 Oct 2020 19:59:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390481AbgJIT7a (ORCPT ); Fri, 9 Oct 2020 15:59:30 -0400 Received: from mga01.intel.com ([192.55.52.88]:3555 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391077AbgJITxW (ORCPT ); Fri, 9 Oct 2020 15:53:22 -0400 IronPort-SDR: GdHOoe2CgN4Z8QtSwH48CYegfGQDzphHJdhrL7dkMVhJPRdKdvndKGG7T8IbZ9UjohN5JKrtfB oaF9Ql3ig3bQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976424" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="182976424" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:19 -0700 IronPort-SDR: W4xxziDMxkncUnjnfmhtERgTle+fDEBI/syTBEG/tlt++YK2v+9/w8FvvM36z1FzCpPIzrEgyR 3zBS2pDk0hzg== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="354959339" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:18 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , "James E.J. Bottomley" , "Martin K. 
Petersen" , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 42/58] drivers/scsi: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:17 -0700 Message-Id: <20201009195033.3208459-43-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: "James E.J. Bottomley" Cc: "Martin K. 
Petersen" Signed-off-by: Ira Weiny --- drivers/scsi/ipr.c | 8 ++++---- drivers/scsi/pmcraid.c | 8 ++++---- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c index b0aa58d117cc..a5a0b8feb661 100644 --- a/drivers/scsi/ipr.c +++ b/drivers/scsi/ipr.c @@ -3923,9 +3923,9 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist, buffer += bsize_elem) { struct page *page = sg_page(sg); - kaddr = kmap(page); + kaddr = kmap_thread(page); memcpy(kaddr, buffer, bsize_elem); - kunmap(page); + kunmap_thread(page); sg->length = bsize_elem; @@ -3938,9 +3938,9 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist, if (len % bsize_elem) { struct page *page = sg_page(sg); - kaddr = kmap(page); + kaddr = kmap_thread(page); memcpy(kaddr, buffer, len % bsize_elem); - kunmap(page); + kunmap_thread(page); sg->length = len % bsize_elem; } diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c index aa9ae2ae8579..4b05ba4b8a11 100644 --- a/drivers/scsi/pmcraid.c +++ b/drivers/scsi/pmcraid.c @@ -3269,13 +3269,13 @@ static int pmcraid_copy_sglist( for (i = 0; i < (len / bsize_elem); i++, sg = sg_next(sg), buffer += bsize_elem) { struct page *page = sg_page(sg); - kaddr = kmap(page); + kaddr = kmap_thread(page); if (direction == DMA_TO_DEVICE) rc = copy_from_user(kaddr, buffer, bsize_elem); else rc = copy_to_user(buffer, kaddr, bsize_elem); - kunmap(page); + kunmap_thread(page); if (rc) { pmcraid_err("failed to copy user data into sg list\n"); @@ -3288,14 +3288,14 @@ static int pmcraid_copy_sglist( if (len % bsize_elem) { struct page *page = sg_page(sg); - kaddr = kmap(page); + kaddr = kmap_thread(page); if (direction == DMA_TO_DEVICE) rc = copy_from_user(kaddr, buffer, len % bsize_elem); else rc = copy_to_user(buffer, kaddr, len % bsize_elem); - kunmap(page); + kunmap_thread(page); sg->length = len % bsize_elem; } From patchwork Fri Oct 9 19:50:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286571 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BAD59C9DCA0 for ; Fri, 9 Oct 2020 19:58:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 89DAB22267 for ; Fri, 9 Oct 2020 19:58:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387915AbgJIT6m (ORCPT ); Fri, 9 Oct 2020 15:58:42 -0400 Received: from mga12.intel.com ([192.55.52.136]:29334 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391094AbgJITx1 (ORCPT ); Fri, 9 Oct 2020 15:53:27 -0400 IronPort-SDR: dNjI/+CbWDQK7qTqW5BqQsQgclEAOq7y/cS1i5jjle6fiDG/78S+D7ApyW4zeTXOr/fIB3FyQB H0HlrqWAKRjQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="144851032" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="144851032" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 
Oct 2020 12:53:26 -0700 IronPort-SDR: 2ROrwlCi2HA6jb/vbADUiWNpPA9fz//CxLO3iPg6ChQCY2CkDUKdp2BuOf7Z0K54YjJfUTECcl a5kJtnEVjTKQ== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="329006788" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:25 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Stefano Stabellini , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 44/58] drivers/xen: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:19 -0700 Message-Id: <20201009195033.3208459-45-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
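The criterion used throughout this series is that the same thread both maps and unmaps the page and the mapped address is never handed to another context. As a contrast, here is a hypothetical pattern that would have to stay on plain kmap(), because a thread-local protection update would not cover the eventual user (all names below are made up for illustration):

#include <linux/highmem.h>
#include <linux/string.h>

/* Sketch: NOT a candidate for kmap_thread().  The address is stashed
 * and later used and unmapped from a different thread, so only the
 * global mapping state covers the eventual access.
 */
struct example_deferred_io {
	struct page	*page;
	void		*kaddr;		/* consumed by a worker thread */
};

static void example_prepare(struct example_deferred_io *io, struct page *page)
{
	io->page  = page;
	io->kaddr = kmap(page);		/* must remain usable by any thread */
}

static void example_worker(struct example_deferred_io *io)
{
	/* runs in a different thread/context than example_prepare() */
	memset(io->kaddr, 0, PAGE_SIZE);
	kunmap(io->page);
}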
Cc: Stefano Stabellini Signed-off-by: Ira Weiny --- drivers/xen/gntalloc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c index 3fa40c723e8e..3b78e055feff 100644 --- a/drivers/xen/gntalloc.c +++ b/drivers/xen/gntalloc.c @@ -184,9 +184,9 @@ static int add_grefs(struct ioctl_gntalloc_alloc_gref *op, static void __del_gref(struct gntalloc_gref *gref) { if (gref->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) { - uint8_t *tmp = kmap(gref->page); + uint8_t *tmp = kmap_thread(gref->page); tmp[gref->notify.pgoff] = 0; - kunmap(gref->page); + kunmap_thread(gref->page); } if (gref->notify.flags & UNMAP_NOTIFY_SEND_EVENT) { notify_remote_via_evtchn(gref->notify.event); From patchwork Fri Oct 9 19:50:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286573 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 831D9C352B9 for ; Fri, 9 Oct 2020 19:57:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 43D9120691 for ; Fri, 9 Oct 2020 19:57:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390492AbgJIT53 (ORCPT ); Fri, 9 Oct 2020 15:57:29 -0400 Received: from mga12.intel.com ([192.55.52.136]:29334 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391149AbgJITxg (ORCPT ); Fri, 9 Oct 2020 15:53:36 -0400 IronPort-SDR: pgLxEXDdwlbiZXzBeT4ttrQBELII+3B5k1Ohhm5Voa/aT/zXiJVCv3SzCNFZeRFoxNhz6/kSzp U8bGcVHkZDFg== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="144851045" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="144851045" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:35 -0700 IronPort-SDR: 1BDjhBWRiZHjmax0H5jHPL9Ee2R7VPzFxt4XNyUZ5d5LafdvLgy40B0wtQFVU3AX2TCQgDyRso HhmDRI33qIDA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="345148602" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:34 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, 
linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 47/58] drivers/mtd: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:22 -0700 Message-Id: <20201009195033.3208459-48-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Miquel Raynal Cc: Richard Weinberger Cc: Vignesh Raghavendra Signed-off-by: Ira Weiny --- drivers/mtd/mtd_blkdevs.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c index 0c05f77f9b21..4b18998273fa 100644 --- a/drivers/mtd/mtd_blkdevs.c +++ b/drivers/mtd/mtd_blkdevs.c @@ -88,14 +88,14 @@ static blk_status_t do_blktrans_request(struct mtd_blktrans_ops *tr, return BLK_STS_IOERR; return BLK_STS_OK; case REQ_OP_READ: - buf = kmap(bio_page(req->bio)) + bio_offset(req->bio); + buf = kmap_thread(bio_page(req->bio)) + bio_offset(req->bio); for (; nsect > 0; nsect--, block++, buf += tr->blksize) { if (tr->readsect(dev, block, buf)) { - kunmap(bio_page(req->bio)); + kunmap_thread(bio_page(req->bio)); return BLK_STS_IOERR; } } - kunmap(bio_page(req->bio)); + kunmap_thread(bio_page(req->bio)); rq_flush_dcache_pages(req); return BLK_STS_OK; case REQ_OP_WRITE: @@ -103,14 +103,14 @@ static blk_status_t do_blktrans_request(struct mtd_blktrans_ops *tr, return BLK_STS_IOERR; rq_flush_dcache_pages(req); - buf = kmap(bio_page(req->bio)) + bio_offset(req->bio); + buf = kmap_thread(bio_page(req->bio)) + bio_offset(req->bio); for (; nsect > 0; nsect--, block++, buf += tr->blksize) { if (tr->writesect(dev, block, buf)) { - kunmap(bio_page(req->bio)); + kunmap_thread(bio_page(req->bio)); return BLK_STS_IOERR; } } - kunmap(bio_page(req->bio)); + kunmap_thread(bio_page(req->bio)); return BLK_STS_OK; default: return BLK_STS_IOERR; From patchwork Fri Oct 9 19:50:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286574 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
67B93C83017 for ; Fri, 9 Oct 2020 19:56:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 44459225A9 for ; Fri, 9 Oct 2020 19:56:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391197AbgJITxt (ORCPT ); Fri, 9 Oct 2020 15:53:49 -0400 Received: from mga11.intel.com ([192.55.52.93]:40655 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389472AbgJITxk (ORCPT ); Fri, 9 Oct 2020 15:53:40 -0400 IronPort-SDR: ALSFHBkR/52rttPjScavW0L6HaNv2mpe0XISWaZZJAY4d3ezNEk8iVejYdgcm7+PBdVjuPXNU0 7YZwbayMJGWA== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068148" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162068148" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:38 -0700 IronPort-SDR: 77OyjRxsHRqLdlwGyIEmSTWShPVNVQULX7LnGcMeG3NkwcCC4UbesWRkZ9bTTqcjV6t6jmiGf4 lDgqS7BTpvKw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="389237394" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:37 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Coly Li , Kent Overstreet , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 48/58] drivers/md: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:23 -0700 Message-Id: <20201009195033.3208459-49-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
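bio_csum() runs in the thread submitting the request, and each segment's mapping is created and torn down within a single loop iteration, so nothing escapes that thread. A sketch of the general per-segment shape, using crc32c() only as a stand-in for bch_crc64_update() (illustrative, not code from this patch):

#include <linux/bio.h>
#include <linux/crc32c.h>
#include <linux/highmem.h>

/* Sketch: per-segment checksum in the submitting thread; each mapping
 * lives only for one loop iteration.
 */
static u32 example_bio_crc(struct bio *bio)
{
	struct bio_vec bv;
	struct bvec_iter iter;
	u32 crc = ~0;

	bio_for_each_segment(bv, bio, iter) {
		void *d = kmap_thread(bv.bv_page) + bv.bv_offset;

		crc = crc32c(crc, d, bv.bv_len);
		kunmap_thread(bv.bv_page);
	}

	return crc;
}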
Cc: Coly Li (maintainer:BCACHE (BLOCK LAYER CACHE)) Cc: Kent Overstreet (maintainer:BCACHE (BLOCK LAYER CACHE)) Signed-off-by: Ira Weiny --- drivers/md/bcache/request.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c index c7cadaafa947..a4571f6d09dd 100644 --- a/drivers/md/bcache/request.c +++ b/drivers/md/bcache/request.c @@ -44,10 +44,10 @@ static void bio_csum(struct bio *bio, struct bkey *k) uint64_t csum = 0; bio_for_each_segment(bv, bio, iter) { - void *d = kmap(bv.bv_page) + bv.bv_offset; + void *d = kmap_thread(bv.bv_page) + bv.bv_offset; csum = bch_crc64_update(csum, d, bv.bv_len); - kunmap(bv.bv_page); + kunmap_thread(bv.bv_page); } k->ptr[KEY_PTRS(k)] = csum & (~0ULL >> 1); From patchwork Fri Oct 9 19:50:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286575 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4C3DAC832FA for ; Fri, 9 Oct 2020 19:56:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1C8AE225A9 for ; Fri, 9 Oct 2020 19:56:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390156AbgJITzr (ORCPT ); Fri, 9 Oct 2020 15:55:47 -0400 Received: from mga11.intel.com ([192.55.52.93]:40692 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391217AbgJITxx (ORCPT ); Fri, 9 Oct 2020 15:53:53 -0400 IronPort-SDR: UEyefvdFo8RhoS2CyQvHWs4S368T1Q2/u02koU3gcMNc8vJGdiDc16SNWw6hcKwXxyDmkb/Pj3 l4VxCwhGx8mA== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068217" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162068217" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga004.jf.intel.com ([10.7.209.38]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:51 -0700 IronPort-SDR: V1KD7wTraPak1zxWaGCn8NDyAgfcS4zsKsBv4Z0vY7HUn25qjJm9u99gQjtiJGVT+R+oXmWyM0 H2mADCd5/x5Q== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="462301148" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:50 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, 
ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 52/58] mm: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:27 -0700 Message-Id: <20201009195033.3208459-53-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Signed-off-by: Ira Weiny --- mm/memory.c | 8 ++++---- mm/swapfile.c | 4 ++-- mm/userfaultfd.c | 4 ++-- 3 files changed, 8 insertions(+), 8 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index fcfc4ca36eba..75a054882d7a 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4945,7 +4945,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, if (bytes > PAGE_SIZE-offset) bytes = PAGE_SIZE-offset; - maddr = kmap(page); + maddr = kmap_thread(page); if (write) { copy_to_user_page(vma, page, addr, maddr + offset, buf, bytes); @@ -4954,7 +4954,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, copy_from_user_page(vma, page, addr, buf, maddr + offset, bytes); } - kunmap(page); + kunmap_thread(page); put_page(page); } len -= bytes; @@ -5216,14 +5216,14 @@ long copy_huge_page_from_user(struct page *dst_page, for (i = 0; i < pages_per_huge_page; i++) { if (allow_pagefault) - page_kaddr = kmap(dst_page + i); + page_kaddr = kmap_thread(dst_page + i); else page_kaddr = kmap_atomic(dst_page + i); rc = copy_from_user(page_kaddr, (const void __user *)(src + i * PAGE_SIZE), PAGE_SIZE); if (allow_pagefault) - kunmap(dst_page + i); + kunmap_thread(dst_page + i); else kunmap_atomic(page_kaddr); diff --git a/mm/swapfile.c b/mm/swapfile.c index debc94155f74..e3296ff95648 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -3219,7 +3219,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags) error = PTR_ERR(page); goto bad_swap_unlock_inode; } - swap_header = kmap(page); + swap_header = kmap_thread(page); maxpages = read_swap_header(p, swap_header, inode); if (unlikely(!maxpages)) { @@ -3395,7 +3395,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags) filp_close(swap_file, NULL); out: if (page && !IS_ERR(page)) { - kunmap(page); + kunmap_thread(page); put_page(page); } if (name) diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 9a3d451402d7..4d38c881bb2d 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -586,11 +586,11 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm, mmap_read_unlock(dst_mm); BUG_ON(!page); - page_kaddr = kmap(page); + page_kaddr = kmap_thread(page); err = copy_from_user(page_kaddr, 
(const void __user *) src_addr, PAGE_SIZE); - kunmap(page); + kunmap_thread(page); if (unlikely(err)) { err = -EFAULT; goto out; From patchwork Fri Oct 9 19:50:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286576 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB677C83012 for ; Fri, 9 Oct 2020 19:56:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C4F7922B4B for ; Fri, 9 Oct 2020 19:56:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391454AbgJITzs (ORCPT ); Fri, 9 Oct 2020 15:55:48 -0400 Received: from mga09.intel.com ([134.134.136.24]:28724 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391234AbgJITx7 (ORCPT ); Fri, 9 Oct 2020 15:53:59 -0400 IronPort-SDR: 3iDe5NejxsDh9FOYg6KOLN9iu6KOFOJjJxeE0i5kD8Ld7EF88MErV+Y2f55npStNuALGtPOVmt VstioKeBLamQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165643381" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="165643381" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:57 -0700 IronPort-SDR: I2dap8dqfU4jkUPjfDUEInlXytro00aRpG9kPai7q+149QpAXJ//5ZcvM1PB/3zTdLq6ZFWD0B HAucu70KNsIA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="343972515" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:56 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Benjamin Herrenschmidt , Paul Mackerras , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, 
linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 54/58] powerpc: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:29 -0700 Message-Id: <20201009195033.3208459-55-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Benjamin Herrenschmidt Cc: Paul Mackerras Signed-off-by: Ira Weiny --- arch/powerpc/mm/mem.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 42e25874f5a8..6ef557b8dda6 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -573,9 +573,9 @@ void flush_icache_user_page(struct vm_area_struct *vma, struct page *page, { unsigned long maddr; - maddr = (unsigned long) kmap(page) + (addr & ~PAGE_MASK); + maddr = (unsigned long) kmap_thread(page) + (addr & ~PAGE_MASK); flush_icache_range(maddr, maddr + len); - kunmap(page); + kunmap_thread(page); } /* From patchwork Fri Oct 9 19:50:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 286578 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22BA2C9DC8F for ; Fri, 9 Oct 2020 19:54:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 008BE22B4E for ; Fri, 9 Oct 2020 19:54:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391260AbgJITyI (ORCPT ); Fri, 9 Oct 2020 15:54:08 -0400 Received: from mga14.intel.com ([192.55.52.115]:15724 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391245AbgJITyG (ORCPT ); Fri, 9 Oct 2020 15:54:06 -0400 IronPort-SDR: 7qaDjl2/ZxevUxp9SIgeuOAu8od3VQFfONS9RP6IrIk3xZmTXRvkzFi7/Qwdz+i1yGXcanWplh jXuBSYf4vxRQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="164744163" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="164744163" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga004.jf.intel.com ([10.7.209.38]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:54:04 -0700 IronPort-SDR: NEiErKYZfPZGqrWyxIZ0z43ef9+IKjrcXd9NJIRckxXQwdDEMd5poh/dknsqhifod6Ym/UoaZd Mu4h9c7DJvbA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="462301216" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:54:03 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , 
Subject: [PATCH RFC PKS/PMEM 56/58] dax: Stray access protection for dax_direct_access()
Date: Fri, 9 Oct 2020 12:50:31 -0700
Message-Id: <20201009195033.3208459-57-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

dax_direct_access() is a special case of accessing pmem: it accesses
the memory via a page offset and without a struct page.

Because the dax driver is well aware of the special protections it has
mapped memory with, call dev_access_[en|dis]able() directly rather than
paying the unnecessary overhead of looking up a struct page to kmap().

Similar to kmap(), we leverage the existing dax_read_[un]lock()
functions, because they are already required to surround the use of the
memory returned from dax_direct_access().
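For reference, a minimal sketch of the calling pattern this relies on;
example_copy_from_dax() and its arguments are illustrative only and not
part of this patch.  A caller that already brackets dax_direct_access()
with dax_read_lock()/dax_read_unlock() picks up the stray-access
protection with no further changes:

#include <linux/dax.h>
#include <linux/string.h>

/*
 * Illustrative only, not part of this patch.
 * Assumes len <= PAGE_SIZE (a single page is requested).
 */
static int example_copy_from_dax(struct dax_device *dax_dev, pgoff_t pgoff,
				 void *dst, size_t len)
{
	void *kaddr;
	long avail;
	int id;

	id = dax_read_lock();		/* now also dev_access_enable(false) */
	avail = dax_direct_access(dax_dev, pgoff, 1, &kaddr, NULL);
	if (avail < 0) {
		dax_read_unlock(id);	/* now also dev_access_disable(false) */
		return avail;
	}
	memcpy(dst, kaddr, len);	/* access permitted inside the lock */
	dax_read_unlock(id);
	return 0;
}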
Signed-off-by: Ira Weiny
---
 drivers/dax/super.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index e84070b55463..0ddb3ee73e36 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -30,6 +30,7 @@ static DEFINE_SPINLOCK(dax_host_lock);

 int dax_read_lock(void)
 {
+	dev_access_enable(false);
 	return srcu_read_lock(&dax_srcu);
 }
 EXPORT_SYMBOL_GPL(dax_read_lock);
@@ -37,6 +38,7 @@ EXPORT_SYMBOL_GPL(dax_read_lock);

 void dax_read_unlock(int id)
 {
 	srcu_read_unlock(&dax_srcu, id);
+	dev_access_disable(false);
 }
 EXPORT_SYMBOL_GPL(dax_read_unlock);

From patchwork Fri Oct 9 19:50:32 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 286577
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 57/58] nvdimm/pmem: Stray access protection for pmem->virt_addr
Date: Fri, 9 Oct 2020 12:50:32 -0700
Message-Id: <20201009195033.3208459-58-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The pmem driver uses a cached virtual address to access its memory
directly.  Because the nvdimm driver is well aware of the special
protections it has mapped memory with, we call dev_access_[en|dis]able()
around the direct pmem->virt_addr (pmem_addr) usage rather than paying
the unnecessary overhead of looking up a struct page to kmap().

Signed-off-by: Ira Weiny
---
 drivers/nvdimm/pmem.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index fab29b514372..e4dc1ae990fc 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -148,7 +148,9 @@ static blk_status_t pmem_do_read(struct pmem_device *pmem,
 	if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
 		return BLK_STS_IOERR;

+	dev_access_enable(false);
 	rc = read_pmem(page, page_off, pmem_addr, len);
+	dev_access_disable(false);
 	flush_dcache_page(page);
 	return rc;
 }
@@ -180,11 +182,13 @@ static blk_status_t pmem_do_write(struct pmem_device *pmem,
 	 * after clear poison.
 	 */
 	flush_dcache_page(page);
+	dev_access_enable(false);
 	write_pmem(pmem_addr, page, page_off, len);
 	if (unlikely(bad_pmem)) {
 		rc = pmem_clear_poison(pmem, pmem_off, len);
 		write_pmem(pmem_addr, page, page_off, len);
 	}
+	dev_access_disable(false);
 	return rc;
 }
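For reference, a minimal sketch of the access pattern the hunks above
follow; example_pmem_copy() and its arguments are illustrative only and
not part of this patch, and the real driver uses read_pmem()/write_pmem()
rather than a bare memcpy():

#include <linux/string.h>
#include "pmem.h"	/* struct pmem_device (drivers/nvdimm/pmem.h) */

/*
 * Illustrative only, not part of this patch: every direct dereference
 * of pmem->virt_addr is bracketed by dev_access_enable(false) and
 * dev_access_disable(false), keeping the PKS-protected mapping open
 * only for the duration of the copy itself.
 */
static void example_pmem_copy(struct pmem_device *pmem, void *dst,
			      phys_addr_t pmem_off, size_t len)
{
	void *pmem_addr = pmem->virt_addr + pmem_off;

	dev_access_enable(false);	/* open the protected mapping */
	memcpy(dst, pmem_addr, len);
	dev_access_disable(false);	/* close it again immediately */
}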