From patchwork Fri Oct 9 19:49:36 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 284808
From: ira.weiny@intel.com
To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra
Cc: Ira Weiny , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 01/58]
x86/pks: Add a global pkrs option
Date: Fri, 9 Oct 2020 12:49:36 -0700
Message-Id: <20201009195033.3208459-2-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: ceph-devel@vger.kernel.org

From: Ira Weiny

Some users, such as kmap(), sometimes require PKS to be global. However, updating all CPUs, and worse yet all threads, is expensive.

Introduce a global PKRS state which is checked at critical times to allow the state to enable access when global PKS is required.

To accomplish this with minimal locking, the code is carefully designed around the following key concepts.

1) Borrow the idea of lazy TLB invalidations from the fault handler code. When enabling PKS access we anticipate that other threads are not yet running. However, if they are, we catch the fault and clean up the MSR value.

2) When disabling PKS access we force an update of the MSR value on all CPUs. This is required to block access as soon as possible.[1] However, it is key that we never attempt to update the per-task PKS values directly. See the next point.

3) Per-task PKS values never get updated with global PKS values. This is key to avoiding locking requirements and a nearly intractable problem of trying to update every task in the system. Here are a few key points.

3a) The MSR value can be updated with the global PKS value if that global value happened to change while the task was running.

3b) If the task was sleeping while the global PKS was updated then the global value is added in when tasks are scheduled.

3c) If the global PKS value restricts access, the MSR is updated as soon as possible[1] and the thread value is not updated, which ensures the thread does not retain the elevated privileges after a context switch.

4) Follow-on patches must be careful to preserve the separation of the thread PKRS value and the MSR value.

5) Access Disable on any individual pkey is turned into (Access Disable | Write Disable) to facilitate faster integration of the global value into the thread-local MSR through a simple '&' operation. Doing otherwise would result in complicated individual bit manipulation for each pkey.

[1] There is a race condition which is deliberately ignored for performance reasons. It potentially allows a thread access until the end of its time slice. After the context switch the global value will be restored.

Signed-off-by: Ira Weiny

---
 Documentation/core-api/protection-keys.rst |  11 +-
 arch/x86/entry/common.c                    |   7 +
 arch/x86/include/asm/pkeys.h               |   6 +-
 arch/x86/include/asm/pkeys_common.h        |   8 +-
 arch/x86/kernel/process.c                  |  74 +++++++-
 arch/x86/mm/fault.c                        | 189 ++++++++++++++++-----
 arch/x86/mm/pkeys.c                        |  88 ++++++++--
 include/linux/pkeys.h                      |   6 +-
 lib/pks/pks_test.c                         |  16 +-
 9 files changed, 329 insertions(+), 76 deletions(-)

diff --git a/Documentation/core-api/protection-keys.rst b/Documentation/core-api/protection-keys.rst
index c60366921d60..9e8a98653e13 100644
--- a/Documentation/core-api/protection-keys.rst
+++ b/Documentation/core-api/protection-keys.rst
@@ -121,9 +121,9 @@ mapping adds that mapping to the protection domain.
int pks_key_alloc(const char * const pkey_user); #define PAGE_KERNEL_PKEY(pkey) #define _PAGE_KEY(pkey) - void pks_mknoaccess(int pkey); - void pks_mkread(int pkey); - void pks_mkrdwr(int pkey); + void pks_mknoaccess(int pkey, bool global); + void pks_mkread(int pkey, bool global); + void pks_mkrdwr(int pkey, bool global); void pks_key_free(int pkey); pks_key_alloc() allocates keys dynamically to allow better use of the limited @@ -141,7 +141,10 @@ _PAGE_KEY(). The pks_mk*() family of calls allows kernel users the ability to change the protections for the domain identified by the pkey specified. 3 states are available pks_mknoaccess(), pks_mkread(), and pks_mkrdwr() which set the access -to none, read, and read/write respectively. +to none, read, and read/write respectively. 'global' specifies that the +protection should be set across all threads (logical CPU's) not just the +current running thread/CPU. This increases the overhead of PKS and lessens the +protection so it should be used sparingly. Finally, pks_key_free() allows a user to return the key to the allocator for use by others. diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c index 324a8fd5ac10..86ad32e0095e 100644 --- a/arch/x86/entry/common.c +++ b/arch/x86/entry/common.c @@ -261,12 +261,19 @@ noinstr void idtentry_exit_nmi(struct pt_regs *regs, irqentry_state_t *irq_state * current running value and set the default PKRS value for the duration of the * exception. Thus preventing exception handlers from having the elevated * access of the interrupted task. + * + * NOTE That the thread saved PKRS must be preserved separately to ensure + * global overrides do not 'stick' on a thread. */ noinstr void irq_save_pkrs(irqentry_state_t *state) { if (!cpu_feature_enabled(X86_FEATURE_PKS)) return; + /* + * The thread_pkrs must be maintained separately to prevent global + * overrides from 'sticking' on a thread. + */ state->thread_pkrs = current->thread.saved_pkrs; state->pkrs = this_cpu_read(pkrs_cache); write_pkrs(INIT_PKRS_VALUE); diff --git a/arch/x86/include/asm/pkeys.h b/arch/x86/include/asm/pkeys.h index 79952216474e..cae0153a5480 100644 --- a/arch/x86/include/asm/pkeys.h +++ b/arch/x86/include/asm/pkeys.h @@ -143,9 +143,9 @@ u32 update_pkey_val(u32 pk_reg, int pkey, unsigned int flags); int pks_key_alloc(const char *const pkey_user); void pks_key_free(int pkey); -void pks_mknoaccess(int pkey); -void pks_mkread(int pkey); -void pks_mkrdwr(int pkey); +void pks_mknoaccess(int pkey, bool global); +void pks_mkread(int pkey, bool global); +void pks_mkrdwr(int pkey, bool global); #endif /* CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */ diff --git a/arch/x86/include/asm/pkeys_common.h b/arch/x86/include/asm/pkeys_common.h index 8961e2ddd6ff..e380679ba1bb 100644 --- a/arch/x86/include/asm/pkeys_common.h +++ b/arch/x86/include/asm/pkeys_common.h @@ -6,7 +6,12 @@ #define PKR_WD_BIT 0x2 #define PKR_BITS_PER_PKEY 2 -#define PKR_AD_KEY(pkey) (PKR_AD_BIT << ((pkey) * PKR_BITS_PER_PKEY)) +/* + * We must define 11b as the default to make global overrides efficient. + * See arch/x86/kernel/process.c where the global pkrs is factored in during + * context switch. + */ +#define PKR_AD_KEY(pkey) ((PKR_WD_BIT | PKR_AD_BIT) << ((pkey) * PKR_BITS_PER_PKEY)) /* * Define a default PKRS value for each task. 
@@ -27,6 +32,7 @@ #define PKS_NUM_KEYS 16 #ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS +extern u32 pkrs_global_cache; DECLARE_PER_CPU(u32, pkrs_cache); noinstr void write_pkrs(u32 new_pkrs); #else diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index eb3a95a69392..58edd162d9cb 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -43,7 +43,7 @@ #include #include #include -#include +#include #include "process.h" @@ -189,15 +189,83 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg, } #ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS -DECLARE_PER_CPU(u32, pkrs_cache); static inline void pks_init_task(struct task_struct *tsk) { /* New tasks get the most restrictive PKRS value */ tsk->thread.saved_pkrs = INIT_PKRS_VALUE; } + +extern u32 pkrs_global_cache; + +/** + * The global PKRS value can only increase access. Because 01b and 11b both + * disable access. The following truth table is our desired result for each of + * the pkeys when we add in the global permissions. + * + * 00 R/W - Write enabled (all access) + * 10 Read - write disabled (Read only) + * 01 NO Acc - access disabled + * 11 NO Acc - also access disabled + * + * local global desired required + * result operation + * 00 00 00 & + * 00 10 00 & + * 00 01 00 & + * 00 11 00 & + * + * 10 00 00 & + * 10 10 10 & + * 10 01 10 ^ special case + * 10 11 10 & + * + * 01 00 00 & + * 01 10 10 ^ special case + * 01 01 01 & + * 01 11 01 & + * + * 11 00 00 & + * 11 10 10 & + * 11 01 01 & + * 11 11 11 & + * + * In order to eliminate the need to loop through each pkey and deal with the 2 + * above special cases we force all 01b values to 11b through the API thus + * resulting in the simplified truth table below. + * + * 00 R/W - Write enabled (all access) + * 10 Read - write disabled (Read only) + * 01 NO Acc - access disabled + * (Not allowed in the API always use 11) + * 11 NO Acc - access disabled + * + * local global desired effective + * result operation + * 00 00 00 & + * 00 10 00 & + * 00 11 00 & + * 00 11 00 & + * + * 10 00 00 & + * 10 10 10 & + * 10 11 10 & + * 10 11 10 & + * + * 11 00 00 & + * 11 10 10 & + * 11 11 11 & + * 11 11 11 & + * + * 11 00 00 & + * 11 10 10 & + * 11 11 11 & + * 11 11 11 & + * + * Thus we can simply 'AND' in the global pkrs value. + */ static inline void pks_sched_in(void) { - write_pkrs(current->thread.saved_pkrs); + write_pkrs(current->thread.saved_pkrs & pkrs_global_cache); } #else static inline void pks_init_task(struct task_struct *tsk) { } diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index dd5af9399131..4b4ff9efa298 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -32,6 +32,8 @@ #include /* VMALLOC_START, ... */ #include /* kvm_handle_async_pf */ +#include + #define CREATE_TRACE_POINTS #include @@ -995,9 +997,124 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code, } } -static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte) +#ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS +/* + * check if we have had a 'global' pkey update. If so, handle this like a lazy + * TLB; fix up the local MSR and return + * + * See arch/x86/kernel/process.c for the explanation on how global is handled + * with a simple '&' operation. + * + * Also we don't update the current thread saved_pkrs because we don't want the + * global value to 'stick' with the thread. Rather we want this to be valid + * only for the remainder of this time slice. 
For subsequent time slices the + * global value will be factored in during schedule; see arch/x86/kernel/process.c + * + * Finally we have a trade off between performance and forcing a restriction of + * permissions across all CPUs on a global update. + * + * Given the following window. + * + * Global PKRS CPU #0 CPU #1 + * cache MSR MSR + * + * | | | + * Global |----------\ | | + * Restriction | ------------> read | <= T1 + * (on CPU #0) | | | | + * ------\ | | | | + * ------>| | | | + * | | | | + * Update CPU #1 |--------\ | | | + * | --------------\ | | + * | | --|------------>| + * Global remote | | | | + * MSR update | | | | + * (CPU 2-n) | | | | + * |-----> CPU's | v | + * local | (2-N) | local --\ | + * update | | update ------>|(Update <= T2 + * ----------------\ | | Incorrect) + * | -----------\ | | + * | --->|(Update OK) | + * Context | | | + * Switch |----------\ | | + * | ------------> read | + * | | | | + * | | | | + * | | v | + * | | local --\ | + * | | update ------>|(Update + * | | | Correct) + * + * We allow for a larger window of the global pkey being open because global + * updates should be rare and we don't want to burden normal faults with having + * to read the global state. + */ +static bool global_pkey_is_enabled(pte_t *pte, bool is_write, + irqentry_state_t *irq_state) +{ + u8 pkey = pte_flags_pkey(pte->pte); + int pkey_shift = pkey * PKR_BITS_PER_PKEY; + u32 mask = (((1 << PKR_BITS_PER_PKEY) - 1) << pkey_shift); + u32 global = READ_ONCE(pkrs_global_cache); + u32 val; + + /* Return early if global access is not valid */ + val = (global & mask) >> pkey_shift; + if ((val & PKR_AD_BIT) || (is_write && (val & PKR_WD_BIT))) + return false; + + irq_state->pkrs &= global; + + return true; +} + +#else /* !CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */ +__always_inline bool global_pkey_is_enabled(pte_t *pte, bool is_write, + irqentry_state_t *irq_state) +{ + return false; +} +#endif /* CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */ + +#ifdef CONFIG_PKS_TESTING +bool pks_test_callback(irqentry_state_t *irq_state); +static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state) +{ + /* + * If we get a protection key exception it could be because we + * are running the PKS test. If so, pks_test_callback() will + * clear the protection mechanism and return true to indicate + * the fault was handled + */ + return pks_test_callback(irq_state); +} +#else /* !CONFIG_PKS_TESTING */ +static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state) +{ + return false; +} +#endif /* CONFIG_PKS_TESTING */ + + +static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte, + irqentry_state_t *irq_state) { - if ((error_code & X86_PF_WRITE) && !pte_write(*pte)) + bool is_write = (error_code & X86_PF_WRITE); + + if (IS_ENABLED(CONFIG_ARCH_HAS_SUPERVISOR_PKEYS) && + error_code & X86_PF_PK) { + if (global_pkey_is_enabled(pte, is_write, irq_state)) + return 1; + + if (handle_pks_testing(error_code, irq_state)) + return 1; + + return 0; + } + + if (is_write && !pte_write(*pte)) return 0; if ((error_code & X86_PF_INSTR) && !pte_exec(*pte)) @@ -1007,7 +1124,7 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte) } /* - * Handle a spurious fault caused by a stale TLB entry. + * Handle a spurious fault caused by a stale TLB entry or a lazy PKRS update. * * This allows us to lazily refresh the TLB when increasing the * permissions of a kernel page (RO -> RW or NX -> X). 
Doing it @@ -1022,13 +1139,19 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte) * There are no security implications to leaving a stale TLB when * increasing the permissions on a page. * + * Similarly, PKRS increases in permissions are done on a thread local level. + * But if the caller indicates the permission should be allowd globaly we can + * lazily update only those threads which fault and avoid a global IPI MSR + * update. + * * Returns non-zero if a spurious fault was handled, zero otherwise. * * See Intel Developer's Manual Vol 3 Section 4.10.4.3, bullet 3 * (Optional Invalidation). */ static noinline int -spurious_kernel_fault(unsigned long error_code, unsigned long address) +spurious_kernel_fault(unsigned long error_code, unsigned long address, + irqentry_state_t *irq_state) { pgd_t *pgd; p4d_t *p4d; @@ -1038,17 +1161,19 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address) int ret; /* - * Only writes to RO or instruction fetches from NX may cause - * spurious faults. + * Only PKey faults or writes to RO or instruction fetches from NX may + * cause spurious faults. * * These could be from user or supervisor accesses but the TLB * is only lazily flushed after a kernel mapping protection * change, so user accesses are not expected to cause spurious * faults. */ - if (error_code != (X86_PF_WRITE | X86_PF_PROT) && - error_code != (X86_PF_INSTR | X86_PF_PROT)) - return 0; + if (!(error_code & X86_PF_PK)) { + if (error_code != (X86_PF_WRITE | X86_PF_PROT) && + error_code != (X86_PF_INSTR | X86_PF_PROT)) + return 0; + } pgd = init_mm.pgd + pgd_index(address); if (!pgd_present(*pgd)) @@ -1059,27 +1184,31 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address) return 0; if (p4d_large(*p4d)) - return spurious_kernel_fault_check(error_code, (pte_t *) p4d); + return spurious_kernel_fault_check(error_code, (pte_t *) p4d, + irq_state); pud = pud_offset(p4d, address); if (!pud_present(*pud)) return 0; if (pud_large(*pud)) - return spurious_kernel_fault_check(error_code, (pte_t *) pud); + return spurious_kernel_fault_check(error_code, (pte_t *) pud, + irq_state); pmd = pmd_offset(pud, address); if (!pmd_present(*pmd)) return 0; if (pmd_large(*pmd)) - return spurious_kernel_fault_check(error_code, (pte_t *) pmd); + return spurious_kernel_fault_check(error_code, (pte_t *) pmd, + irq_state); pte = pte_offset_kernel(pmd, address); if (!pte_present(*pte)) return 0; - ret = spurious_kernel_fault_check(error_code, pte); + ret = spurious_kernel_fault_check(error_code, pte, + irq_state); if (!ret) return 0; @@ -1087,7 +1216,8 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address) * Make sure we have permissions in PMD. * If not, then there's a bug in the page tables: */ - ret = spurious_kernel_fault_check(error_code, (pte_t *) pmd); + ret = spurious_kernel_fault_check(error_code, (pte_t *) pmd, + irq_state); WARN_ONCE(!ret, "PMD has incorrect permission bits\n"); return ret; @@ -1150,25 +1280,6 @@ static int fault_in_kernel_space(unsigned long address) return address >= TASK_SIZE_MAX; } -#ifdef CONFIG_PKS_TESTING -bool pks_test_callback(irqentry_state_t *irq_state); -static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state) -{ - /* - * If we get a protection key exception it could be because we - * are running the PKS test. If so, pks_test_callback() will - * clear the protection mechanism and return true to indicate - * the fault was handled. 
- */ - return (hw_error_code & X86_PF_PK) && pks_test_callback(irq_state); -} -#else -static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state) -{ - return false; -} -#endif - /* * Called for all faults where 'address' is part of the kernel address * space. Might get called for faults that originate from *code* that @@ -1186,9 +1297,6 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code, !cpu_feature_enabled(X86_FEATURE_PKS)) WARN_ON_ONCE(hw_error_code & X86_PF_PK); - if (handle_pks_testing(hw_error_code, irq_state)) - return; - #ifdef CONFIG_X86_32 /* * We can fault-in kernel-space virtual memory on-demand. The @@ -1220,8 +1328,11 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code, } #endif - /* Was the fault spurious, caused by lazy TLB invalidation? */ - if (spurious_kernel_fault(hw_error_code, address)) + /* + * Was the fault spurious; caused by lazy TLB invalidation or PKRS + * update? + */ + if (spurious_kernel_fault(hw_error_code, address, irq_state)) return; /* kprobes don't want to hook the spurious faults: */ @@ -1492,7 +1603,7 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault) * * Fingers crossed. * - * The async #PF handling code takes care of idtentry handling + * The async #PF handling code takes care of irqentry handling * itself. */ if (kvm_handle_async_pf(regs, (u32)address)) diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c index 2431c68ef752..a45893069877 100644 --- a/arch/x86/mm/pkeys.c +++ b/arch/x86/mm/pkeys.c @@ -263,33 +263,84 @@ noinstr void write_pkrs(u32 new_pkrs) } EXPORT_SYMBOL_GPL(write_pkrs); +/* + * NOTE: The pkrs_global_cache is _never_ stored in the per thread PKRS cache + * values [thread.saved_pkrs] by design + * + * This allows us to invalidate access on running threads immediately upon + * invalidate. Sleeping threads will not be enabled due to the algorithm + * during pkrs_sched_in() + */ +DEFINE_SPINLOCK(pkrs_global_cache_lock); +u32 pkrs_global_cache = INIT_PKRS_VALUE; +EXPORT_SYMBOL_GPL(pkrs_global_cache); + +static inline void update_global_pkrs(int pkey, unsigned long protection) +{ + int pkey_shift = pkey * PKR_BITS_PER_PKEY; + u32 mask = (((1 << PKR_BITS_PER_PKEY) - 1) << pkey_shift); + u32 old_val; + + spin_lock(&pkrs_global_cache_lock); + old_val = (pkrs_global_cache & mask) >> pkey_shift; + pkrs_global_cache &= ~mask; + if (protection & PKEY_DISABLE_ACCESS) + pkrs_global_cache |= PKR_AD_BIT << pkey_shift; + if (protection & PKEY_DISABLE_WRITE) + pkrs_global_cache |= PKR_WD_BIT << pkey_shift; + + /* + * If we are preventing access from the old value. Force the + * update on all running threads. + */ + if (((old_val == 0) && protection) || + ((old_val & PKR_WD_BIT) && (protection & PKEY_DISABLE_ACCESS))) { + int cpu; + + for_each_online_cpu(cpu) { + u32 *ptr = per_cpu_ptr(&pkrs_cache, cpu); + + *ptr = update_pkey_val(*ptr, pkey, protection); + wrmsrl_on_cpu(cpu, MSR_IA32_PKRS, *ptr); + put_cpu_ptr(ptr); + } + } + spin_unlock(&pkrs_global_cache_lock); +} + /** * Do not call this directly, see pks_mk*() below. * * @pkey: Key for the domain to change * @protection: protection bits to be used + * @global: should this change be made globally or not. 
* * Protection utilizes the same protection bits specified for User pkeys * PKEY_DISABLE_ACCESS * PKEY_DISABLE_WRITE * */ -static inline void pks_update_protection(int pkey, unsigned long protection) +static inline void pks_update_protection(int pkey, unsigned long protection, + bool global) { - current->thread.saved_pkrs = update_pkey_val(current->thread.saved_pkrs, - pkey, protection); preempt_disable(); + if (global) + update_global_pkrs(pkey, protection); + + current->thread.saved_pkrs = update_pkey_val(current->thread.saved_pkrs, pkey, + protection); write_pkrs(current->thread.saved_pkrs); + preempt_enable(); } /** * PKS access control functions * - * Change the access of the domain specified by the pkey. These are global - * updates. They only affects the current running thread. It is undefined and - * a bug for users to call this without having allocated a pkey and using it as - * pkey here. + * Change the access of the domain specified by the pkey. These may be global + * updates depending on the value of global. It is undefined and a bug for + * users to call this without having allocated a pkey and using it as pkey + * here. * * pks_mknoaccess() * Disable all access to the domain @@ -299,23 +350,30 @@ static inline void pks_update_protection(int pkey, unsigned long protection) * Make the domain Read/Write * * @pkey the pkey for which the access should change. - * + * @global if true the access is enabled on all threads/logical cpus */ -void pks_mknoaccess(int pkey) +void pks_mknoaccess(int pkey, bool global) { - pks_update_protection(pkey, PKEY_DISABLE_ACCESS); + /* + * We force disable access to be 11b + * (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE) + * instaed of 01b See arch/x86/kernel/process.c where the global pkrs + * is factored in during context switch. + */ + pks_update_protection(pkey, PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE, + global); } EXPORT_SYMBOL_GPL(pks_mknoaccess); -void pks_mkread(int pkey) +void pks_mkread(int pkey, bool global) { - pks_update_protection(pkey, PKEY_DISABLE_WRITE); + pks_update_protection(pkey, PKEY_DISABLE_WRITE, global); } EXPORT_SYMBOL_GPL(pks_mkread); -void pks_mkrdwr(int pkey) +void pks_mkrdwr(int pkey, bool global) { - pks_update_protection(pkey, 0); + pks_update_protection(pkey, 0, global); } EXPORT_SYMBOL_GPL(pks_mkrdwr); @@ -377,7 +435,7 @@ void pks_key_free(int pkey) return; /* Restore to default of no access */ - pks_mknoaccess(pkey); + pks_mknoaccess(pkey, true); pks_key_users[pkey] = NULL; __clear_bit(pkey, &pks_key_allocation_map); } diff --git a/include/linux/pkeys.h b/include/linux/pkeys.h index f9552bd9341f..8f3bfec83949 100644 --- a/include/linux/pkeys.h +++ b/include/linux/pkeys.h @@ -57,15 +57,15 @@ static inline int pks_key_alloc(const char * const pkey_user) static inline void pks_key_free(int pkey) { } -static inline void pks_mknoaccess(int pkey) +static inline void pks_mknoaccess(int pkey, bool global) { WARN_ON_ONCE(1); } -static inline void pks_mkread(int pkey) +static inline void pks_mkread(int pkey, bool global) { WARN_ON_ONCE(1); } -static inline void pks_mkrdwr(int pkey) +static inline void pks_mkrdwr(int pkey, bool global) { WARN_ON_ONCE(1); } diff --git a/lib/pks/pks_test.c b/lib/pks/pks_test.c index d7dbf92527bd..286c8b8457da 100644 --- a/lib/pks/pks_test.c +++ b/lib/pks/pks_test.c @@ -163,12 +163,12 @@ static void check_exception(irqentry_state_t *irq_state) * Check we can update the value during exception without affecting the * calling thread. The calling thread is checked after exception... 
*/ - pks_mkrdwr(test_armed_key); + pks_mkrdwr(test_armed_key, false); if (!check_pkrs(test_armed_key, 0)) { pr_err(" FAIL: exception did not change register to 0\n"); test_exception_ctx->pass = false; } - pks_mknoaccess(test_armed_key); + pks_mknoaccess(test_armed_key, false); if (!check_pkrs(test_armed_key, PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)) { pr_err(" FAIL: exception did not change register to 0x3\n"); test_exception_ctx->pass = false; @@ -314,13 +314,13 @@ static int run_access_test(struct pks_test_ctx *ctx, { switch (test->mode) { case PKS_TEST_NO_ACCESS: - pks_mknoaccess(ctx->pkey); + pks_mknoaccess(ctx->pkey, false); break; case PKS_TEST_RDWR: - pks_mkrdwr(ctx->pkey); + pks_mkrdwr(ctx->pkey, false); break; case PKS_TEST_RDONLY: - pks_mkread(ctx->pkey); + pks_mkread(ctx->pkey, false); break; default: pr_err("BUG in test invalid mode\n"); @@ -476,7 +476,7 @@ static void run_exception_test(void) goto free_context; } - pks_mkread(ctx->pkey); + pks_mkread(ctx->pkey, false); spin_lock(&test_lock); WRITE_ONCE(test_exception_ctx, ctx); @@ -556,7 +556,7 @@ static void crash_it(void) return; } - pks_mknoaccess(ctx->pkey); + pks_mknoaccess(ctx->pkey, false); spin_lock(&test_lock); WRITE_ONCE(test_armed_key, 0); @@ -618,7 +618,7 @@ static ssize_t pks_write_file(struct file *file, const char __user *user_buf, /* start of context switch test */ if (!strcmp(buf, "1")) { /* Ensure a known state to test context switch */ - pks_mknoaccess(ctx->pkey); + pks_mknoaccess(ctx->pkey, false); } /* After context switch msr should be restored */ From patchwork Fri Oct 9 19:49:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284780 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8FAB4C433DF for ; Fri, 9 Oct 2020 20:11:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 41E942053B for ; Fri, 9 Oct 2020 20:11:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391890AbgJIUKj (ORCPT ); Fri, 9 Oct 2020 16:10:39 -0400 Received: from mga04.intel.com ([192.55.52.120]:43246 "EHLO mga04.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403817AbgJITu7 (ORCPT ); Fri, 9 Oct 2020 15:50:59 -0400 IronPort-SDR: I+kluzUq8pxlBDtt5OqAnT2WCF+BNOozZc8KJ9HRHJVfpcd0kw39MYtQEO55RaZxKxDv2NPRPB 5bQ8ZYnMvMug== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162893218" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162893218" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga004.jf.intel.com ([10.7.209.38]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:50:56 -0700 IronPort-SDR: WlCZORaTV0ocl1yC7IDa4J4IbRZJZvP+KytzS1LtxRNRUTgHeg4W5ZWyzNndx3DpBXRsqL9R3c CQItA3N5s8ig== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="462300571" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga004-auth.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:50:55 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 03/58] memremap: Add zone device access protection Date: Fri, 9 Oct 2020 12:49:38 -0700 Message-Id: <20201009195033.3208459-4-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny Device managed memory exposes itself to the kernel direct map which allows stray pointers to access these device memories. Stray pointers to normal memory may result in a crash or other undesirable behavior which, while unfortunate, are usually recoverable with a reboot. Stray access, specifically stray writes, to areas such as non-volatile memory are permanent in nature and thus are more likely to result in permanent user data loss vs stray access to other memory areas. Furthermore, we protect against reads which can help with speculative reads to poison areas as well. But this is a secondary reason. Set up an infrastructure for extra device access protection. Then implement the new protection using the new Protection Keys Supervisor (PKS) on architectures which support it. To enable this extra protection devices specify a flag in the pgmap to indicate that these areas wish to use additional protection. Kernel code which intends to access this memory can do so automatically through the use of the kmap infrastructure calling into dev_access_[enable|disable]() described here. The kmap infrastructure is implemented in a follow on patch. In addition, users can directly enable/disable the access through dev_access_[enable|disable]() if they have a priori knowledge of the type of pages they are accessing. All calls to enable/disable protection flow through dev_access_[enable|disable]() and are nestable by the use of a per task reference count. 
This reference count does 2 things. 1) Allows a thread to nest calls to disable protection such that the first call to re-enable protection does not 'break' the last access of the pmem device memory. 2) Provides faster performance by avoiding lots of MSR writes. For example, looping over a sequence of pmem pages. In addition, we must ensure the reference count is preserved through an exception so we add the count to irqentry_state_t and save/restore the reference count while giving exceptions their own count should they use a kmap call. The following shows how this works through an exception: ... // ref == 0 dev_access_enable() // ref += 1 ==> disable protection irq() // enable protection // ref = 0 _handler() dev_access_enable() // ref += 1 ==> disable protection dev_access_disable() // ref -= 1 ==> enable protection // WARN_ON(ref != 0) // disable protection do_pmem_thing() // all good here dev_access_disable() // ref -= 1 ==> 0 ==> enable protection ... Nested exceptions operate the same way with each exception storing the interrupted exception state all the way down. The pkey value is never free'ed as this optimizes the implementation to be either on or off using a static branch conditional in the fast paths. Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Steven Rostedt Cc: Ben Segall Cc: Mel Gorman Signed-off-by: Ira Weiny --- arch/x86/entry/common.c | 21 +++++++++ include/linux/entry-common.h | 3 ++ include/linux/memremap.h | 1 + include/linux/mm.h | 43 +++++++++++++++++ include/linux/sched.h | 3 ++ init/init_task.c | 3 ++ kernel/fork.c | 3 ++ mm/Kconfig | 13 ++++++ mm/memremap.c | 90 ++++++++++++++++++++++++++++++++++++ 9 files changed, 180 insertions(+) diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c index 86ad32e0095e..3680724c1a4d 100644 --- a/arch/x86/entry/common.c +++ b/arch/x86/entry/common.c @@ -264,12 +264,27 @@ noinstr void idtentry_exit_nmi(struct pt_regs *regs, irqentry_state_t *irq_state * * NOTE That the thread saved PKRS must be preserved separately to ensure * global overrides do not 'stick' on a thread. + * + * Furthermore, Zone Device Access Protection maintains access in a re-entrant + * manner through a reference count which also needs to be maintained should + * exception handlers use those interfaces for memory access. Here we start + * off the exception handler ref count to 0 and ensure it is 0 when the + * exception is done. Then restore it for the interrupted task. */ noinstr void irq_save_pkrs(irqentry_state_t *state) { if (!cpu_feature_enabled(X86_FEATURE_PKS)) return; +#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION + /* + * Save the ref count of the current running process and set it to 0 + * for any irq users to properly track re-entrance + */ + state->pkrs_ref = current->dev_page_access_ref; + current->dev_page_access_ref = 0; +#endif + /* * The thread_pkrs must be maintained separately to prevent global * overrides from 'sticking' on a thread. 
@@ -286,6 +301,12 @@ noinstr void irq_restore_pkrs(irqentry_state_t *state) write_pkrs(state->pkrs); current->thread.saved_pkrs = state->thread_pkrs; + +#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION + WARN_ON_ONCE(current->dev_page_access_ref != 0); + /* Restore the interrupted process reference */ + current->dev_page_access_ref = state->pkrs_ref; +#endif } #endif /* CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */ diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h index c3b361ffa059..06743cce2dbf 100644 --- a/include/linux/entry-common.h +++ b/include/linux/entry-common.h @@ -343,6 +343,9 @@ void irqentry_exit_to_user_mode(struct pt_regs *regs); #ifndef irqentry_state typedef struct irqentry_state { #ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS +#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION + unsigned int pkrs_ref; +#endif u32 pkrs; u32 thread_pkrs; #endif diff --git a/include/linux/memremap.h b/include/linux/memremap.h index e5862746751b..b6713ee7b218 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h @@ -89,6 +89,7 @@ struct dev_pagemap_ops { }; #define PGMAP_ALTMAP_VALID (1 << 0) +#define PGMAP_PROT_ENABLED (1 << 1) /** * struct dev_pagemap - metadata for ZONE_DEVICE mappings diff --git a/include/linux/mm.h b/include/linux/mm.h index 16b799a0522c..9e845515ff15 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1141,6 +1141,49 @@ static inline bool is_pci_p2pdma_page(const struct page *page) page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA; } +#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION +DECLARE_STATIC_KEY_FALSE(dev_protection_static_key); + +/* + * We make page_is_access_protected() as quick as possible. + * 1) If no mappings have been enabled with extra protection we skip this + * entirely + * 2) Skip pages which are not ZONE_DEVICE + * 3) Only then check if this particular page was mapped with extra + * protections. + */ +static inline bool page_is_access_protected(struct page *page) +{ + if (!static_branch_unlikely(&dev_protection_static_key)) + return false; + if (!is_zone_device_page(page)) + return false; + if (page->pgmap->flags & PGMAP_PROT_ENABLED) + return true; + return false; +} + +void __dev_access_enable(bool global); +void __dev_access_disable(bool global); +static __always_inline void dev_access_enable(bool global) +{ + if (static_branch_unlikely(&dev_protection_static_key)) + __dev_access_enable(global); +} +static __always_inline void dev_access_disable(bool global) +{ + if (static_branch_unlikely(&dev_protection_static_key)) + __dev_access_disable(global); +} +#else +static inline bool page_is_access_protected(struct page *page) +{ + return false; +} +static inline void dev_access_enable(bool global) { } +static inline void dev_access_disable(bool global) { } +#endif /* CONFIG_ZONE_DEVICE_ACCESS_PROTECTION */ + /* 127: arbitrary random number, small enough to assemble well */ #define page_ref_zero_or_close_to_overflow(page) \ ((unsigned int) page_ref_count(page) + 127u <= 127u) diff --git a/include/linux/sched.h b/include/linux/sched.h index afe01e232935..25d97ab6c757 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1315,6 +1315,9 @@ struct task_struct { struct callback_head mce_kill_me; #endif +#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION + unsigned int dev_page_access_ref; +#endif /* * New fields for task_struct should be added above here, so that * they are included in the randomized portion of task_struct. 
diff --git a/init/init_task.c b/init/init_task.c index f6889fce64af..9b39f25de59b 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -209,6 +209,9 @@ struct task_struct init_task #ifdef CONFIG_SECCOMP .seccomp = { .filter_count = ATOMIC_INIT(0) }, #endif +#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION + .dev_page_access_ref = 0, +#endif }; EXPORT_SYMBOL(init_task); diff --git a/kernel/fork.c b/kernel/fork.c index da8d360fb032..b6a3ee328a89 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -940,6 +940,9 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) #ifdef CONFIG_MEMCG tsk->active_memcg = NULL; +#endif +#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION + tsk->dev_page_access_ref = 0; #endif return tsk; diff --git a/mm/Kconfig b/mm/Kconfig index 1b9bc004d9bc..01dd75720ae6 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -794,6 +794,19 @@ config ZONE_DEVICE If FS_DAX is enabled, then say Y. +config ZONE_DEVICE_ACCESS_PROTECTION + bool "Device memory access protection" + depends on ZONE_DEVICE + depends on ARCH_HAS_SUPERVISOR_PKEYS + + help + Enable the option of having access protections on device memory + areas. This protects against access to device memory which is not + intended such as stray writes. This feature is particularly useful + to protect against corruption of persistent memory. + + If in doubt, say 'Y'. + config DEV_PAGEMAP_OPS bool diff --git a/mm/memremap.c b/mm/memremap.c index fbfc79fd9c24..edad2aa0bd24 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -6,12 +6,16 @@ #include #include #include +#include #include #include #include #include #include #include +#include + +#define PKEY_INVALID (INT_MIN) static DEFINE_XARRAY(pgmap_array); @@ -67,6 +71,89 @@ static void devmap_managed_enable_put(void) } #endif /* CONFIG_DEV_PAGEMAP_OPS */ +#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION +/* + * Note; all devices which have asked for protections share the same key. The + * key may, or may not, have been provided by the core. If not, protection + * will remain disabled. The key acquisition is attempted at init time and + * never again. So we don't have to worry about dev_page_pkey changing. + */ +static int dev_page_pkey = PKEY_INVALID; +DEFINE_STATIC_KEY_FALSE(dev_protection_static_key); +EXPORT_SYMBOL(dev_protection_static_key); + +static pgprot_t dev_pgprot_get(struct dev_pagemap *pgmap, pgprot_t prot) +{ + if (pgmap->flags & PGMAP_PROT_ENABLED && dev_page_pkey != PKEY_INVALID) { + pgprotval_t val = pgprot_val(prot); + + static_branch_inc(&dev_protection_static_key); + prot = __pgprot(val | _PAGE_PKEY(dev_page_pkey)); + } + return prot; +} + +static void dev_pgprot_put(struct dev_pagemap *pgmap) +{ + if (pgmap->flags & PGMAP_PROT_ENABLED && dev_page_pkey != PKEY_INVALID) + static_branch_dec(&dev_protection_static_key); +} + +void __dev_access_disable(bool global) +{ + unsigned long flags; + + local_irq_save(flags); + if (!--current->dev_page_access_ref) + pks_mknoaccess(dev_page_pkey, global); + local_irq_restore(flags); +} +EXPORT_SYMBOL_GPL(__dev_access_disable); + +void __dev_access_enable(bool global) +{ + unsigned long flags; + + local_irq_save(flags); + /* 0 clears the PKEY_DISABLE_ACCESS bit, allowing access */ + if (!current->dev_page_access_ref++) + pks_mkrdwr(dev_page_pkey, global); + local_irq_restore(flags); +} +EXPORT_SYMBOL_GPL(__dev_access_enable); + +/** + * dev_access_protection_init: Configure a PKS key domain for device pages + * + * The domain defaults to the protected state. 
Device page mappings should set + * the PGMAP_PROT_ENABLED flag when mapping pages. + * + * Note the pkey is never free'ed. This is run at init time and we either get + * the key or we do not. We need to do this to maintian a constant key (or + * not) as device memory is added or removed. + */ +static int __init __dev_access_protection_init(void) +{ + int pkey = pks_key_alloc("Device Memory"); + + if (pkey < 0) + return 0; + + dev_page_pkey = pkey; + + return 0; +} +subsys_initcall(__dev_access_protection_init); +#else +static pgprot_t dev_pgprot_get(struct dev_pagemap *pgmap, pgprot_t prot) +{ + return prot; +} +static void dev_pgprot_put(struct dev_pagemap *pgmap) +{ +} +#endif /* CONFIG_ZONE_DEVICE_ACCESS_PROTECTION */ + static void pgmap_array_delete(struct resource *res) { xa_store_range(&pgmap_array, PHYS_PFN(res->start), PHYS_PFN(res->end), @@ -156,6 +243,7 @@ void memunmap_pages(struct dev_pagemap *pgmap) pgmap_array_delete(res); WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n"); devmap_managed_enable_put(); + dev_pgprot_put(pgmap); } EXPORT_SYMBOL_GPL(memunmap_pages); @@ -191,6 +279,8 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid) int error, is_ram; bool need_devmap_managed = true; + params.pgprot = dev_pgprot_get(pgmap, params.pgprot); + switch (pgmap->type) { case MEMORY_DEVICE_PRIVATE: if (!IS_ENABLED(CONFIG_DEVICE_PRIVATE)) { From patchwork Fri Oct 9 19:49:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284781 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 85ACBC2D0A4 for ; Fri, 9 Oct 2020 20:10:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4D4902067D for ; Fri, 9 Oct 2020 20:10:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391144AbgJIUKH (ORCPT ); Fri, 9 Oct 2020 16:10:07 -0400 Received: from mga11.intel.com ([192.55.52.93]:40406 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403832AbgJITvG (ORCPT ); Fri, 9 Oct 2020 15:51:06 -0400 IronPort-SDR: SD9nz5fTmWZyBOKnNyNhFLCkSRCkJ1a/XNJy/OvFfOQBQwdny4dR7EruCRRxmv68e4U8o8tfxx MKHZNHFKQmKA== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162067781" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162067781" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:03 -0700 IronPort-SDR: Zj7B8Ym6Opm+j7NgzILIxHvFMHLpJ+KeG5qMdRlGsqJzYlMhgHY+wciO+/VIwDrpca7gdal3tS LaUxL8Xa1x8w== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="343971995" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:02 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , 
Peter Zijlstra Cc: Ira Weiny , Randy Dunlap , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 05/58] kmap: Introduce k[un]map_thread Date: Fri, 9 Oct 2020 12:49:40 -0700 Message-Id: <20201009195033.3208459-6-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny To correctly support the semantics of kmap() with Kernel protection keys (PKS), kmap() may be required to set the protections on multiple processors (globally). Enabling PKS globally can be very expensive depending on the requested operation. Furthermore, enabling a domain globally reduces the protection afforded by PKS. Most kmap() (Aprox 209 of 229) callers use the map within a single thread and have no need for the protection domain to be enabled globally. However, the remaining callers do not follow this pattern and, as best I can tell, expect the mapping to be 'global' and available to any thread who may access the mapping.[1] We don't anticipate global mappings to pmem, however in general there is a danger in changing the semantics of kmap(). Effectively, this would cause an unresolved page fault with little to no information about why the failure occurred. To resolve this a number of options were considered. 1) Attempt to change all the thread local kmap() calls to kmap_atomic()[2] 2) Introduce a flags parameter to kmap() to indicate if the mapping should be global or not 3) Change ~20 call sites to 'kmap_global()' to indicate that they require a global enablement of the pages. 4) Change ~209 call sites to 'kmap_thread()' to indicate that the mapping is to be used within that thread of execution only Option 1 is simply not feasible. Option 2 would require all of the call sites of kmap() to change. Option 3 seems like a good minimal change but there is a danger that new code may miss the semantic change of kmap() and not get the behavior the developer intended. Therefore, #4 was chosen. 
Subsequent patches will convert most ~90% of the kmap callers to this new call leaving about 10% of the existing kmap callers to enable PKS globally. Cc: Randy Dunlap Signed-off-by: Ira Weiny --- include/linux/highmem.h | 34 ++++++++++++++++++++++++++-------- 1 file changed, 26 insertions(+), 8 deletions(-) diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 2a9806e3b8d2..ef7813544719 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -60,7 +60,7 @@ static inline void kmap_flush_tlb(unsigned long addr) { } #endif void *kmap_high(struct page *page); -static inline void *kmap(struct page *page) +static inline void *__kmap(struct page *page, bool global) { void *addr; @@ -74,20 +74,20 @@ static inline void *kmap(struct page *page) * Even non-highmem pages may have additional access protections which * need to be checked and potentially enabled. */ - dev_page_enable_access(page, true); + dev_page_enable_access(page, global); return addr; } void kunmap_high(struct page *page); -static inline void kunmap(struct page *page) +static inline void __kunmap(struct page *page, bool global) { might_sleep(); /* * Even non-highmem pages may have additional access protections which * need to be checked and potentially disabled. */ - dev_page_disable_access(page, true); + dev_page_disable_access(page, global); if (!PageHighMem(page)) return; kunmap_high(page); @@ -160,10 +160,10 @@ static inline struct page *kmap_to_page(void *addr) static inline unsigned long totalhigh_pages(void) { return 0UL; } -static inline void *kmap(struct page *page) +static inline void *__kmap(struct page *page, bool global) { might_sleep(); - dev_page_enable_access(page, true); + dev_page_enable_access(page, global); return page_address(page); } @@ -171,9 +171,9 @@ static inline void kunmap_high(struct page *page) { } -static inline void kunmap(struct page *page) +static inline void __kunmap(struct page *page, bool global) { - dev_page_disable_access(page, true); + dev_page_disable_access(page, global); #ifdef ARCH_HAS_FLUSH_ON_KUNMAP kunmap_flush_on_unmap(page_address(page)); #endif @@ -238,6 +238,24 @@ static inline void kmap_atomic_idx_pop(void) #endif +static inline void *kmap(struct page *page) +{ + return __kmap(page, true); +} +static inline void kunmap(struct page *page) +{ + __kunmap(page, true); +} + +static inline void *kmap_thread(struct page *page) +{ + return __kmap(page, false); +} +static inline void kunmap_thread(struct page *page) +{ + __kunmap(page, false); +} + /* * Prevent people trying to call kunmap_atomic() as if it were kunmap() * kunmap_atomic() should get the return value of kmap_atomic, not the page. 
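The intended usage split is easiest to see with a small, hypothetical caller (not part of this series; the helper name and its memcpy-based body are made up for illustration). Code that only touches a mapping from the current thread of execution would use the new thread-local variants, avoiding the global PKRS update that kmap()/kunmap() now imply:

	/* Sketch only: assumes <linux/highmem.h> providing the kmap_thread() API above. */
	static void copy_from_page_local(struct page *page, void *dst,
					 size_t offset, size_t len)
	{
		char *vaddr = kmap_thread(page);	/* thread-local access enable */

		memcpy(dst, vaddr + offset, len);
		kunmap_thread(page);			/* thread-local access disable */
	}

A caller that hands the mapped address to another context (another thread, or an interrupt handler) would keep using kmap()/kunmap() and accept the cost of the global update, as the remaining ~10% of call sites in the series continue to do.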
From patchwork Fri Oct 9 19:49:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284806 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 526BBC832DF for ; Fri, 9 Oct 2020 19:52:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1D9432242A for ; Fri, 9 Oct 2020 19:52:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2403977AbgJITvp (ORCPT ); Fri, 9 Oct 2020 15:51:45 -0400 Received: from mga17.intel.com ([192.55.52.151]:33909 "EHLO mga17.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403811AbgJITvV (ORCPT ); Fri, 9 Oct 2020 15:51:21 -0400 IronPort-SDR: gAinJeSUNfpg+wR4xJTZtqPlJf7JMSP7H+uP85nCsIR2xz1K+nAlaJyBRxijlJk7TheVMDYWX9 xQ4cQk3mxjQA== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="145397269" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="145397269" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:19 -0700 IronPort-SDR: 5T2OCVzeHkG6wr08UcIl5CMQca5a2EWHDQvUbsMI5eegPrFwWedcb1Yb3fuBoEZmXZa2FGZF8Z Z3c1XDGR3/oA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="354958923" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:19 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , David Airlie , Daniel Vetter , Patrik Jakobsson , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, 
intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 09/58] drivers/gpu: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:44 -0700 Message-Id: <20201009195033.3208459-10-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny These kmap() calls in the gpu stack are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: David Airlie Cc: Daniel Vetter Cc: Patrik Jakobsson Signed-off-by: Ira Weiny --- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 12 ++++++------ drivers/gpu/drm/gma500/gma_display.c | 4 ++-- drivers/gpu/drm/gma500/mmu.c | 10 +++++----- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 4 ++-- .../gpu/drm/i915/gem/selftests/i915_gem_context.c | 4 ++-- drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 8 ++++---- drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c | 4 ++-- drivers/gpu/drm/i915/gt/intel_gtt.c | 4 ++-- drivers/gpu/drm/i915/gt/shmem_utils.c | 4 ++-- drivers/gpu/drm/i915/i915_gem.c | 8 ++++---- drivers/gpu/drm/i915/i915_gpu_error.c | 4 ++-- drivers/gpu/drm/i915/selftests/i915_perf.c | 4 ++-- drivers/gpu/drm/radeon/radeon_ttm.c | 4 ++-- 13 files changed, 37 insertions(+), 37 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index 978bae731398..bd564bccb7a3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -2437,11 +2437,11 @@ static ssize_t amdgpu_ttm_gtt_read(struct file *f, char __user *buf, page = adev->gart.pages[p]; if (page) { - ptr = kmap(page); + ptr = kmap_thread(page); ptr += off; r = copy_to_user(buf, ptr, cur_size); - kunmap(adev->gart.pages[p]); + kunmap_thread(adev->gart.pages[p]); } else r = clear_user(buf, cur_size); @@ -2507,9 +2507,9 @@ static ssize_t amdgpu_iomem_read(struct file *f, char __user *buf, if (p->mapping != adev->mman.bdev.dev_mapping) return -EPERM; - ptr = kmap(p); + ptr = kmap_thread(p); r = copy_to_user(buf, ptr + off, bytes); - kunmap(p); + kunmap_thread(p); if (r) return -EFAULT; @@ -2558,9 +2558,9 @@ static ssize_t amdgpu_iomem_write(struct file *f, const char __user *buf, if (p->mapping != adev->mman.bdev.dev_mapping) return -EPERM; - ptr = kmap(p); + ptr = kmap_thread(p); r = copy_from_user(ptr + off, buf, bytes); - kunmap(p); + kunmap_thread(p); if (r) return -EFAULT; diff --git a/drivers/gpu/drm/gma500/gma_display.c b/drivers/gpu/drm/gma500/gma_display.c index 3df6d6e850f5..35f4e55c941f 100644 --- a/drivers/gpu/drm/gma500/gma_display.c +++ b/drivers/gpu/drm/gma500/gma_display.c @@ -400,9 +400,9 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc, /* Copy the cursor to cursor mem */ tmp_dst = dev_priv->vram_addr + cursor_gt->offset; for (i = 0; i < cursor_pages; i++) { - tmp_src = kmap(gt->pages[i]); + tmp_src = kmap_thread(gt->pages[i]); memcpy(tmp_dst, tmp_src, PAGE_SIZE); - kunmap(gt->pages[i]); + kunmap_thread(gt->pages[i]); tmp_dst += PAGE_SIZE; } diff --git a/drivers/gpu/drm/gma500/mmu.c b/drivers/gpu/drm/gma500/mmu.c index 505044c9a673..fba7a3a461fd 100644 --- a/drivers/gpu/drm/gma500/mmu.c +++ b/drivers/gpu/drm/gma500/mmu.c @@ -192,20 +192,20 @@ struct psb_mmu_pd *psb_mmu_alloc_pd(struct psb_mmu_driver *driver, pd->invalid_pte = 0; } - v = kmap(pd->dummy_pt); + v = kmap_thread(pd->dummy_pt); for (i = 0; i < (PAGE_SIZE / 
sizeof(uint32_t)); ++i) v[i] = pd->invalid_pte; - kunmap(pd->dummy_pt); + kunmap_thread(pd->dummy_pt); - v = kmap(pd->p); + v = kmap_thread(pd->p); for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i) v[i] = pd->invalid_pde; - kunmap(pd->p); + kunmap_thread(pd->p); clear_page(kmap(pd->dummy_page)); - kunmap(pd->dummy_page); + kunmap_thread(pd->dummy_page); pd->tables = vmalloc_user(sizeof(struct psb_mmu_pt *) * 1024); if (!pd->tables) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 38113d3c0138..274424795fb7 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -566,9 +566,9 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv, if (err < 0) goto fail; - vaddr = kmap(page); + vaddr = kmap_thread(page); memcpy(vaddr, data, len); - kunmap(page); + kunmap_thread(page); err = pagecache_write_end(file, file->f_mapping, offset, len, len, diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c index 7ffc3c751432..b466c677d007 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c @@ -1754,7 +1754,7 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out) return -EINVAL; } - vaddr = kmap(page); + vaddr = kmap_thread(page); if (!vaddr) { pr_err("No (mappable) scratch page!\n"); return -EINVAL; @@ -1765,7 +1765,7 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out) pr_err("Inconsistent initial state of scratch page!\n"); err = -EINVAL; } - kunmap(page); + kunmap_thread(page); return err; } diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index 9c7402ce5bf9..447df22e2e06 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -143,7 +143,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj, intel_gt_flush_ggtt_writes(&to_i915(obj->base.dev)->gt); p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT); - cpu = kmap(p) + offset_in_page(offset); + cpu = kmap_thread(p) + offset_in_page(offset); drm_clflush_virt_range(cpu, sizeof(*cpu)); if (*cpu != (u32)page) { pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n", @@ -161,7 +161,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj, } *cpu = 0; drm_clflush_virt_range(cpu, sizeof(*cpu)); - kunmap(p); + kunmap_thread(p); out: __i915_vma_put(vma); @@ -236,7 +236,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj, intel_gt_flush_ggtt_writes(&to_i915(obj->base.dev)->gt); p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT); - cpu = kmap(p) + offset_in_page(offset); + cpu = kmap_thread(p) + offset_in_page(offset); drm_clflush_virt_range(cpu, sizeof(*cpu)); if (*cpu != (u32)page) { pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n", @@ -254,7 +254,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj, } *cpu = 0; drm_clflush_virt_range(cpu, sizeof(*cpu)); - kunmap(p); + kunmap_thread(p); if (err) return err; diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c 
b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c index 7fb36b12fe7a..38da348282f1 100644 --- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c +++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c @@ -731,7 +731,7 @@ static void swizzle_page(struct page *page) char *vaddr; int i; - vaddr = kmap(page); + vaddr = kmap_thread(page); for (i = 0; i < PAGE_SIZE; i += 128) { memcpy(temp, &vaddr[i], 64); @@ -739,7 +739,7 @@ static void swizzle_page(struct page *page) memcpy(&vaddr[i + 64], temp, 64); } - kunmap(page); + kunmap_thread(page); } /** diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c index 2a72cce63fd9..4cfb24e9ed62 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.c +++ b/drivers/gpu/drm/i915/gt/intel_gtt.c @@ -312,9 +312,9 @@ static void poison_scratch_page(struct page *page, unsigned long size) do { void *vaddr; - vaddr = kmap(page); + vaddr = kmap_thread(page); memset(vaddr, POISON_FREE, PAGE_SIZE); - kunmap(page); + kunmap_thread(page); page = pfn_to_page(page_to_pfn(page) + 1); size -= PAGE_SIZE; diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c index 43c7acbdc79d..a40d3130cebf 100644 --- a/drivers/gpu/drm/i915/gt/shmem_utils.c +++ b/drivers/gpu/drm/i915/gt/shmem_utils.c @@ -142,12 +142,12 @@ static int __shmem_rw(struct file *file, loff_t off, if (IS_ERR(page)) return PTR_ERR(page); - vaddr = kmap(page); + vaddr = kmap_thread(page); if (write) memcpy(vaddr + offset_in_page(off), ptr, this); else memcpy(ptr, vaddr + offset_in_page(off), this); - kunmap(page); + kunmap_thread(page); put_page(page); len -= this; diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 9aa3066cb75d..cae8300fd224 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -312,14 +312,14 @@ shmem_pread(struct page *page, int offset, int len, char __user *user_data, char *vaddr; int ret; - vaddr = kmap(page); + vaddr = kmap_thread(page); if (needs_clflush) drm_clflush_virt_range(vaddr + offset, len); ret = __copy_to_user(user_data, vaddr + offset, len); - kunmap(page); + kunmap_thread(page); return ret ? -EFAULT : 0; } @@ -708,7 +708,7 @@ shmem_pwrite(struct page *page, int offset, int len, char __user *user_data, char *vaddr; int ret; - vaddr = kmap(page); + vaddr = kmap_thread(page); if (needs_clflush_before) drm_clflush_virt_range(vaddr + offset, len); @@ -717,7 +717,7 @@ shmem_pwrite(struct page *page, int offset, int len, char __user *user_data, if (!ret && needs_clflush_after) drm_clflush_virt_range(vaddr + offset, len); - kunmap(page); + kunmap_thread(page); return ret ? 
-EFAULT : 0; } diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c index 3e6cbb0d1150..aecd469b6b6e 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.c +++ b/drivers/gpu/drm/i915/i915_gpu_error.c @@ -1058,9 +1058,9 @@ i915_vma_coredump_create(const struct intel_gt *gt, drm_clflush_pages(&page, 1); - s = kmap(page); + s = kmap_thread(page); ret = compress_page(compress, s, dst, false); - kunmap(page); + kunmap_thread(page); drm_clflush_pages(&page, 1); diff --git a/drivers/gpu/drm/i915/selftests/i915_perf.c b/drivers/gpu/drm/i915/selftests/i915_perf.c index c2d001d9c0ec..7f7ef2d056f4 100644 --- a/drivers/gpu/drm/i915/selftests/i915_perf.c +++ b/drivers/gpu/drm/i915/selftests/i915_perf.c @@ -307,7 +307,7 @@ static int live_noa_gpr(void *arg) } /* Poison the ce->vm so we detect writes not to the GGTT gt->scratch */ - scratch = kmap(ce->vm->scratch[0].base.page); + scratch = kmap_thread(ce->vm->scratch[0].base.page); memset(scratch, POISON_FREE, PAGE_SIZE); rq = intel_context_create_request(ce); @@ -405,7 +405,7 @@ static int live_noa_gpr(void *arg) out_rq: i915_request_put(rq); out_ce: - kunmap(ce->vm->scratch[0].base.page); + kunmap_thread(ce->vm->scratch[0].base.page); intel_context_put(ce); out: stream_destroy(stream); diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index 004344dce140..0aba0cac51e1 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -1013,11 +1013,11 @@ static ssize_t radeon_ttm_gtt_read(struct file *f, char __user *buf, page = rdev->gart.pages[p]; if (page) { - ptr = kmap(page); + ptr = kmap_thread(page); ptr += off; r = copy_to_user(buf, ptr, cur_size); - kunmap(rdev->gart.pages[p]); + kunmap_thread(rdev->gart.pages[p]); } else r = clear_user(buf, cur_size); From patchwork Fri Oct 9 19:49:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284807 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F3E41C8302F for ; Fri, 9 Oct 2020 19:51:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A0F172231B for ; Fri, 9 Oct 2020 19:51:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2403966AbgJITvn (ORCPT ); Fri, 9 Oct 2020 15:51:43 -0400 Received: from mga02.intel.com ([134.134.136.20]:57597 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403912AbgJITvZ (ORCPT ); Fri, 9 Oct 2020 15:51:25 -0400 IronPort-SDR: Buki3qs6NBvS3hmZtQwJlqZQICcMzBGBfXe0E6UxKErEylTOKT2moJBC37CGA2p4eQB2eod5/G Jwp65DGsuMYA== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="152450817" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152450817" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:24 -0700 IronPort-SDR: 
XjNk9w0mzDhi5BMigEI5d3C8W/iIq4vdRrLZr3Ki7urqpxKKtyqLLY7j1Bq5ukpJe/vk25ySpH cq+qGeUOhHow== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="355862741" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:22 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Mike Marciniszyn , Dennis Dalessandro , Doug Ledford , Jason Gunthorpe , Faisal Latif , Shiraz Saleem , Bernard Metzler , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 10/58] drivers/rdma: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:45 -0700 Message-Id: <20201009195033.3208459-11-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in these drivers are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
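The conversions below follow the same discipline when a copy spans a page
boundary: each page is mapped, used, and unmapped within the same thread
before the next page is touched. A minimal sketch, with a hypothetical
copy_span_example() helper that is not part of the patch and assumes
off < PAGE_SIZE:

#include <linux/errno.h>
#include <linux/highmem.h>
#include <linux/kernel.h>
#include <linux/string.h>

/* Hypothetical helper: copy 'len' bytes starting at 'off' within 'page',
 * spilling into 'next' when the range crosses a page boundary.  Every
 * kmap_thread() is matched by a kunmap_thread() on the same page.
 */
static int copy_span_example(struct page *page, struct page *next,
                             unsigned int off, unsigned int len, char *dst)
{
        unsigned int first = min_t(unsigned int, len, PAGE_SIZE - off);
        char *vaddr;

        vaddr = kmap_thread(page);
        memcpy(dst, vaddr + off, first);
        kunmap_thread(page);

        if (first == len)
                return 0;

        if (!next)
                return -EFAULT;

        vaddr = kmap_thread(next);
        memcpy(dst + first, vaddr, len - first);
        kunmap_thread(next);
        return 0;
}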
Cc: Mike Marciniszyn Cc: Dennis Dalessandro Cc: Doug Ledford Cc: Jason Gunthorpe Cc: Faisal Latif Cc: Shiraz Saleem Cc: Bernard Metzler Signed-off-by: Ira Weiny --- drivers/infiniband/hw/hfi1/sdma.c | 4 ++-- drivers/infiniband/hw/i40iw/i40iw_cm.c | 10 +++++----- drivers/infiniband/sw/siw/siw_qp_tx.c | 14 +++++++------- 3 files changed, 14 insertions(+), 14 deletions(-) diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c index 04575c9afd61..09d206e3229a 100644 --- a/drivers/infiniband/hw/hfi1/sdma.c +++ b/drivers/infiniband/hw/hfi1/sdma.c @@ -3130,7 +3130,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx, } if (type == SDMA_MAP_PAGE) { - kvaddr = kmap(page); + kvaddr = kmap_thread(page); kvaddr += offset; } else if (WARN_ON(!kvaddr)) { __sdma_txclean(dd, tx); @@ -3140,7 +3140,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx, memcpy(tx->coalesce_buf + tx->coalesce_idx, kvaddr, len); tx->coalesce_idx += len; if (type == SDMA_MAP_PAGE) - kunmap(page); + kunmap_thread(page); /* If there is more data, return */ if (tx->tlen - tx->coalesce_idx) diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c index a3b95805c154..122d7a5642a1 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_cm.c +++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c @@ -3721,7 +3721,7 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) ibmr->device = iwpd->ibpd.device; iwqp->lsmm_mr = ibmr; if (iwqp->page) - iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page); + iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page); dev->iw_priv_qp_ops->qp_send_lsmm(&iwqp->sc_qp, iwqp->ietf_mem.va, (accept.size + conn_param->private_data_len), @@ -3729,12 +3729,12 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) } else { if (iwqp->page) - iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page); + iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page); dev->iw_priv_qp_ops->qp_send_lsmm(&iwqp->sc_qp, NULL, 0, 0); } if (iwqp->page) - kunmap(iwqp->page); + kunmap_thread(iwqp->page); iwqp->cm_id = cm_id; cm_node->cm_id = cm_id; @@ -4102,10 +4102,10 @@ static void i40iw_cm_event_connected(struct i40iw_cm_event *event) i40iw_cm_init_tsa_conn(iwqp, cm_node); read0 = (cm_node->send_rdma0_op == SEND_RDMA_READ_ZERO); if (iwqp->page) - iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page); + iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page); dev->iw_priv_qp_ops->qp_send_rtt(&iwqp->sc_qp, read0); if (iwqp->page) - kunmap(iwqp->page); + kunmap_thread(iwqp->page); memset(&attr, 0, sizeof(attr)); attr.qp_state = IB_QPS_RTS; diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c index d19d8325588b..4ed37c328d02 100644 --- a/drivers/infiniband/sw/siw/siw_qp_tx.c +++ b/drivers/infiniband/sw/siw/siw_qp_tx.c @@ -76,7 +76,7 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr) if (unlikely(!p)) return -EFAULT; - buffer = kmap(p); + buffer = kmap_thread(p); if (likely(PAGE_SIZE - off >= bytes)) { memcpy(paddr, buffer + off, bytes); @@ -84,7 +84,7 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr) unsigned long part = bytes - (PAGE_SIZE - off); memcpy(paddr, buffer + off, part); - kunmap(p); + kunmap_thread(p); if (!mem->is_pbl) p = siw_get_upage(mem->umem, @@ -96,10 +96,10 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr) if (unlikely(!p)) return -EFAULT; - buffer = kmap(p); + buffer = kmap_thread(p); memcpy(paddr + part, buffer, bytes - 
part); } - kunmap(p); + kunmap_thread(p); } } return (int)bytes; @@ -505,7 +505,7 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s) page_array[seg] = p; if (!c_tx->use_sendpage) { - iov[seg].iov_base = kmap(p) + fp_off; + iov[seg].iov_base = kmap_thread(p) + fp_off; iov[seg].iov_len = plen; /* Remember for later kunmap() */ @@ -518,9 +518,9 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s) plen); } else if (do_crc) { crypto_shash_update(c_tx->mpa_crc_hd, - kmap(p) + fp_off, + kmap_thread(p) + fp_off, plen); - kunmap(p); + kunmap_thread(p); } } else { u64 va = sge->laddr + sge_off; From patchwork Fri Oct 9 19:49:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284782 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8BACC433E7 for ; Fri, 9 Oct 2020 20:09:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 98BB020659 for ; Fri, 9 Oct 2020 20:09:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391518AbgJIUIw (ORCPT ); Fri, 9 Oct 2020 16:08:52 -0400 Received: from mga01.intel.com ([192.55.52.88]:3555 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403946AbgJITvb (ORCPT ); Fri, 9 Oct 2020 15:51:31 -0400 IronPort-SDR: 0M+PJwZD1jFuMQocaU6wMT6hcydV5DguoJbJA0ETTZLrD7/zXfBuqvEBCFLTliVirEmY2c0PEc qHvKAiR1KIcg== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976115" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="182976115" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:27 -0700 IronPort-SDR: zNYl6viNpTmcuoOUFeW6ebDlF6mcWw7xHrLDRpkX9eSJEFyww70EqY5cZFdXSl9ODsqjoVqJW/ EpbiFUFW5goQ== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="329006240" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:25 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , "David S. 
Miller" , Jakub Kicinski , Jesse Brandeburg , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 11/58] drivers/net: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:46 -0700 Message-Id: <20201009195033.3208459-12-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in these drivers are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: "David S. 
Miller" Cc: Jakub Kicinski Cc: Jesse Brandeburg Signed-off-by: Ira Weiny --- drivers/net/ethernet/intel/igb/igb_ethtool.c | 4 ++-- drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c index 6e8231c1ddf0..ac9189752012 100644 --- a/drivers/net/ethernet/intel/igb/igb_ethtool.c +++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c @@ -1794,14 +1794,14 @@ static int igb_check_lbtest_frame(struct igb_rx_buffer *rx_buffer, frame_size >>= 1; - data = kmap(rx_buffer->page); + data = kmap_thread(rx_buffer->page); if (data[3] != 0xFF || data[frame_size + 10] != 0xBE || data[frame_size + 12] != 0xAF) match = false; - kunmap(rx_buffer->page); + kunmap_thread(rx_buffer->page); return match; } diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c index 71ec908266a6..7d469425f8b4 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c @@ -1963,14 +1963,14 @@ static bool ixgbe_check_lbtest_frame(struct ixgbe_rx_buffer *rx_buffer, frame_size >>= 1; - data = kmap(rx_buffer->page) + rx_buffer->page_offset; + data = kmap_thread(rx_buffer->page) + rx_buffer->page_offset; if (data[3] != 0xFF || data[frame_size + 10] != 0xBE || data[frame_size + 12] != 0xAF) match = false; - kunmap(rx_buffer->page); + kunmap_thread(rx_buffer->page); return match; } From patchwork Fri Oct 9 19:49:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284783 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D2C67C4363A for ; Fri, 9 Oct 2020 20:07:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8D35E2053B for ; Fri, 9 Oct 2020 20:07:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391541AbgJIUHf (ORCPT ); Fri, 9 Oct 2020 16:07:35 -0400 Received: from mga12.intel.com ([192.55.52.136]:29143 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403958AbgJITve (ORCPT ); Fri, 9 Oct 2020 15:51:34 -0400 IronPort-SDR: uaD/tDmFHK22veBGR3YzL5VTBhxNLwUX5oCzX0P1SaQEJowj8ybTYyBX2T5ywdU6U+5NcEf+EP CKZez4YOWWdA== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="144850690" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="144850690" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:33 -0700 IronPort-SDR: HIW7PNpXrID3gOC4BJBiMK8gMhcX7pYwx4FKSDabG5rp79Jg8ur1teAkpcUK61qJFFQS9ID+ws 9ehpgS1/cOeQ== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="349957192" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 
12:51:33 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Chris Mason , Josef Bacik , David Sterba , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 13/58] fs/btrfs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:48 -0700 Message-Id: <20201009195033.3208459-14-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
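The btrfs compression and decompression loops below all share one shape: grab
a page from the mapping, map it for this thread, consume or fill it, then drop
the mapping (and the page reference) before moving on or bailing out. A rough
sketch of that shape, using a hypothetical walk_pages_example() helper whose
names are not from the patch:

#include <linux/errno.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>

/* Hypothetical helper: visit each page of a range, mapping it only for
 * the duration of the callback.  The thread-local mapping is always
 * released before the next find_get_page() and on every exit path.
 */
static int walk_pages_example(struct address_space *mapping, pgoff_t index,
                              pgoff_t end, int (*process)(void *data))
{
        int ret = 0;

        while (index < end) {
                struct page *page = find_get_page(mapping, index);
                void *data;

                if (!page)
                        return -ENOENT;

                data = kmap_thread(page);
                ret = process(data);
                kunmap_thread(page);
                put_page(page);

                if (ret)
                        break;
                index++;
        }
        return ret;
}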
Cc: Chris Mason Cc: Josef Bacik Cc: David Sterba Signed-off-by: Ira Weiny --- fs/btrfs/check-integrity.c | 4 ++-- fs/btrfs/compression.c | 4 ++-- fs/btrfs/inode.c | 16 ++++++++-------- fs/btrfs/lzo.c | 24 ++++++++++++------------ fs/btrfs/raid56.c | 34 +++++++++++++++++----------------- fs/btrfs/reflink.c | 8 ++++---- fs/btrfs/send.c | 4 ++-- fs/btrfs/zlib.c | 32 ++++++++++++++++---------------- fs/btrfs/zstd.c | 20 ++++++++++---------- 9 files changed, 73 insertions(+), 73 deletions(-) diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c index 81a8c87a5afb..9e5a02512ab5 100644 --- a/fs/btrfs/check-integrity.c +++ b/fs/btrfs/check-integrity.c @@ -2706,7 +2706,7 @@ static void __btrfsic_submit_bio(struct bio *bio) bio_for_each_segment(bvec, bio, iter) { BUG_ON(bvec.bv_len != PAGE_SIZE); - mapped_datav[i] = kmap(bvec.bv_page); + mapped_datav[i] = kmap_thread(bvec.bv_page); i++; if (dev_state->state->print_mask & @@ -2720,7 +2720,7 @@ static void __btrfsic_submit_bio(struct bio *bio) bio, &bio_is_patched, bio->bi_opf); bio_for_each_segment(bvec, bio, iter) - kunmap(bvec.bv_page); + kunmap_thread(bvec.bv_page); kfree(mapped_datav); } else if (NULL != dev_state && (bio->bi_opf & REQ_PREFLUSH)) { if (dev_state->state->print_mask & diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 1ab56a734e70..5944fb36d68a 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -1626,7 +1626,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end, curr_sample_pos = 0; while (index < index_end) { page = find_get_page(inode->i_mapping, index); - in_data = kmap(page); + in_data = kmap_thread(page); /* Handle case where the start is not aligned to PAGE_SIZE */ i = start % PAGE_SIZE; while (i < PAGE_SIZE - SAMPLING_READ_SIZE) { @@ -1639,7 +1639,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end, start += SAMPLING_INTERVAL; curr_sample_pos += SAMPLING_READ_SIZE; } - kunmap(page); + kunmap_thread(page); put_page(page); index++; diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 9570458aa847..9710a52c6c42 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -4603,7 +4603,7 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len, if (offset != blocksize) { if (!len) len = blocksize - offset; - kaddr = kmap(page); + kaddr = kmap_thread(page); if (front) memset(kaddr + (block_start - page_offset(page)), 0, offset); @@ -4611,7 +4611,7 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len, memset(kaddr + (block_start - page_offset(page)) + offset, 0, len); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); } ClearPageChecked(page); set_page_dirty(page); @@ -6509,9 +6509,9 @@ static noinline int uncompress_inline(struct btrfs_path *path, */ if (max_size + pg_offset < PAGE_SIZE) { - char *map = kmap(page); + char *map = kmap_thread(page); memset(map + pg_offset + max_size, 0, PAGE_SIZE - max_size - pg_offset); - kunmap(page); + kunmap_thread(page); } kfree(tmp); return ret; @@ -6704,7 +6704,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode, goto out; } } else { - map = kmap(page); + map = kmap_thread(page); read_extent_buffer(leaf, map + pg_offset, ptr, copy_size); if (pg_offset + copy_size < PAGE_SIZE) { @@ -6712,7 +6712,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode, PAGE_SIZE - pg_offset - copy_size); } - kunmap(page); + kunmap_thread(page); } flush_dcache_page(page); } @@ -8326,10 +8326,10 @@ vm_fault_t btrfs_page_mkwrite(struct 
vm_fault *vmf) zero_start = PAGE_SIZE; if (zero_start != PAGE_SIZE) { - kaddr = kmap(page); + kaddr = kmap_thread(page); memset(kaddr + zero_start, 0, PAGE_SIZE - zero_start); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); } ClearPageChecked(page); set_page_dirty(page); diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c index aa9cd11f4b78..f29dcc9ec573 100644 --- a/fs/btrfs/lzo.c +++ b/fs/btrfs/lzo.c @@ -140,7 +140,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, *total_in = 0; in_page = find_get_page(mapping, start >> PAGE_SHIFT); - data_in = kmap(in_page); + data_in = kmap_thread(in_page); /* * store the size of all chunks of compressed data in @@ -151,7 +151,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, ret = -ENOMEM; goto out; } - cpage_out = kmap(out_page); + cpage_out = kmap_thread(out_page); out_offset = LZO_LEN; tot_out = LZO_LEN; pages[0] = out_page; @@ -209,7 +209,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, if (out_len == 0 && tot_in >= len) break; - kunmap(out_page); + kunmap_thread(out_page); if (nr_pages == nr_dest_pages) { out_page = NULL; ret = -E2BIG; @@ -221,7 +221,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, ret = -ENOMEM; goto out; } - cpage_out = kmap(out_page); + cpage_out = kmap_thread(out_page); pages[nr_pages++] = out_page; pg_bytes_left = PAGE_SIZE; @@ -243,12 +243,12 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, break; bytes_left = len - tot_in; - kunmap(in_page); + kunmap_thread(in_page); put_page(in_page); start += PAGE_SIZE; in_page = find_get_page(mapping, start >> PAGE_SHIFT); - data_in = kmap(in_page); + data_in = kmap_thread(in_page); in_len = min(bytes_left, PAGE_SIZE); } @@ -258,10 +258,10 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, } /* store the size of all chunks of compressed data */ - cpage_out = kmap(pages[0]); + cpage_out = kmap_thread(pages[0]); write_compress_length(cpage_out, tot_out); - kunmap(pages[0]); + kunmap_thread(pages[0]); ret = 0; *total_out = tot_out; @@ -269,10 +269,10 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, out: *out_pages = nr_pages; if (out_page) - kunmap(out_page); + kunmap_thread(out_page); if (in_page) { - kunmap(in_page); + kunmap_thread(in_page); put_page(in_page); } @@ -305,7 +305,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) u64 disk_start = cb->start; struct bio *orig_bio = cb->orig_bio; - data_in = kmap(pages_in[0]); + data_in = kmap_thread(pages_in[0]); tot_len = read_compress_length(data_in); /* * Compressed data header check. 
@@ -387,7 +387,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) else kunmap(pages_in[page_in_index]); - data_in = kmap(pages_in[++page_in_index]); + data_in = kmap_thread(pages_in[++page_in_index]); in_page_bytes_left = PAGE_SIZE; in_offset = 0; diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c index 255490f42b5d..34e646e4548c 100644 --- a/fs/btrfs/raid56.c +++ b/fs/btrfs/raid56.c @@ -262,13 +262,13 @@ static void cache_rbio_pages(struct btrfs_raid_bio *rbio) if (!rbio->bio_pages[i]) continue; - s = kmap(rbio->bio_pages[i]); - d = kmap(rbio->stripe_pages[i]); + s = kmap_thread(rbio->bio_pages[i]); + d = kmap_thread(rbio->stripe_pages[i]); copy_page(d, s); - kunmap(rbio->bio_pages[i]); - kunmap(rbio->stripe_pages[i]); + kunmap_thread(rbio->bio_pages[i]); + kunmap_thread(rbio->stripe_pages[i]); SetPageUptodate(rbio->stripe_pages[i]); } set_bit(RBIO_CACHE_READY_BIT, &rbio->flags); @@ -1241,13 +1241,13 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio) /* first collect one page from each data stripe */ for (stripe = 0; stripe < nr_data; stripe++) { p = page_in_rbio(rbio, stripe, pagenr, 0); - pointers[stripe] = kmap(p); + pointers[stripe] = kmap_thread(p); } /* then add the parity stripe */ p = rbio_pstripe_page(rbio, pagenr); SetPageUptodate(p); - pointers[stripe++] = kmap(p); + pointers[stripe++] = kmap_thread(p); if (has_qstripe) { @@ -1257,7 +1257,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio) */ p = rbio_qstripe_page(rbio, pagenr); SetPageUptodate(p); - pointers[stripe++] = kmap(p); + pointers[stripe++] = kmap_thread(p); raid6_call.gen_syndrome(rbio->real_stripes, PAGE_SIZE, pointers); @@ -1269,7 +1269,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio) for (stripe = 0; stripe < rbio->real_stripes; stripe++) - kunmap(page_in_rbio(rbio, stripe, pagenr, 0)); + kunmap_thread(page_in_rbio(rbio, stripe, pagenr, 0)); } /* @@ -1835,7 +1835,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio) } else { page = rbio_stripe_page(rbio, stripe, pagenr); } - pointers[stripe] = kmap(page); + pointers[stripe] = kmap_thread(page); } /* all raid6 handling here */ @@ -1940,7 +1940,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio) } else { page = rbio_stripe_page(rbio, stripe, pagenr); } - kunmap(page); + kunmap_thread(page); } } @@ -2379,18 +2379,18 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio, /* first collect one page from each data stripe */ for (stripe = 0; stripe < nr_data; stripe++) { p = page_in_rbio(rbio, stripe, pagenr, 0); - pointers[stripe] = kmap(p); + pointers[stripe] = kmap_thread(p); } /* then add the parity stripe */ - pointers[stripe++] = kmap(p_page); + pointers[stripe++] = kmap_thread(p_page); if (has_qstripe) { /* * raid6, add the qstripe and call the * library function to fill in our p/q */ - pointers[stripe++] = kmap(q_page); + pointers[stripe++] = kmap_thread(q_page); raid6_call.gen_syndrome(rbio->real_stripes, PAGE_SIZE, pointers); @@ -2402,17 +2402,17 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio, /* Check scrubbing parity and repair it */ p = rbio_stripe_page(rbio, rbio->scrubp, pagenr); - parity = kmap(p); + parity = kmap_thread(p); if (memcmp(parity, pointers[rbio->scrubp], PAGE_SIZE)) copy_page(parity, pointers[rbio->scrubp]); else /* Parity is right, needn't writeback */ bitmap_clear(rbio->dbitmap, pagenr, 1); - kunmap(p); + kunmap_thread(p); for (stripe = 0; stripe < nr_data; stripe++) - kunmap(page_in_rbio(rbio, 
stripe, pagenr, 0)); - kunmap(p_page); + kunmap_thread(page_in_rbio(rbio, stripe, pagenr, 0)); + kunmap_thread(p_page); } __free_page(p_page); diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c index 5cd02514cf4d..10e53d7eba8c 100644 --- a/fs/btrfs/reflink.c +++ b/fs/btrfs/reflink.c @@ -92,10 +92,10 @@ static int copy_inline_to_page(struct inode *inode, if (comp_type == BTRFS_COMPRESS_NONE) { char *map; - map = kmap(page); + map = kmap_thread(page); memcpy(map, data_start, datal); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); } else { ret = btrfs_decompress(comp_type, data_start, page, 0, inline_size, datal); @@ -119,10 +119,10 @@ static int copy_inline_to_page(struct inode *inode, if (datal < block_size) { char *map; - map = kmap(page); + map = kmap_thread(page); memset(map + datal, 0, block_size - datal); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); } SetPageUptodate(page); diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index d9813a5b075a..06c383d3dc43 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -4863,9 +4863,9 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len) } } - addr = kmap(page); + addr = kmap_thread(page); memcpy(sctx->read_buf + ret, addr + pg_offset, cur_len); - kunmap(page); + kunmap_thread(page); unlock_page(page); put_page(page); index++; diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c index 05615a1099db..45b7a907bab3 100644 --- a/fs/btrfs/zlib.c +++ b/fs/btrfs/zlib.c @@ -126,7 +126,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, ret = -ENOMEM; goto out; } - cpage_out = kmap(out_page); + cpage_out = kmap_thread(out_page); pages[0] = out_page; nr_pages = 1; @@ -149,12 +149,12 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, for (i = 0; i < in_buf_pages; i++) { if (in_page) { - kunmap(in_page); + kunmap_thread(in_page); put_page(in_page); } in_page = find_get_page(mapping, start >> PAGE_SHIFT); - data_in = kmap(in_page); + data_in = kmap_thread(in_page); memcpy(workspace->buf + i * PAGE_SIZE, data_in, PAGE_SIZE); start += PAGE_SIZE; @@ -162,12 +162,12 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, workspace->strm.next_in = workspace->buf; } else { if (in_page) { - kunmap(in_page); + kunmap_thread(in_page); put_page(in_page); } in_page = find_get_page(mapping, start >> PAGE_SHIFT); - data_in = kmap(in_page); + data_in = kmap_thread(in_page); start += PAGE_SIZE; workspace->strm.next_in = data_in; } @@ -196,7 +196,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, * the stream end if required */ if (workspace->strm.avail_out == 0) { - kunmap(out_page); + kunmap_thread(out_page); if (nr_pages == nr_dest_pages) { out_page = NULL; ret = -E2BIG; @@ -207,7 +207,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, ret = -ENOMEM; goto out; } - cpage_out = kmap(out_page); + cpage_out = kmap_thread(out_page); pages[nr_pages] = out_page; nr_pages++; workspace->strm.avail_out = PAGE_SIZE; @@ -234,7 +234,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, goto out; } else if (workspace->strm.avail_out == 0) { /* get another page for the stream end */ - kunmap(out_page); + kunmap_thread(out_page); if (nr_pages == nr_dest_pages) { out_page = NULL; ret = -E2BIG; @@ -245,7 +245,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, ret = -ENOMEM; goto out; } - cpage_out = kmap(out_page); + cpage_out = 
kmap_thread(out_page); pages[nr_pages] = out_page; nr_pages++; workspace->strm.avail_out = PAGE_SIZE; @@ -265,10 +265,10 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, out: *out_pages = nr_pages; if (out_page) - kunmap(out_page); + kunmap_thread(out_page); if (in_page) { - kunmap(in_page); + kunmap_thread(in_page); put_page(in_page); } return ret; @@ -289,7 +289,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb) u64 disk_start = cb->start; struct bio *orig_bio = cb->orig_bio; - data_in = kmap(pages_in[page_in_index]); + data_in = kmap_thread(pages_in[page_in_index]); workspace->strm.next_in = data_in; workspace->strm.avail_in = min_t(size_t, srclen, PAGE_SIZE); workspace->strm.total_in = 0; @@ -311,7 +311,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb) if (Z_OK != zlib_inflateInit2(&workspace->strm, wbits)) { pr_warn("BTRFS: inflateInit failed\n"); - kunmap(pages_in[page_in_index]); + kunmap_thread(pages_in[page_in_index]); return -EIO; } while (workspace->strm.total_in < srclen) { @@ -339,13 +339,13 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb) if (workspace->strm.avail_in == 0) { unsigned long tmp; - kunmap(pages_in[page_in_index]); + kunmap_thread(pages_in[page_in_index]); page_in_index++; if (page_in_index >= total_pages_in) { data_in = NULL; break; } - data_in = kmap(pages_in[page_in_index]); + data_in = kmap_thread(pages_in[page_in_index]); workspace->strm.next_in = data_in; tmp = srclen - workspace->strm.total_in; workspace->strm.avail_in = min(tmp, @@ -359,7 +359,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb) done: zlib_inflateEnd(&workspace->strm); if (data_in) - kunmap(pages_in[page_in_index]); + kunmap_thread(pages_in[page_in_index]); if (!ret) zero_fill_bio(orig_bio); return ret; diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c index 9a4871636c6c..48e03f6dcef7 100644 --- a/fs/btrfs/zstd.c +++ b/fs/btrfs/zstd.c @@ -399,7 +399,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, /* map in the first page of input data */ in_page = find_get_page(mapping, start >> PAGE_SHIFT); - workspace->in_buf.src = kmap(in_page); + workspace->in_buf.src = kmap_thread(in_page); workspace->in_buf.pos = 0; workspace->in_buf.size = min_t(size_t, len, PAGE_SIZE); @@ -411,7 +411,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, goto out; } pages[nr_pages++] = out_page; - workspace->out_buf.dst = kmap(out_page); + workspace->out_buf.dst = kmap_thread(out_page); workspace->out_buf.pos = 0; workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE); @@ -446,7 +446,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, if (workspace->out_buf.pos == workspace->out_buf.size) { tot_out += PAGE_SIZE; max_out -= PAGE_SIZE; - kunmap(out_page); + kunmap_thread(out_page); if (nr_pages == nr_dest_pages) { out_page = NULL; ret = -E2BIG; @@ -458,7 +458,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, goto out; } pages[nr_pages++] = out_page; - workspace->out_buf.dst = kmap(out_page); + workspace->out_buf.dst = kmap_thread(out_page); workspace->out_buf.pos = 0; workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE); @@ -479,7 +479,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, start += PAGE_SIZE; len -= PAGE_SIZE; in_page = find_get_page(mapping, start >> PAGE_SHIFT); - workspace->in_buf.src = kmap(in_page); + 
workspace->in_buf.src = kmap_thread(in_page); workspace->in_buf.pos = 0; workspace->in_buf.size = min_t(size_t, len, PAGE_SIZE); } @@ -518,7 +518,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, goto out; } pages[nr_pages++] = out_page; - workspace->out_buf.dst = kmap(out_page); + workspace->out_buf.dst = kmap_thread(out_page); workspace->out_buf.pos = 0; workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE); } @@ -565,7 +565,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb) goto done; } - workspace->in_buf.src = kmap(pages_in[page_in_index]); + workspace->in_buf.src = kmap_thread(pages_in[page_in_index]); workspace->in_buf.pos = 0; workspace->in_buf.size = min_t(size_t, srclen, PAGE_SIZE); @@ -601,14 +601,14 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb) break; if (workspace->in_buf.pos == workspace->in_buf.size) { - kunmap(pages_in[page_in_index++]); + kunmap_thread(pages_in[page_in_index++]); if (page_in_index >= total_pages_in) { workspace->in_buf.src = NULL; ret = -EIO; goto done; } srclen -= PAGE_SIZE; - workspace->in_buf.src = kmap(pages_in[page_in_index]); + workspace->in_buf.src = kmap_thread(pages_in[page_in_index]); workspace->in_buf.pos = 0; workspace->in_buf.size = min_t(size_t, srclen, PAGE_SIZE); } @@ -617,7 +617,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb) zero_fill_bio(orig_bio); done: if (workspace->in_buf.src) - kunmap(pages_in[page_in_index]); + kunmap_thread(pages_in[page_in_index]); return ret; } From patchwork Fri Oct 9 19:49:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284784 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9CA87C433DF for ; Fri, 9 Oct 2020 20:07:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5B0C72053B for ; Fri, 9 Oct 2020 20:07:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389953AbgJIUHb (ORCPT ); Fri, 9 Oct 2020 16:07:31 -0400 Received: from mga06.intel.com ([134.134.136.31]:1621 "EHLO mga06.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403959AbgJITvm (ORCPT ); Fri, 9 Oct 2020 15:51:42 -0400 IronPort-SDR: OvUdnTIo53H2AyyTnYh4FWnAgUT0vb6jIYBsFTvfZM4VsCqvxXdHL/PCZfQ+hdONhRPcaXWgsk GWtbV+Ff7Znw== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="227178845" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="227178845" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga008.jf.intel.com ([10.7.209.65]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:39 -0700 IronPort-SDR: 6nDyG/rNIU8VCp7PRxt8N7VM6Ts7FaR6Cq3BMAOPdzxAbIieQhQY1hWISzmf8TSYIGXE2cP/wS 1DDas2UbMNkw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="345147432" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga008-auth.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:37 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Steve French , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 14/58] fs/cifs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:49 -0700 Message-Id: <20201009195033.3208459-15-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Steve French Signed-off-by: Ira Weiny --- fs/cifs/cifsencrypt.c | 6 +++--- fs/cifs/file.c | 16 ++++++++-------- fs/cifs/smb2ops.c | 8 ++++---- 3 files changed, 15 insertions(+), 15 deletions(-) diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c index 9daa256f69d4..2f8232d01a56 100644 --- a/fs/cifs/cifsencrypt.c +++ b/fs/cifs/cifsencrypt.c @@ -82,17 +82,17 @@ int __cifs_calc_signature(struct smb_rqst *rqst, rqst_page_get_length(rqst, i, &len, &offset); - kaddr = (char *) kmap(rqst->rq_pages[i]) + offset; + kaddr = (char *) kmap_thread(rqst->rq_pages[i]) + offset; rc = crypto_shash_update(shash, kaddr, len); if (rc) { cifs_dbg(VFS, "%s: Could not update with payload\n", __func__); - kunmap(rqst->rq_pages[i]); + kunmap_thread(rqst->rq_pages[i]); return rc; } - kunmap(rqst->rq_pages[i]); + kunmap_thread(rqst->rq_pages[i]); } rc = crypto_shash_final(shash, signature); diff --git a/fs/cifs/file.c b/fs/cifs/file.c index be46fab4c96d..6db2caab8852 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -2145,17 +2145,17 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to) inode = page->mapping->host; offset += (loff_t)from; - write_data = kmap(page); + write_data = kmap_thread(page); write_data += from; if ((to > PAGE_SIZE) || (from > to)) { - kunmap(page); + kunmap_thread(page); return -EIO; } /* racing with truncate? 
*/ if (offset > mapping->host->i_size) { - kunmap(page); + kunmap_thread(page); return 0; /* don't care */ } @@ -2183,7 +2183,7 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to) rc = -EIO; } - kunmap(page); + kunmap_thread(page); return rc; } @@ -2559,10 +2559,10 @@ static int cifs_write_end(struct file *file, struct address_space *mapping, known which we might as well leverage */ /* BB check if anything else missing out of ppw such as updating last write time */ - page_data = kmap(page); + page_data = kmap_thread(page); rc = cifs_write(cfile, pid, page_data + offset, copied, &pos); /* if (rc < 0) should we set writebehind rc? */ - kunmap(page); + kunmap_thread(page); free_xid(xid); } else { @@ -4511,7 +4511,7 @@ static int cifs_readpage_worker(struct file *file, struct page *page, if (rc == 0) goto read_complete; - read_data = kmap(page); + read_data = kmap_thread(page); /* for reads over a certain size could initiate async read ahead */ rc = cifs_read(file, read_data, PAGE_SIZE, poffset); @@ -4540,7 +4540,7 @@ static int cifs_readpage_worker(struct file *file, struct page *page, rc = 0; io_error: - kunmap(page); + kunmap_thread(page); unlock_page(page); read_complete: diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c index 32f90dc82c84..a3e7ebab38b6 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -4068,12 +4068,12 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst, rqst_page_get_length(&new_rq[i], j, &len, &offset); - dst = (char *) kmap(new_rq[i].rq_pages[j]) + offset; - src = (char *) kmap(old_rq[i - 1].rq_pages[j]) + offset; + dst = (char *) kmap_thread(new_rq[i].rq_pages[j]) + offset; + src = (char *) kmap_thread(old_rq[i - 1].rq_pages[j]) + offset; memcpy(dst, src, len); - kunmap(new_rq[i].rq_pages[j]); - kunmap(old_rq[i - 1].rq_pages[j]); + kunmap_thread(new_rq[i].rq_pages[j]); + kunmap_thread(old_rq[i - 1].rq_pages[j]); } } From patchwork Fri Oct 9 19:49:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284791 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D3CC1C43457 for ; Fri, 9 Oct 2020 20:02:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8165520732 for ; Fri, 9 Oct 2020 20:02:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390981AbgJITww (ORCPT ); Fri, 9 Oct 2020 15:52:52 -0400 Received: from mga06.intel.com ([134.134.136.31]:1664 "EHLO mga06.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403996AbgJITvy (ORCPT ); Fri, 9 Oct 2020 15:51:54 -0400 IronPort-SDR: dx2C651xObHLbB3ebXZq7O3vvDLCkCp7YyEriat/+Fn1vQtkkOifEieLP9BqvQO5TVmymYi7WM okOGmreQGHCQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="227178891" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="227178891" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga104.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:47 -0700 IronPort-SDR: vdHXKYZiEWUJ0fN4OExd2VqdiXE9aYdlaLF6p7mD2X3g24JwvvcNWIkVlfM1uW7/GA3kn+ap6+ Rex2Wyxl2AaA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="298531066" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:47 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Ryusuke Konishi , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 17/58] fs/nilfs2: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:52 -0700 Message-Id: <20201009195033.3208459-18-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
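As an illustration (a simplified sketch, not part of the diff below), the conversion in this patch follows the pattern shown here; kmap_thread() and kunmap_thread() are the thread-local mapping calls introduced earlier in this series, and the buffer head handling is condensed from the real nilfs2 helpers:

	/*
	 * Sketch: map a metadata buffer's page for this thread only,
	 * work on it, then drop the thread-local mapping.  No other
	 * thread uses the mapping, so no global PKRS update is needed.
	 */
	void *kaddr = kmap_thread(bh->b_page);
	void *entry = kaddr + bh_offset(bh);

	/* ... read or update the on-disk structure at 'entry' ... */

	kunmap_thread(bh->b_page);
	brelse(bh);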
Cc: Ryusuke Konishi Signed-off-by: Ira Weiny --- fs/nilfs2/alloc.c | 34 +++++++++++++++++----------------- fs/nilfs2/cpfile.c | 4 ++-- 2 files changed, 19 insertions(+), 19 deletions(-) diff --git a/fs/nilfs2/alloc.c b/fs/nilfs2/alloc.c index adf3bb0a8048..2aa4c34094ef 100644 --- a/fs/nilfs2/alloc.c +++ b/fs/nilfs2/alloc.c @@ -524,7 +524,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, ret = nilfs_palloc_get_desc_block(inode, group, 1, &desc_bh); if (ret < 0) return ret; - desc_kaddr = kmap(desc_bh->b_page); + desc_kaddr = kmap_thread(desc_bh->b_page); desc = nilfs_palloc_block_get_group_desc( inode, group, desc_bh, desc_kaddr); n = nilfs_palloc_rest_groups_in_desc_block(inode, group, @@ -536,7 +536,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, inode, group, 1, &bitmap_bh); if (ret < 0) goto out_desc; - bitmap_kaddr = kmap(bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(bitmap_bh); pos = nilfs_palloc_find_available_slot( bitmap, group_offset, @@ -547,21 +547,21 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, desc, lock, -1); req->pr_entry_nr = entries_per_group * group + pos; - kunmap(desc_bh->b_page); - kunmap(bitmap_bh->b_page); + kunmap_thread(desc_bh->b_page); + kunmap_thread(bitmap_bh->b_page); req->pr_desc_bh = desc_bh; req->pr_bitmap_bh = bitmap_bh; return 0; } - kunmap(bitmap_bh->b_page); + kunmap_thread(bitmap_bh->b_page); brelse(bitmap_bh); } group_offset = 0; } - kunmap(desc_bh->b_page); + kunmap_thread(desc_bh->b_page); brelse(desc_bh); } @@ -569,7 +569,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, return -ENOSPC; out_desc: - kunmap(desc_bh->b_page); + kunmap_thread(desc_bh->b_page); brelse(desc_bh); return ret; } @@ -605,10 +605,10 @@ void nilfs_palloc_commit_free_entry(struct inode *inode, spinlock_t *lock; group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset); - desc_kaddr = kmap(req->pr_desc_bh->b_page); + desc_kaddr = kmap_thread(req->pr_desc_bh->b_page); desc = nilfs_palloc_block_get_group_desc(inode, group, req->pr_desc_bh, desc_kaddr); - bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(req->pr_bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh); lock = nilfs_mdt_bgl_lock(inode, group); @@ -620,8 +620,8 @@ void nilfs_palloc_commit_free_entry(struct inode *inode, else nilfs_palloc_group_desc_add_entries(desc, lock, 1); - kunmap(req->pr_bitmap_bh->b_page); - kunmap(req->pr_desc_bh->b_page); + kunmap_thread(req->pr_bitmap_bh->b_page); + kunmap_thread(req->pr_desc_bh->b_page); mark_buffer_dirty(req->pr_desc_bh); mark_buffer_dirty(req->pr_bitmap_bh); @@ -646,10 +646,10 @@ void nilfs_palloc_abort_alloc_entry(struct inode *inode, spinlock_t *lock; group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset); - desc_kaddr = kmap(req->pr_desc_bh->b_page); + desc_kaddr = kmap_thread(req->pr_desc_bh->b_page); desc = nilfs_palloc_block_get_group_desc(inode, group, req->pr_desc_bh, desc_kaddr); - bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(req->pr_bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh); lock = nilfs_mdt_bgl_lock(inode, group); @@ -661,8 +661,8 @@ void nilfs_palloc_abort_alloc_entry(struct inode *inode, else nilfs_palloc_group_desc_add_entries(desc, lock, 1); - kunmap(req->pr_bitmap_bh->b_page); - kunmap(req->pr_desc_bh->b_page); + kunmap_thread(req->pr_bitmap_bh->b_page); + kunmap_thread(req->pr_desc_bh->b_page); 
brelse(req->pr_bitmap_bh); brelse(req->pr_desc_bh); @@ -754,7 +754,7 @@ int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems) /* Get the first entry number of the group */ group_min_nr = (__u64)group * epg; - bitmap_kaddr = kmap(bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(bitmap_bh); lock = nilfs_mdt_bgl_lock(inode, group); @@ -800,7 +800,7 @@ int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems) entry_start = rounddown(group_offset, epb); } while (true); - kunmap(bitmap_bh->b_page); + kunmap_thread(bitmap_bh->b_page); mark_buffer_dirty(bitmap_bh); brelse(bitmap_bh); diff --git a/fs/nilfs2/cpfile.c b/fs/nilfs2/cpfile.c index 86d4d850d130..402ab8bfce29 100644 --- a/fs/nilfs2/cpfile.c +++ b/fs/nilfs2/cpfile.c @@ -235,11 +235,11 @@ int nilfs_cpfile_get_checkpoint(struct inode *cpfile, ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, create, &cp_bh); if (ret < 0) goto out_header; - kaddr = kmap(cp_bh->b_page); + kaddr = kmap_thread(cp_bh->b_page); cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr); if (nilfs_checkpoint_invalid(cp)) { if (!create) { - kunmap(cp_bh->b_page); + kunmap_thread(cp_bh->b_page); brelse(cp_bh); ret = -ENOENT; goto out_header; From patchwork Fri Oct 9 19:49:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284805 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C7BAC84609 for ; Fri, 9 Oct 2020 19:52:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CE5CB2240A for ; Fri, 9 Oct 2020 19:52:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390994AbgJITwx (ORCPT ); Fri, 9 Oct 2020 15:52:53 -0400 Received: from mga01.intel.com ([192.55.52.88]:3587 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2390882AbgJITvx (ORCPT ); Fri, 9 Oct 2020 15:51:53 -0400 IronPort-SDR: mzoU95oSP1VwD5uWDae0guS0a6cE+H7YgLc5+R2J4zXJztdZotOqq9cM2YFYDL1izjhOdl5LMS EbOn31fl/5qQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976174" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="182976174" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:51 -0700 IronPort-SDR: OASmflq+Rx+yTtbdwfMJ0o3fYX/22afbp7t4aASuh19KdC4DJdh+fKQIdVVFr7y3ZwUlZzPFTz pRsXSfU26jow== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="529053257" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:50 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , 
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 18/58] fs/hfs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:53 -0700 Message-Id: <20201009195033.3208459-19-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
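To illustrate the pattern (condensed from hfs_bnode_read() in the diff below; not an additional change on top of it):

	/*
	 * Sketch: copy 'len' bytes out of a B-tree node page.  The
	 * mapping is used only by the calling thread, so kmap_thread()
	 * avoids the global PKRS update a plain kmap() would require.
	 */
	struct page *page = node->page[0];

	memcpy(buf, kmap_thread(page) + off, len);
	kunmap_thread(page);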
Signed-off-by: Ira Weiny --- fs/hfs/bnode.c | 14 +++++++------- fs/hfs/btree.c | 20 ++++++++++---------- 2 files changed, 17 insertions(+), 17 deletions(-) diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c index b63a4df7327b..8b4d02576405 100644 --- a/fs/hfs/bnode.c +++ b/fs/hfs/bnode.c @@ -23,8 +23,8 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, off += node->page_offset; page = node->page[0]; - memcpy(buf, kmap(page) + off, len); - kunmap(page); + memcpy(buf, kmap_thread(page) + off, len); + kunmap_thread(page); } u16 hfs_bnode_read_u16(struct hfs_bnode *node, int off) @@ -108,9 +108,9 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst, src_page = src_node->page[0]; dst_page = dst_node->page[0]; - memcpy(kmap(dst_page) + dst, kmap(src_page) + src, len); - kunmap(src_page); - kunmap(dst_page); + memcpy(kmap_thread(dst_page) + dst, kmap_thread(src_page) + src, len); + kunmap_thread(src_page); + kunmap_thread(dst_page); set_page_dirty(dst_page); } @@ -125,9 +125,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len) src += node->page_offset; dst += node->page_offset; page = node->page[0]; - ptr = kmap(page); + ptr = kmap_thread(page); memmove(ptr + dst, ptr + src, len); - kunmap(page); + kunmap_thread(page); set_page_dirty(page); } diff --git a/fs/hfs/btree.c b/fs/hfs/btree.c index 19017d296173..bd4a6d35e361 100644 --- a/fs/hfs/btree.c +++ b/fs/hfs/btree.c @@ -80,7 +80,7 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke goto free_inode; /* Load the header */ - head = (struct hfs_btree_header_rec *)(kmap(page) + sizeof(struct hfs_bnode_desc)); + head = (struct hfs_btree_header_rec *)(kmap_thread(page) + sizeof(struct hfs_bnode_desc)); tree->root = be32_to_cpu(head->root); tree->leaf_count = be32_to_cpu(head->leaf_count); tree->leaf_head = be32_to_cpu(head->leaf_head); @@ -119,7 +119,7 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke tree->node_size_shift = ffs(size) - 1; tree->pages_per_bnode = (tree->node_size + PAGE_SIZE - 1) >> PAGE_SHIFT; - kunmap(page); + kunmap_thread(page); put_page(page); return tree; @@ -268,7 +268,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree) off += node->page_offset; pagep = node->page + (off >> PAGE_SHIFT); - data = kmap(*pagep); + data = kmap_thread(*pagep); off &= ~PAGE_MASK; idx = 0; @@ -281,7 +281,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree) idx += i; data[off] |= m; set_page_dirty(*pagep); - kunmap(*pagep); + kunmap_thread(*pagep); tree->free_nodes--; mark_inode_dirty(tree->inode); hfs_bnode_put(node); @@ -290,14 +290,14 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree) } } if (++off >= PAGE_SIZE) { - kunmap(*pagep); - data = kmap(*++pagep); + kunmap_thread(*pagep); + data = kmap_thread(*++pagep); off = 0; } idx += 8; len--; } - kunmap(*pagep); + kunmap_thread(*pagep); nidx = node->next; if (!nidx) { printk(KERN_DEBUG "create new bmap node...\n"); @@ -313,7 +313,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree) off = off16; off += node->page_offset; pagep = node->page + (off >> PAGE_SHIFT); - data = kmap(*pagep); + data = kmap_thread(*pagep); off &= ~PAGE_MASK; } } @@ -360,7 +360,7 @@ void hfs_bmap_free(struct hfs_bnode *node) } off += node->page_offset + nidx / 8; page = node->page[off >> PAGE_SHIFT]; - data = kmap(page); + data = kmap_thread(page); off &= ~PAGE_MASK; m = 1 << (~nidx & 7); byte = data[off]; @@ -373,7 +373,7 @@ void hfs_bmap_free(struct hfs_bnode *node) } data[off] = byte & ~m; 
set_page_dirty(page); - kunmap(page); + kunmap_thread(page); hfs_bnode_put(node); tree->free_nodes++; mark_inode_dirty(tree->inode); From patchwork Fri Oct 9 19:49:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284785 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6CEB6C2D0AE for ; Fri, 9 Oct 2020 20:07:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1A4582053B for ; Fri, 9 Oct 2020 20:07:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1733272AbgJIUGy (ORCPT ); Fri, 9 Oct 2020 16:06:54 -0400 Received: from mga01.intel.com ([192.55.52.88]:3593 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389151AbgJITv5 (ORCPT ); Fri, 9 Oct 2020 15:51:57 -0400 IronPort-SDR: LXMDrio76eD4QJ0LjRz4EmmRBmjTijr/fa5SezwV0QRnonKitI0rO4pKp7fXuLJKLgkOJhzNOa sK9IuxnkWw1Q== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976189" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="182976189" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:55 -0700 IronPort-SDR: vmw0BK7OYY8h0fZsnTgtJRpvmXsWJvpJiZ/O3ONyZujh0oi54UYOLunW4B1xesRkeTs3uxMx1d pOHAzA+ZEWzg== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="529053305" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:54 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, 
linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 19/58] fs/hfsplus: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:54 -0700 Message-Id: <20201009195033.3208459-20-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Signed-off-by: Ira Weiny --- fs/hfsplus/bitmap.c | 20 ++++----- fs/hfsplus/bnode.c | 102 ++++++++++++++++++++++---------------------- fs/hfsplus/btree.c | 18 ++++---- 3 files changed, 70 insertions(+), 70 deletions(-) diff --git a/fs/hfsplus/bitmap.c b/fs/hfsplus/bitmap.c index cebce0cfe340..9ec7c1559a0c 100644 --- a/fs/hfsplus/bitmap.c +++ b/fs/hfsplus/bitmap.c @@ -39,7 +39,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size, start = size; goto out; } - pptr = kmap(page); + pptr = kmap_thread(page); curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32; i = offset % 32; offset &= ~(PAGE_CACHE_BITS - 1); @@ -74,7 +74,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size, } curr++; } - kunmap(page); + kunmap_thread(page); offset += PAGE_CACHE_BITS; if (offset >= size) break; @@ -84,7 +84,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size, start = size; goto out; } - curr = pptr = kmap(page); + curr = pptr = kmap_thread(page); if ((size ^ offset) / PAGE_CACHE_BITS) end = pptr + PAGE_CACHE_BITS / 32; else @@ -127,7 +127,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size, len -= 32; } set_page_dirty(page); - kunmap(page); + kunmap_thread(page); offset += PAGE_CACHE_BITS; page = read_mapping_page(mapping, offset / PAGE_CACHE_BITS, NULL); @@ -135,7 +135,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size, start = size; goto out; } - pptr = kmap(page); + pptr = kmap_thread(page); curr = pptr; end = pptr + PAGE_CACHE_BITS / 32; } @@ -151,7 +151,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size, done: *curr = cpu_to_be32(n); set_page_dirty(page); - kunmap(page); + kunmap_thread(page); *max = offset + (curr - pptr) * 32 + i - start; sbi->free_blocks -= *max; hfsplus_mark_mdb_dirty(sb); @@ -185,7 +185,7 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count) page = read_mapping_page(mapping, pnr, NULL); if (IS_ERR(page)) goto kaboom; - pptr = kmap(page); + pptr = kmap_thread(page); curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32; end = pptr + PAGE_CACHE_BITS / 32; len = count; @@ -215,11 +215,11 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count) if (!count) break; set_page_dirty(page); - kunmap(page); + kunmap_thread(page); page = read_mapping_page(mapping, ++pnr, NULL); if (IS_ERR(page)) goto kaboom; - pptr = kmap(page); + pptr = kmap_thread(page); curr = pptr; end = pptr + PAGE_CACHE_BITS / 32; } @@ -231,7 +231,7 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count) } out: set_page_dirty(page); - kunmap(page); + kunmap_thread(page); sbi->free_blocks += len; hfsplus_mark_mdb_dirty(sb); mutex_unlock(&sbi->alloc_mutex); diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c index 177fae4e6581..62757d92fbbd 100644 --- a/fs/hfsplus/bnode.c +++ b/fs/hfsplus/bnode.c @@ -29,14 +29,14 @@ void 
hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len) off &= ~PAGE_MASK; l = min_t(int, len, PAGE_SIZE - off); - memcpy(buf, kmap(*pagep) + off, l); - kunmap(*pagep); + memcpy(buf, kmap_thread(*pagep) + off, l); + kunmap_thread(*pagep); while ((len -= l) != 0) { buf += l; l = min_t(int, len, PAGE_SIZE); - memcpy(buf, kmap(*++pagep), l); - kunmap(*pagep); + memcpy(buf, kmap_thread(*++pagep), l); + kunmap_thread(*pagep); } } @@ -82,16 +82,16 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len) off &= ~PAGE_MASK; l = min_t(int, len, PAGE_SIZE - off); - memcpy(kmap(*pagep) + off, buf, l); + memcpy(kmap_thread(*pagep) + off, buf, l); set_page_dirty(*pagep); - kunmap(*pagep); + kunmap_thread(*pagep); while ((len -= l) != 0) { buf += l; l = min_t(int, len, PAGE_SIZE); - memcpy(kmap(*++pagep), buf, l); + memcpy(kmap_thread(*++pagep), buf, l); set_page_dirty(*pagep); - kunmap(*pagep); + kunmap_thread(*pagep); } } @@ -112,15 +112,15 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len) off &= ~PAGE_MASK; l = min_t(int, len, PAGE_SIZE - off); - memset(kmap(*pagep) + off, 0, l); + memset(kmap_thread(*pagep) + off, 0, l); set_page_dirty(*pagep); - kunmap(*pagep); + kunmap_thread(*pagep); while ((len -= l) != 0) { l = min_t(int, len, PAGE_SIZE); - memset(kmap(*++pagep), 0, l); + memset(kmap_thread(*++pagep), 0, l); set_page_dirty(*pagep); - kunmap(*pagep); + kunmap_thread(*pagep); } } @@ -142,24 +142,24 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst, if (src == dst) { l = min_t(int, len, PAGE_SIZE - src); - memcpy(kmap(*dst_page) + src, kmap(*src_page) + src, l); - kunmap(*src_page); + memcpy(kmap_thread(*dst_page) + src, kmap_thread(*src_page) + src, l); + kunmap_thread(*src_page); set_page_dirty(*dst_page); - kunmap(*dst_page); + kunmap_thread(*dst_page); while ((len -= l) != 0) { l = min_t(int, len, PAGE_SIZE); - memcpy(kmap(*++dst_page), kmap(*++src_page), l); - kunmap(*src_page); + memcpy(kmap_thread(*++dst_page), kmap_thread(*++src_page), l); + kunmap_thread(*src_page); set_page_dirty(*dst_page); - kunmap(*dst_page); + kunmap_thread(*dst_page); } } else { void *src_ptr, *dst_ptr; do { - src_ptr = kmap(*src_page) + src; - dst_ptr = kmap(*dst_page) + dst; + src_ptr = kmap_thread(*src_page) + src; + dst_ptr = kmap_thread(*dst_page) + dst; if (PAGE_SIZE - src < PAGE_SIZE - dst) { l = PAGE_SIZE - src; src = 0; @@ -171,9 +171,9 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst, } l = min(len, l); memcpy(dst_ptr, src_ptr, l); - kunmap(*src_page); + kunmap_thread(*src_page); set_page_dirty(*dst_page); - kunmap(*dst_page); + kunmap_thread(*dst_page); if (!dst) dst_page++; else @@ -202,27 +202,27 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len) if (src == dst) { while (src < len) { - memmove(kmap(*dst_page), kmap(*src_page), src); - kunmap(*src_page); + memmove(kmap_thread(*dst_page), kmap_thread(*src_page), src); + kunmap_thread(*src_page); set_page_dirty(*dst_page); - kunmap(*dst_page); + kunmap_thread(*dst_page); len -= src; src = PAGE_SIZE; src_page--; dst_page--; } src -= len; - memmove(kmap(*dst_page) + src, - kmap(*src_page) + src, len); - kunmap(*src_page); + memmove(kmap_thread(*dst_page) + src, + kmap_thread(*src_page) + src, len); + kunmap_thread(*src_page); set_page_dirty(*dst_page); - kunmap(*dst_page); + kunmap_thread(*dst_page); } else { void *src_ptr, *dst_ptr; do { - src_ptr = kmap(*src_page) + src; - dst_ptr = kmap(*dst_page) + dst; + src_ptr = kmap_thread(*src_page) + src; + dst_ptr = 
kmap_thread(*dst_page) + dst; if (src < dst) { l = src; src = PAGE_SIZE; @@ -234,9 +234,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len) } l = min(len, l); memmove(dst_ptr - l, src_ptr - l, l); - kunmap(*src_page); + kunmap_thread(*src_page); set_page_dirty(*dst_page); - kunmap(*dst_page); + kunmap_thread(*dst_page); if (dst == PAGE_SIZE) dst_page--; else @@ -251,26 +251,26 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len) if (src == dst) { l = min_t(int, len, PAGE_SIZE - src); - memmove(kmap(*dst_page) + src, - kmap(*src_page) + src, l); - kunmap(*src_page); + memmove(kmap_thread(*dst_page) + src, + kmap_thread(*src_page) + src, l); + kunmap_thread(*src_page); set_page_dirty(*dst_page); - kunmap(*dst_page); + kunmap_thread(*dst_page); while ((len -= l) != 0) { l = min_t(int, len, PAGE_SIZE); - memmove(kmap(*++dst_page), - kmap(*++src_page), l); - kunmap(*src_page); + memmove(kmap_thread(*++dst_page), + kmap_thread(*++src_page), l); + kunmap_thread(*src_page); set_page_dirty(*dst_page); - kunmap(*dst_page); + kunmap_thread(*dst_page); } } else { void *src_ptr, *dst_ptr; do { - src_ptr = kmap(*src_page) + src; - dst_ptr = kmap(*dst_page) + dst; + src_ptr = kmap_thread(*src_page) + src; + dst_ptr = kmap_thread(*dst_page) + dst; if (PAGE_SIZE - src < PAGE_SIZE - dst) { l = PAGE_SIZE - src; @@ -283,9 +283,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len) } l = min(len, l); memmove(dst_ptr, src_ptr, l); - kunmap(*src_page); + kunmap_thread(*src_page); set_page_dirty(*dst_page); - kunmap(*dst_page); + kunmap_thread(*dst_page); if (!dst) dst_page++; else @@ -502,14 +502,14 @@ struct hfs_bnode *hfs_bnode_find(struct hfs_btree *tree, u32 num) if (!test_bit(HFS_BNODE_NEW, &node->flags)) return node; - desc = (struct hfs_bnode_desc *)(kmap(node->page[0]) + + desc = (struct hfs_bnode_desc *)(kmap_thread(node->page[0]) + node->page_offset); node->prev = be32_to_cpu(desc->prev); node->next = be32_to_cpu(desc->next); node->num_recs = be16_to_cpu(desc->num_recs); node->type = desc->type; node->height = desc->height; - kunmap(node->page[0]); + kunmap_thread(node->page[0]); switch (node->type) { case HFS_NODE_HEADER: @@ -593,14 +593,14 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num) } pagep = node->page; - memset(kmap(*pagep) + node->page_offset, 0, + memset(kmap_thread(*pagep) + node->page_offset, 0, min_t(int, PAGE_SIZE, tree->node_size)); set_page_dirty(*pagep); - kunmap(*pagep); + kunmap_thread(*pagep); for (i = 1; i < tree->pages_per_bnode; i++) { - memset(kmap(*++pagep), 0, PAGE_SIZE); + memset(kmap_thread(*++pagep), 0, PAGE_SIZE); set_page_dirty(*pagep); - kunmap(*pagep); + kunmap_thread(*pagep); } clear_bit(HFS_BNODE_NEW, &node->flags); wake_up(&node->lock_wq); diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c index 66774f4cb4fd..74fcef3a1628 100644 --- a/fs/hfsplus/btree.c +++ b/fs/hfsplus/btree.c @@ -394,7 +394,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree) off += node->page_offset; pagep = node->page + (off >> PAGE_SHIFT); - data = kmap(*pagep); + data = kmap_thread(*pagep); off &= ~PAGE_MASK; idx = 0; @@ -407,7 +407,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree) idx += i; data[off] |= m; set_page_dirty(*pagep); - kunmap(*pagep); + kunmap_thread(*pagep); tree->free_nodes--; mark_inode_dirty(tree->inode); hfs_bnode_put(node); @@ -417,14 +417,14 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree) } } if (++off >= PAGE_SIZE) { - kunmap(*pagep); - data = 
kmap(*++pagep); + kunmap_thread(*pagep); + data = kmap_thread(*++pagep); off = 0; } idx += 8; len--; } - kunmap(*pagep); + kunmap_thread(*pagep); nidx = node->next; if (!nidx) { hfs_dbg(BNODE_MOD, "create new bmap node\n"); @@ -440,7 +440,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree) off = off16; off += node->page_offset; pagep = node->page + (off >> PAGE_SHIFT); - data = kmap(*pagep); + data = kmap_thread(*pagep); off &= ~PAGE_MASK; } } @@ -490,7 +490,7 @@ void hfs_bmap_free(struct hfs_bnode *node) } off += node->page_offset + nidx / 8; page = node->page[off >> PAGE_SHIFT]; - data = kmap(page); + data = kmap_thread(page); off &= ~PAGE_MASK; m = 1 << (~nidx & 7); byte = data[off]; @@ -498,13 +498,13 @@ void hfs_bmap_free(struct hfs_bnode *node) pr_crit("trying to free free bnode " "%u(%d)\n", node->this, node->type); - kunmap(page); + kunmap_thread(page); hfs_bnode_put(node); return; } data[off] = byte & ~m; set_page_dirty(page); - kunmap(page); + kunmap_thread(page); hfs_bnode_put(node); tree->free_nodes++; mark_inode_dirty(tree->inode); From patchwork Fri Oct 9 19:49:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284787 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D909C2D0A3 for ; Fri, 9 Oct 2020 20:06:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 00D4F20732 for ; Fri, 9 Oct 2020 20:06:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390465AbgJIUFf (ORCPT ); Fri, 9 Oct 2020 16:05:35 -0400 Received: from mga01.intel.com ([192.55.52.88]:3555 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389053AbgJITwO (ORCPT ); Fri, 9 Oct 2020 15:52:14 -0400 IronPort-SDR: ZY1DEGETObfXdZFnoxsbKB7fzbzbRt8xd8bKb+YrYXtkHifcbpn+XQoZG1l3r2KlCKayB9JESb /9AfAPRZl62Q== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976216" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="182976216" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:01 -0700 IronPort-SDR: tVhE30/49ehqkoc80ZCcDXB4dMTZze5YTnWPIjaBRWP4P6o/AEUDuRNENWQluyt3yOqCx99fob q9C+vTCoMURQ== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="519846944" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:01 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Trond Myklebust , Anna Schumaker , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, 
kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 21/58] fs/nfs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:56 -0700 Message-Id: <20201009195033.3208459-22-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Trond Myklebust Cc: Anna Schumaker Signed-off-by: Ira Weiny --- fs/nfs/dir.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c index cb52db9a0cfb..fee321acccb4 100644 --- a/fs/nfs/dir.c +++ b/fs/nfs/dir.c @@ -213,7 +213,7 @@ int nfs_readdir_make_qstr(struct qstr *string, const char *name, unsigned int le static int nfs_readdir_add_to_array(struct nfs_entry *entry, struct page *page) { - struct nfs_cache_array *array = kmap(page); + struct nfs_cache_array *array = kmap_thread(page); struct nfs_cache_array_entry *cache_entry; int ret; @@ -235,7 +235,7 @@ int nfs_readdir_add_to_array(struct nfs_entry *entry, struct page *page) if (entry->eof != 0) array->eof_index = array->size; out: - kunmap(page); + kunmap_thread(page); return ret; } @@ -347,7 +347,7 @@ int nfs_readdir_search_array(nfs_readdir_descriptor_t *desc) struct nfs_cache_array *array; int status; - array = kmap(desc->page); + array = kmap_thread(desc->page); if (*desc->dir_cookie == 0) status = nfs_readdir_search_for_pos(array, desc); @@ -359,7 +359,7 @@ int nfs_readdir_search_array(nfs_readdir_descriptor_t *desc) desc->current_index += array->size; desc->page_index++; } - kunmap(desc->page); + kunmap_thread(desc->page); return status; } @@ -602,10 +602,10 @@ int nfs_readdir_page_filler(nfs_readdir_descriptor_t *desc, struct nfs_entry *en out_nopages: if (count == 0 || (status == -EBADCOOKIE && entry->eof != 0)) { - array = kmap(page); + array = kmap_thread(page); array->eof_index = array->size; status = 0; - kunmap(page); + kunmap_thread(page); } put_page(scratch); @@ -669,7 +669,7 @@ int nfs_readdir_xdr_to_array(nfs_readdir_descriptor_t *desc, struct page *page, goto out; } - array = kmap(page); + array = kmap_thread(page); status = nfs_readdir_alloc_pages(pages, array_size); if (status < 0) @@ -691,7 +691,7 @@ int 
nfs_readdir_xdr_to_array(nfs_readdir_descriptor_t *desc, struct page *page, nfs_readdir_free_pages(pages, array_size); out_release_array: - kunmap(page); + kunmap_thread(page); nfs4_label_free(entry.label); out: nfs_free_fattr(entry.fattr); @@ -803,7 +803,7 @@ int nfs_do_filldir(nfs_readdir_descriptor_t *desc) struct nfs_cache_array *array = NULL; struct nfs_open_dir_context *ctx = file->private_data; - array = kmap(desc->page); + array = kmap_thread(desc->page); for (i = desc->cache_entry_index; i < array->size; i++) { struct nfs_cache_array_entry *ent; @@ -827,7 +827,7 @@ int nfs_do_filldir(nfs_readdir_descriptor_t *desc) if (array->eof_index >= 0) desc->eof = true; - kunmap(desc->page); + kunmap_thread(desc->page); dfprintk(DIRCACHE, "NFS: nfs_do_filldir() filling ended @ cookie %Lu; returning = %d\n", (unsigned long long)*desc->dir_cookie, res); return res; From patchwork Fri Oct 9 19:49:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284786 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ED989C43467 for ; Fri, 9 Oct 2020 20:06:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A898322284 for ; Fri, 9 Oct 2020 20:06:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391178AbgJIUGW (ORCPT ); Fri, 9 Oct 2020 16:06:22 -0400 Received: from mga03.intel.com ([134.134.136.65]:26030 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387978AbgJITwH (ORCPT ); Fri, 9 Oct 2020 15:52:07 -0400 IronPort-SDR: OTj9s3cvBtRyPQTZAybKfscTVxPDK48TJR7rMLCvUFwCTYBh25MUK6MTvmozzMmi3IFQlUR46z sA4Nku3xxaJQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165592224" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="165592224" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:05 -0700 IronPort-SDR: UirI0CQIu4U61+QlSzJhUSBnMwMbOjm+hcyTouEwvOwx6I0trp+50TKD6KEUa5cRPFVp+hPY69 eHlTO5RS+u4A== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="343972211" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:04 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Jaegeuk Kim , Chao Yu , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, 
linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:57 -0700 Message-Id: <20201009195033.3208459-23-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Jaegeuk Kim Cc: Chao Yu Signed-off-by: Ira Weiny --- fs/f2fs/f2fs.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index d9e52a7f3702..ff72a45a577e 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -2410,12 +2410,12 @@ static inline struct page *f2fs_pagecache_get_page( static inline void f2fs_copy_page(struct page *src, struct page *dst) { - char *src_kaddr = kmap(src); - char *dst_kaddr = kmap(dst); + char *src_kaddr = kmap_thread(src); + char *dst_kaddr = kmap_thread(dst); memcpy(dst_kaddr, src_kaddr, PAGE_SIZE); - kunmap(dst); - kunmap(src); + kunmap_thread(dst); + kunmap_thread(src); } static inline void f2fs_put_page(struct page *page, int unlock) From patchwork Fri Oct 9 19:50:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284788 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D38CC433E7 for ; Fri, 9 Oct 2020 20:04:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5A33B20732 for ; Fri, 9 Oct 2020 20:04:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390605AbgJIUER (ORCPT ); Fri, 9 Oct 2020 16:04:17 -0400 Received: from mga02.intel.com ([134.134.136.20]:57702 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389234AbgJITwY (ORCPT ); Fri, 9 Oct 2020 15:52:24 -0400 IronPort-SDR: Z6f1VsV8EmW1VaVM5g+3erhSBYFfszK0u5IKA3dqifKhpwBC6O2L/QLn5Bh7tXS7Tj32Kfq/aO 8Ia6OM4tj/Gg== X-IronPort-AV: 
E=McAfee;i="6000,8403,9769"; a="152450970" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152450970" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:22 -0700 IronPort-SDR: 66NbHcTTg+QzLX/XjjZVbeOCocpzHQlXc/1uyGjsCwgKA6U0t9qDgsp0fGjjJhdH2o4WKM8nKt 1m4fFh6MimYQ== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="354959189" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:21 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Damien Le Moal , Naohiro Aota , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 26/58] fs/zonefs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:01 -0700 Message-Id: <20201009195033.3208459-27-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
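For reference, the converted super block read roughly looks like this (sketch only; condensed from zonefs_read_super() in the diff below, with struct zonefs_super naming the on-disk super block type):

	/*
	 * Sketch: the super block page is examined only by the thread
	 * performing the mount, so a thread-local mapping is sufficient.
	 */
	struct zonefs_super *super = kmap_thread(page);
	int ret = -EINVAL;

	if (le32_to_cpu(super->s_magic) == ZONEFS_MAGIC)
		ret = 0;	/* ... further validation of 'super' ... */

	kunmap_thread(page);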
Cc: Damien Le Moal Cc: Naohiro Aota Signed-off-by: Ira Weiny --- fs/zonefs/super.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c index 8ec7c8f109d7..2fd6c86beee1 100644 --- a/fs/zonefs/super.c +++ b/fs/zonefs/super.c @@ -1297,7 +1297,7 @@ static int zonefs_read_super(struct super_block *sb) if (ret) goto free_page; - super = kmap(page); + super = kmap_thread(page); ret = -EINVAL; if (le32_to_cpu(super->s_magic) != ZONEFS_MAGIC) @@ -1349,7 +1349,7 @@ static int zonefs_read_super(struct super_block *sb) ret = 0; unmap: - kunmap(page); + kunmap_thread(page); free_page: __free_page(page); From patchwork Fri Oct 9 19:50:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284789 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC368C2D0A1 for ; Fri, 9 Oct 2020 20:04:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7C68C20732 for ; Fri, 9 Oct 2020 20:04:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390558AbgJIUDg (ORCPT ); Fri, 9 Oct 2020 16:03:36 -0400 Received: from mga11.intel.com ([192.55.52.93]:40561 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2390977AbgJITwj (ORCPT ); Fri, 9 Oct 2020 15:52:39 -0400 IronPort-SDR: Exjvp/k/vju92C/1HmDJJ3INkGiq2BDsyk1PJOB/fYIXN4StFZXIuvzbkTFTF8uhXPVM7irVri H8miumYbiukQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068015" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162068015" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:38 -0700 IronPort-SDR: y48WLHg756NhaQ3a3jufh8VKJ1E37wUwe8fJ/HzDHZ6hDk/W8p1+XzOieTDyZGUDdKxAAVZp75 YdcYJzI+gesw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="349957490" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:36 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 30/58] fs/romfs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:05 -0700 Message-Id: <20201009195033.3208459-31-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Signed-off-by: Ira Weiny --- fs/romfs/super.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/romfs/super.c b/fs/romfs/super.c index e582d001f792..9050074c6755 100644 --- a/fs/romfs/super.c +++ b/fs/romfs/super.c @@ -107,7 +107,7 @@ static int romfs_readpage(struct file *file, struct page *page) void *buf; int ret; - buf = kmap(page); + buf = kmap_thread(page); if (!buf) return -ENOMEM; @@ -136,7 +136,7 @@ static int romfs_readpage(struct file *file, struct page *page) SetPageUptodate(page); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); unlock_page(page); return ret; } From patchwork Fri Oct 9 19:50:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284790 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6F498C43457 for ; Fri, 9 Oct 2020 20:03:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2192420732 for ; Fri, 9 Oct 2020 20:03:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390251AbgJIUDT (ORCPT ); Fri, 9 Oct 2020 16:03:19 -0400 Received: from mga17.intel.com ([192.55.52.151]:34046 "EHLO mga17.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389118AbgJITwr (ORCPT ); Fri, 9 Oct 2020 15:52:47 -0400 IronPort-SDR: HCJqdmCgpU6pXG1LsAwcTZbp9UGZICOvUpbuOIRNLSKkgIBl4bErNnLIrbSn24FVu1Q3zQC/2Q 2KazPDmPpjUQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="145397499" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="145397499" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:45 -0700 IronPort-SDR: 
A7QVfEYpWd+zukvvJSzSDlmpc6UtcI1VTl49mnWle7o9+shZWHPb4ayBrZUheEhKcyM4kHv/QX ai2WmfcpOoFA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="389237190" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:43 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Jeff Dike , Richard Weinberger , Anton Ivanov , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 32/58] fs/hostfs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:07 -0700 Message-Id: <20201009195033.3208459-33-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
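Roughly, the readpage path after this change follows the sketch below (simplified from hostfs_readpage() in the diff; error handling and page flag updates are omitted):

	/*
	 * Sketch: the page is filled from the host file by this thread
	 * only, so a thread-local mapping avoids any global PKRS update.
	 */
	char *buffer = kmap_thread(page);
	int bytes_read = read_file(FILE_HOSTFS_I(file)->fd, &start,
				   buffer, PAGE_SIZE);

	/* ... check bytes_read and zero-fill the tail of the page ... */

	flush_dcache_page(page);
	kunmap_thread(page);
	unlock_page(page);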
Cc: Jeff Dike Cc: Richard Weinberger Cc: Anton Ivanov Signed-off-by: Ira Weiny --- fs/hostfs/hostfs_kern.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c index c070c0d8e3e9..608efd0f83cb 100644 --- a/fs/hostfs/hostfs_kern.c +++ b/fs/hostfs/hostfs_kern.c @@ -409,7 +409,7 @@ static int hostfs_writepage(struct page *page, struct writeback_control *wbc) if (page->index >= end_index) count = inode->i_size & (PAGE_SIZE-1); - buffer = kmap(page); + buffer = kmap_thread(page); err = write_file(HOSTFS_I(inode)->fd, &base, buffer, count); if (err != count) { @@ -425,7 +425,7 @@ static int hostfs_writepage(struct page *page, struct writeback_control *wbc) err = 0; out: - kunmap(page); + kunmap_thread(page); unlock_page(page); return err; @@ -437,7 +437,7 @@ static int hostfs_readpage(struct file *file, struct page *page) loff_t start = page_offset(page); int bytes_read, ret = 0; - buffer = kmap(page); + buffer = kmap_thread(page); bytes_read = read_file(FILE_HOSTFS_I(file)->fd, &start, buffer, PAGE_SIZE); if (bytes_read < 0) { @@ -454,7 +454,7 @@ static int hostfs_readpage(struct file *file, struct page *page) out: flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); unlock_page(page); return ret; } @@ -480,9 +480,9 @@ static int hostfs_write_end(struct file *file, struct address_space *mapping, unsigned from = pos & (PAGE_SIZE - 1); int err; - buffer = kmap(page); + buffer = kmap_thread(page); err = write_file(FILE_HOSTFS_I(file)->fd, &pos, buffer + from, copied); - kunmap(page); + kunmap_thread(page); if (!PageUptodate(page) && err == PAGE_SIZE) SetPageUptodate(page); From patchwork Fri Oct 9 19:50:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284792 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE3FEC9DCBB for ; Fri, 9 Oct 2020 20:01:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5D421222C8 for ; Fri, 9 Oct 2020 20:01:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388775AbgJIUBs (ORCPT ); Fri, 9 Oct 2020 16:01:48 -0400 Received: from mga11.intel.com ([192.55.52.93]:40581 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388824AbgJITw6 (ORCPT ); Fri, 9 Oct 2020 15:52:58 -0400 IronPort-SDR: tGU1WWM0mFtP4sQxNjAr1vD1TEE/S82Fwey3kuQK7biXKtIM+OTpRGX42MrzZrbh17sxY7EBCK BJwnIk9ZOKlQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068045" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162068045" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:55 -0700 IronPort-SDR: TN82OMSOJnR+1dzdu/wtYu+HarPMiqU5tCJLq9k+HDWuiKzSJliaVoZHfBedXzzykRk72guGi9 gJFQG6Pg4N1Q== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="529053748" 
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:54 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Alexander Viro , Jens Axboe , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 35/58] fs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:10 -0700 Message-Id: <20201009195033.3208459-36-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
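As an illustration (a hypothetical helper, not one of the hunks below), every conversion in this patch reduces to moving data between a mapped page and user space entirely within the calling thread:

static int copy_mapped_page_to_user(void __user *to, struct page *page,
				    unsigned int offset, unsigned int len)
{
	void *kaddr = kmap_thread(page);	/* thread-local, no global PKRS update */
	int ret = 0;

	if (copy_to_user(to, kaddr + offset, len))
		ret = -EFAULT;
	kunmap_thread(page);			/* mapping never leaves this thread */

	return ret;
}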
Cc: Alexander Viro Cc: Jens Axboe Signed-off-by: Ira Weiny --- fs/aio.c | 4 ++-- fs/binfmt_elf.c | 4 ++-- fs/binfmt_elf_fdpic.c | 4 ++-- fs/exec.c | 10 +++++----- fs/io_uring.c | 4 ++-- fs/splice.c | 4 ++-- 6 files changed, 15 insertions(+), 15 deletions(-) diff --git a/fs/aio.c b/fs/aio.c index d5ec30385566..27f95996d25f 100644 --- a/fs/aio.c +++ b/fs/aio.c @@ -1223,10 +1223,10 @@ static long aio_read_events_ring(struct kioctx *ctx, avail = min(avail, nr - ret); avail = min_t(long, avail, AIO_EVENTS_PER_PAGE - pos); - ev = kmap(page); + ev = kmap_thread(page); copy_ret = copy_to_user(event + ret, ev + pos, sizeof(*ev) * avail); - kunmap(page); + kunmap_thread(page); if (unlikely(copy_ret)) { ret = -EFAULT; diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c index 13d053982dd7..1a332ef1ae03 100644 --- a/fs/binfmt_elf.c +++ b/fs/binfmt_elf.c @@ -2430,9 +2430,9 @@ static int elf_core_dump(struct coredump_params *cprm) page = get_dump_page(addr); if (page) { - void *kaddr = kmap(page); + void *kaddr = kmap_thread(page); stop = !dump_emit(cprm, kaddr, PAGE_SIZE); - kunmap(page); + kunmap_thread(page); put_page(page); } else stop = !dump_skip(cprm, PAGE_SIZE); diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c index 50f845702b92..8fbe188e0fdd 100644 --- a/fs/binfmt_elf_fdpic.c +++ b/fs/binfmt_elf_fdpic.c @@ -1542,9 +1542,9 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm) bool res; struct page *page = get_dump_page(addr); if (page) { - void *kaddr = kmap(page); + void *kaddr = kmap_thread(page); res = dump_emit(cprm, kaddr, PAGE_SIZE); - kunmap(page); + kunmap_thread(page); put_page(page); } else { res = dump_skip(cprm, PAGE_SIZE); diff --git a/fs/exec.c b/fs/exec.c index a91003e28eaa..3948b8511e3a 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -575,11 +575,11 @@ static int copy_strings(int argc, struct user_arg_ptr argv, if (kmapped_page) { flush_kernel_dcache_page(kmapped_page); - kunmap(kmapped_page); + kunmap_thread(kmapped_page); put_arg_page(kmapped_page); } kmapped_page = page; - kaddr = kmap(kmapped_page); + kaddr = kmap_thread(kmapped_page); kpos = pos & PAGE_MASK; flush_arg_page(bprm, kpos, kmapped_page); } @@ -593,7 +593,7 @@ static int copy_strings(int argc, struct user_arg_ptr argv, out: if (kmapped_page) { flush_kernel_dcache_page(kmapped_page); - kunmap(kmapped_page); + kunmap_thread(kmapped_page); put_arg_page(kmapped_page); } return ret; @@ -871,11 +871,11 @@ int transfer_args_to_stack(struct linux_binprm *bprm, for (index = MAX_ARG_PAGES - 1; index >= stop; index--) { unsigned int offset = index == stop ? 
bprm->p & ~PAGE_MASK : 0; - char *src = kmap(bprm->page[index]) + offset; + char *src = kmap_thread(bprm->page[index]) + offset; sp -= PAGE_SIZE - offset; if (copy_to_user((void *) sp, src, PAGE_SIZE - offset) != 0) ret = -EFAULT; - kunmap(bprm->page[index]); + kunmap_thread(bprm->page[index]); if (ret) goto out; } diff --git a/fs/io_uring.c b/fs/io_uring.c index aae0ef2ec34d..f59bb079822d 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -2903,7 +2903,7 @@ static ssize_t loop_rw_iter(int rw, struct file *file, struct kiocb *kiocb, iovec = iov_iter_iovec(iter); } else { /* fixed buffers import bvec */ - iovec.iov_base = kmap(iter->bvec->bv_page) + iovec.iov_base = kmap_thread(iter->bvec->bv_page) + iter->iov_offset; iovec.iov_len = min(iter->count, iter->bvec->bv_len - iter->iov_offset); @@ -2918,7 +2918,7 @@ static ssize_t loop_rw_iter(int rw, struct file *file, struct kiocb *kiocb, } if (iov_iter_is_bvec(iter)) - kunmap(iter->bvec->bv_page); + kunmap_thread(iter->bvec->bv_page); if (nr < 0) { if (!ret) diff --git a/fs/splice.c b/fs/splice.c index ce75aec52274..190c4d218c30 100644 --- a/fs/splice.c +++ b/fs/splice.c @@ -815,9 +815,9 @@ static int write_pipe_buf(struct pipe_inode_info *pipe, struct pipe_buffer *buf, void *data; loff_t tmp = sd->pos; - data = kmap(buf->page); + data = kmap_thread(buf->page); ret = __kernel_write(sd->u.file, data + buf->offset, sd->len, &tmp); - kunmap(buf->page); + kunmap_thread(buf->page); return ret; } From patchwork Fri Oct 9 19:50:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284793 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3298CC9DCB0 for ; Fri, 9 Oct 2020 20:01:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E5D4A222C8 for ; Fri, 9 Oct 2020 20:01:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388515AbgJIUBJ (ORCPT ); Fri, 9 Oct 2020 16:01:09 -0400 Received: from mga18.intel.com ([134.134.136.126]:42454 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389721AbgJITxQ (ORCPT ); Fri, 9 Oct 2020 15:53:16 -0400 IronPort-SDR: /0PxmQQKh9INXhqYDaA/qd0UIC/Ocy9AjDBS2exom/XczYR4EeFFQ0eDp921T8r9Qs7rCQoehA KURfPAOckKjQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="153363728" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="153363728" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:59 -0700 IronPort-SDR: Wf0Bi1Gs1P3ZgILN9Dk8+ODZhbqXc7J3tJZabe+2AUx5fg/nQYQB2E9E/9NHNQgBhyfVAmIu+q D7ag8zQ3enrg== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="462300964" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:58 -0700 From: ira.weiny@intel.com To: Andrew Morton , 
Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Jan Kara , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 36/58] fs/ext2: Use ext2_put_page Date: Fri, 9 Oct 2020 12:50:11 -0700 Message-Id: <20201009195033.3208459-37-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny There are 3 places in namei.c where the equivalent of ext2_put_page() is open coded. We want to use k[un]map_thread() instead of k[un]map() in ext2_[get|put]_page(). Move ext2_put_page() to ext2.h and use it in namei.c in prep for converting the k[un]map() code. Cc: Jan Kara Signed-off-by: Ira Weiny --- fs/ext2/dir.c | 6 ------ fs/ext2/ext2.h | 8 ++++++++ fs/ext2/namei.c | 15 +++++---------- 3 files changed, 13 insertions(+), 16 deletions(-) diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c index 70355ab6740e..f3194bf20733 100644 --- a/fs/ext2/dir.c +++ b/fs/ext2/dir.c @@ -66,12 +66,6 @@ static inline unsigned ext2_chunk_size(struct inode *inode) return inode->i_sb->s_blocksize; } -static inline void ext2_put_page(struct page *page) -{ - kunmap(page); - put_page(page); -} - /* * Return the offset into page `page_nr' of the last valid * byte in that page, plus one. diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h index 5136b7289e8d..021ec8b42ac3 100644 --- a/fs/ext2/ext2.h +++ b/fs/ext2/ext2.h @@ -16,6 +16,8 @@ #include #include #include +#include +#include /* XXX Here for now... 
not interested in restructing headers JUST now */ @@ -745,6 +747,12 @@ extern int ext2_delete_entry (struct ext2_dir_entry_2 *, struct page *); extern int ext2_empty_dir (struct inode *); extern struct ext2_dir_entry_2 * ext2_dotdot (struct inode *, struct page **); extern void ext2_set_link(struct inode *, struct ext2_dir_entry_2 *, struct page *, struct inode *, int); +static inline void ext2_put_page(struct page *page) +{ + kunmap(page); + put_page(page); +} + /* ialloc.c */ extern struct inode * ext2_new_inode (struct inode *, umode_t, const struct qstr *); diff --git a/fs/ext2/namei.c b/fs/ext2/namei.c index 5bf2c145643b..ea980f1e2e99 100644 --- a/fs/ext2/namei.c +++ b/fs/ext2/namei.c @@ -389,23 +389,18 @@ static int ext2_rename (struct inode * old_dir, struct dentry * old_dentry, if (dir_de) { if (old_dir != new_dir) ext2_set_link(old_inode, dir_de, dir_page, new_dir, 0); - else { - kunmap(dir_page); - put_page(dir_page); - } + else + ext2_put_page(dir_page); inode_dec_link_count(old_dir); } return 0; out_dir: - if (dir_de) { - kunmap(dir_page); - put_page(dir_page); - } + if (dir_de) + ext2_put_page(dir_page); out_old: - kunmap(old_page); - put_page(old_page); + ext2_put_page(old_page); out: return err; } From patchwork Fri Oct 9 19:50:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284795 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30C7DC8300B for ; Fri, 9 Oct 2020 20:00:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D3BE222267 for ; Fri, 9 Oct 2020 20:00:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387878AbgJIT7X (ORCPT ); Fri, 9 Oct 2020 15:59:23 -0400 Received: from mga01.intel.com ([192.55.52.88]:3593 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391078AbgJITxW (ORCPT ); Fri, 9 Oct 2020 15:53:22 -0400 IronPort-SDR: TuW6f/eUiPOWBVUWPJSST+0egNgNIn2HyrjOLitLNnO3XIXYcZZ7k6u/vtD7EeyPkkCEzhJArJ zTM7g1ijBJ7w== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976382" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="182976382" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:02 -0700 IronPort-SDR: bH2kRxXxeY8kuFBN5R8RneSpQbEQLW5eawptefbcIpm8xkU067aWcbaReXxx6/6TDXgDaxiIQ5 8lenxDucas2g== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="519847131" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:02 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Jan Kara , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, 
linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 37/58] fs/ext2: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:12 -0700 Message-Id: <20201009195033.3208459-38-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS update use the new kmap_thread() call instead. 
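For clarity, a caller-side sketch (hypothetical function, assuming the existing ext2_get_page(dir, n, quiet) signature): every user of ext2_get_page() must release the page through ext2_put_page() from the same thread, because the mapping it creates is now thread local:

static int ext2_scan_dir_page(struct inode *dir, unsigned long n)
{
	struct page *page = ext2_get_page(dir, n, 0);	/* maps via kmap_thread() */

	if (IS_ERR(page))
		return PTR_ERR(page);

	/* ... walk the directory entries on this page, in this thread only ... */

	ext2_put_page(page);				/* unmaps via kunmap_thread() */
	return 0;
}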
Cc: Jan Kara Signed-off-by: Ira Weiny --- fs/ext2/dir.c | 2 +- fs/ext2/ext2.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c index f3194bf20733..abe97ba458c8 100644 --- a/fs/ext2/dir.c +++ b/fs/ext2/dir.c @@ -196,7 +196,7 @@ static struct page * ext2_get_page(struct inode *dir, unsigned long n, struct address_space *mapping = dir->i_mapping; struct page *page = read_mapping_page(mapping, n, NULL); if (!IS_ERR(page)) { - kmap(page); + kmap_thread(page); if (unlikely(!PageChecked(page))) { if (PageError(page) || !ext2_check_page(page, quiet)) goto fail; diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h index 021ec8b42ac3..9bcb6714c255 100644 --- a/fs/ext2/ext2.h +++ b/fs/ext2/ext2.h @@ -749,7 +749,7 @@ extern struct ext2_dir_entry_2 * ext2_dotdot (struct inode *, struct page **); extern void ext2_set_link(struct inode *, struct ext2_dir_entry_2 *, struct page *, struct inode *, int); static inline void ext2_put_page(struct page *page) { - kunmap(page); + kunmap_thread(page); put_page(page); } From patchwork Fri Oct 9 19:50:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284794 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 48694C388D1 for ; Fri, 9 Oct 2020 20:00:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E950422267 for ; Fri, 9 Oct 2020 20:00:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388105AbgJIUAL (ORCPT ); Fri, 9 Oct 2020 16:00:11 -0400 Received: from mga11.intel.com ([192.55.52.93]:40547 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388887AbgJITxV (ORCPT ); Fri, 9 Oct 2020 15:53:21 -0400 IronPort-SDR: zq5dvuFd6jXpbgXHXDd3kanr7kGVYAspkSVx4KqGtzBddQtGcPnRKRjJwx5QrQv4TFmaka5zeS Gu84SEdAGvPQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068063" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162068063" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:06 -0700 IronPort-SDR: +9tLMQNrOc9ZmtQe+NKSSXZyhjRooFfxfongQijBj4blQjg4c6+fJ/ZHkl3bUeAx7HZ0qDTuwd uW08H1SVCXkg== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="343972363" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:05 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, 
bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 38/58] fs/isofs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:13 -0700 Message-Id: <20201009195033.3208459-39-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Signed-off-by: Ira Weiny --- fs/isofs/compress.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/isofs/compress.c b/fs/isofs/compress.c index bc12ac7e2312..ddd3fd99d2e1 100644 --- a/fs/isofs/compress.c +++ b/fs/isofs/compress.c @@ -344,7 +344,7 @@ static int zisofs_readpage(struct file *file, struct page *page) pages[i] = grab_cache_page_nowait(mapping, index); if (pages[i]) { ClearPageError(pages[i]); - kmap(pages[i]); + kmap_thread(pages[i]); } } @@ -356,7 +356,7 @@ static int zisofs_readpage(struct file *file, struct page *page) flush_dcache_page(pages[i]); if (i == full_page && err) SetPageError(pages[i]); - kunmap(pages[i]); + kunmap_thread(pages[i]); unlock_page(pages[i]); if (i != full_page) put_page(pages[i]); From patchwork Fri Oct 9 19:50:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284804 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46C9CC8A84A for ; Fri, 9 Oct 2020 19:53:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1B64722510 for ; Fri, 9 Oct 2020 19:53:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389719AbgJITxZ (ORCPT ); Fri, 9 Oct 2020 15:53:25 -0400 Received: from mga14.intel.com ([192.55.52.115]:15570 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391073AbgJITxW (ORCPT ); Fri, 9 
Oct 2020 15:53:22 -0400 IronPort-SDR: 7KkidwkDSmKI3kwWlMria/Sju91b/R21i0w1kLrwQYSn3K4W1EsDTLeZ7OuYero5ReiYzcFEEs JOGLzwSLgu9Q== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="164744016" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="164744016" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:09 -0700 IronPort-SDR: FciD93oXa6TN5PQF74JzeuCyRYkWBlT2lC/7ncntBELjkiuqMo1u2JSAhMViBtfJLu9YdFEYCB Wo/5PMHidIaw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="317147397" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:08 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 39/58] fs/jffs2: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:14 -0700 Message-Id: <20201009195033.3208459-40-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Signed-off-by: Ira Weiny --- fs/jffs2/file.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c index 3e6d54f9b011..14dd2b18cc16 100644 --- a/fs/jffs2/file.c +++ b/fs/jffs2/file.c @@ -287,13 +287,13 @@ static int jffs2_write_end(struct file *filp, struct address_space *mapping, /* In 2.4, it was already kmapped by generic_file_write(). Doesn't hurt to do it again. The alternative is ifdefs, which are ugly. 
*/ - kmap(pg); + kmap_thread(pg); ret = jffs2_write_inode_range(c, f, ri, page_address(pg) + aligned_start, (pg->index << PAGE_SHIFT) + aligned_start, end - aligned_start, &writtenlen); - kunmap(pg); + kunmap_thread(pg); if (ret) { /* There was an error writing. */ From patchwork Fri Oct 9 19:50:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284796 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A8DEC35269 for ; Fri, 9 Oct 2020 19:59:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 35E8D22403 for ; Fri, 9 Oct 2020 19:59:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388396AbgJIT7B (ORCPT ); Fri, 9 Oct 2020 15:59:01 -0400 Received: from mga11.intel.com ([192.55.52.93]:40581 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391084AbgJITxY (ORCPT ); Fri, 9 Oct 2020 15:53:24 -0400 IronPort-SDR: u2C4+hGb35+mtK+ndhIxjBMDcky7nNybvSNM6b4gqHKiqPmi0zpuRfougNCgV6H0aeq0l38fWM RPrL1Si+UaTw== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068110" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162068110" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:23 -0700 IronPort-SDR: 1WKybDcesSB1sLOn62ClPw5Oz5roBPmtIQsa1XcymP/Rm5On5MJ4hIzRHLHnwze0O/9dY0G7BT oI/HBaoknNhQ== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="355863255" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:21 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Ulf Hansson , Sascha Sommer , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, 
amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 43/58] drivers/mmc: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:18 -0700 Message-Id: <20201009195033.3208459-44-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Ulf Hansson Cc: Sascha Sommer Signed-off-by: Ira Weiny --- drivers/mmc/host/mmc_spi.c | 4 ++-- drivers/mmc/host/sdricoh_cs.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c index 18a850f37ddc..ab28e7103b8d 100644 --- a/drivers/mmc/host/mmc_spi.c +++ b/drivers/mmc/host/mmc_spi.c @@ -918,7 +918,7 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd, } /* allow pio too; we don't allow highmem */ - kmap_addr = kmap(sg_page(sg)); + kmap_addr = kmap_thread(sg_page(sg)); if (direction == DMA_TO_DEVICE) t->tx_buf = kmap_addr + sg->offset; else @@ -950,7 +950,7 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd, /* discard mappings */ if (direction == DMA_FROM_DEVICE) flush_kernel_dcache_page(sg_page(sg)); - kunmap(sg_page(sg)); + kunmap_thread(sg_page(sg)); if (dma_dev) dma_unmap_page(dma_dev, dma_addr, PAGE_SIZE, dir); diff --git a/drivers/mmc/host/sdricoh_cs.c b/drivers/mmc/host/sdricoh_cs.c index 76a8cd3a186f..7806bc69c4f1 100644 --- a/drivers/mmc/host/sdricoh_cs.c +++ b/drivers/mmc/host/sdricoh_cs.c @@ -312,11 +312,11 @@ static void sdricoh_request(struct mmc_host *mmc, struct mmc_request *mrq) int result; page = sg_page(data->sg); - buf = kmap(page) + data->sg->offset + (len * i); + buf = kmap_thread(page) + data->sg->offset + (len * i); result = sdricoh_blockio(host, data->flags & MMC_DATA_READ, buf, len); - kunmap(page); + kunmap_thread(page); flush_dcache_page(page); if (result) { dev_err(dev, "sdricoh_request: cmd %i " From patchwork Fri Oct 9 19:50:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284797 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9835C9DC92 for ; Fri, 9 Oct 2020 19:58:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8FFD222267 for ; Fri, 9 Oct 2020 19:58:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388497AbgJIT6H (ORCPT ); Fri, 9 Oct 2020 15:58:07 -0400 Received: from mga05.intel.com ([192.55.52.43]:56506 "EHLO mga05.intel.com" rhost-flags-OK-OK-OK-OK) by 
vger.kernel.org with ESMTP id S2391106AbgJITxa (ORCPT ); Fri, 9 Oct 2020 15:53:30 -0400 IronPort-SDR: m8JsRDkVm7MuHdZA6Q/Wi+HEQskWKUPBnn02eowZAiho4j2b9XK2EVfHcV4ZPwqltdHnZP0RWM 2jCA1vqgkWjg== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="250226295" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="250226295" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:28 -0700 IronPort-SDR: BOVOU8SHaQbMr+qBBSrCzuqbJHuQd4ndzwss05aaSC7o5I30ANXdekITPCnqmJ5xtHPcWbAp0e 1H/r6jgeLwtA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="298419945" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:28 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Ard Biesheuvel , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 45/58] drivers/firmware: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:20 -0700 Message-Id: <20201009195033.3208459-46-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
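For illustration, a condensed and hypothetical form of the capsule-loader hunk below (not the actual code): the user-supplied capsule data is copied into the page under a thread-local mapping, and both the success and the error path unmap before returning:

static ssize_t capsule_fill_page(struct page *page, unsigned int offset,
				 const char __user *buff, size_t count)
{
	char *kbuff = kmap_thread(page);	/* was kmap(page) */
	ssize_t ret = count;

	if (copy_from_user(kbuff + offset, buff, count))
		ret = -EFAULT;

	kunmap_thread(page);			/* was kunmap(page) */
	return ret;
}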
Cc: Ard Biesheuvel Signed-off-by: Ira Weiny --- drivers/firmware/efi/capsule-loader.c | 6 +++--- drivers/firmware/efi/capsule.c | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/firmware/efi/capsule-loader.c b/drivers/firmware/efi/capsule-loader.c index 4dde8edd53b6..aa2e0b5940fd 100644 --- a/drivers/firmware/efi/capsule-loader.c +++ b/drivers/firmware/efi/capsule-loader.c @@ -197,7 +197,7 @@ static ssize_t efi_capsule_write(struct file *file, const char __user *buff, page = cap_info->pages[cap_info->index - 1]; } - kbuff = kmap(page); + kbuff = kmap_thread(page); kbuff += PAGE_SIZE - cap_info->page_bytes_remain; /* Copy capsule binary data from user space to kernel space buffer */ @@ -217,7 +217,7 @@ static ssize_t efi_capsule_write(struct file *file, const char __user *buff, } cap_info->count += write_byte; - kunmap(page); + kunmap_thread(page); /* Submit the full binary to efi_capsule_update() API */ if (cap_info->header.headersize > 0 && @@ -236,7 +236,7 @@ static ssize_t efi_capsule_write(struct file *file, const char __user *buff, return write_byte; fail_unmap: - kunmap(page); + kunmap_thread(page); failed: efi_free_all_buff_pages(cap_info); return ret; diff --git a/drivers/firmware/efi/capsule.c b/drivers/firmware/efi/capsule.c index 598b7800d14e..edb7797b0e4f 100644 --- a/drivers/firmware/efi/capsule.c +++ b/drivers/firmware/efi/capsule.c @@ -244,7 +244,7 @@ int efi_capsule_update(efi_capsule_header_t *capsule, phys_addr_t *pages) for (i = 0; i < sg_count; i++) { efi_capsule_block_desc_t *sglist; - sglist = kmap(sg_pages[i]); + sglist = kmap_thread(sg_pages[i]); for (j = 0; j < SGLIST_PER_PAGE && count > 0; j++) { u64 sz = min_t(u64, imagesize, @@ -265,7 +265,7 @@ int efi_capsule_update(efi_capsule_header_t *capsule, phys_addr_t *pages) else sglist[j].data = page_to_phys(sg_pages[i + 1]); - kunmap(sg_pages[i]); + kunmap_thread(sg_pages[i]); } mutex_lock(&capsule_mutex); From patchwork Fri Oct 9 19:50:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284798 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96819C388F2 for ; Fri, 9 Oct 2020 19:57:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 732C420691 for ; Fri, 9 Oct 2020 19:57:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389646AbgJIT50 (ORCPT ); Fri, 9 Oct 2020 15:57:26 -0400 Received: from mga12.intel.com ([192.55.52.136]:29334 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391149AbgJITxg (ORCPT ); Fri, 9 Oct 2020 15:53:36 -0400 IronPort-SDR: pgLxEXDdwlbiZXzBeT4ttrQBELII+3B5k1Ohhm5Voa/aT/zXiJVCv3SzCNFZeRFoxNhz6/kSzp U8bGcVHkZDFg== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="144851045" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="144851045" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga008.jf.intel.com ([10.7.209.65]) by 
fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:35 -0700 IronPort-SDR: 1BDjhBWRiZHjmax0H5jHPL9Ee2R7VPzFxt4XNyUZ5d5LafdvLgy40B0wtQFVU3AX2TCQgDyRso HhmDRI33qIDA== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="345148602" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:34 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 47/58] drivers/mtd: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:22 -0700 Message-Id: <20201009195033.3208459-48-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
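As a sketch (hypothetical helper, not the patch itself), the mtd_blkdevs read path keeps a single thread-local mapping across the whole per-sector loop and drops it on both the error and the success path:

static blk_status_t mtd_read_sectors(struct mtd_blktrans_ops *tr,
				     struct mtd_blktrans_dev *dev,
				     struct request *req,
				     unsigned long block, unsigned int nsect)
{
	char *buf = kmap_thread(bio_page(req->bio)) + bio_offset(req->bio);
	blk_status_t ret = BLK_STS_OK;

	for (; nsect > 0; nsect--, block++, buf += tr->blksize) {
		if (tr->readsect(dev, block, buf)) {
			ret = BLK_STS_IOERR;
			break;
		}
	}

	kunmap_thread(bio_page(req->bio));	/* one unmap covers both exits */
	return ret;
}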
Cc: Miquel Raynal Cc: Richard Weinberger Cc: Vignesh Raghavendra Signed-off-by: Ira Weiny --- drivers/mtd/mtd_blkdevs.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c index 0c05f77f9b21..4b18998273fa 100644 --- a/drivers/mtd/mtd_blkdevs.c +++ b/drivers/mtd/mtd_blkdevs.c @@ -88,14 +88,14 @@ static blk_status_t do_blktrans_request(struct mtd_blktrans_ops *tr, return BLK_STS_IOERR; return BLK_STS_OK; case REQ_OP_READ: - buf = kmap(bio_page(req->bio)) + bio_offset(req->bio); + buf = kmap_thread(bio_page(req->bio)) + bio_offset(req->bio); for (; nsect > 0; nsect--, block++, buf += tr->blksize) { if (tr->readsect(dev, block, buf)) { - kunmap(bio_page(req->bio)); + kunmap_thread(bio_page(req->bio)); return BLK_STS_IOERR; } } - kunmap(bio_page(req->bio)); + kunmap_thread(bio_page(req->bio)); rq_flush_dcache_pages(req); return BLK_STS_OK; case REQ_OP_WRITE: @@ -103,14 +103,14 @@ static blk_status_t do_blktrans_request(struct mtd_blktrans_ops *tr, return BLK_STS_IOERR; rq_flush_dcache_pages(req); - buf = kmap(bio_page(req->bio)) + bio_offset(req->bio); + buf = kmap_thread(bio_page(req->bio)) + bio_offset(req->bio); for (; nsect > 0; nsect--, block++, buf += tr->blksize) { if (tr->writesect(dev, block, buf)) { - kunmap(bio_page(req->bio)); + kunmap_thread(bio_page(req->bio)); return BLK_STS_IOERR; } } - kunmap(bio_page(req->bio)); + kunmap_thread(bio_page(req->bio)); return BLK_STS_OK; default: return BLK_STS_IOERR; From patchwork Fri Oct 9 19:50:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284799 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 15FBEC83022 for ; Fri, 9 Oct 2020 19:57:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BE5C722B48 for ; Fri, 9 Oct 2020 19:57:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389238AbgJITxp (ORCPT ); Fri, 9 Oct 2020 15:53:45 -0400 Received: from mga11.intel.com ([192.55.52.93]:40655 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389472AbgJITxk (ORCPT ); Fri, 9 Oct 2020 15:53:40 -0400 IronPort-SDR: ALSFHBkR/52rttPjScavW0L6HaNv2mpe0XISWaZZJAY4d3ezNEk8iVejYdgcm7+PBdVjuPXNU0 7YZwbayMJGWA== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068148" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="162068148" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:38 -0700 IronPort-SDR: 77OyjRxsHRqLdlwGyIEmSTWShPVNVQULX7LnGcMeG3NkwcCC4UbesWRkZ9bTTqcjV6t6jmiGf4 lDgqS7BTpvKw== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="389237394" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:37 
-0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Coly Li , Kent Overstreet , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 48/58] drivers/md: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:23 -0700 Message-Id: <20201009195033.3208459-49-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
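For illustration (a hypothetical helper; the real hunk below touches bio_csum() and keeps bch_crc64_update()), checksumming a bio segment by segment only ever maps each page from the submitting thread, which is exactly the case kmap_thread() targets:

static u32 bio_crc32_thread(struct bio *bio, u32 seed)
{
	struct bio_vec bv;
	struct bvec_iter iter;
	u32 csum = seed;

	bio_for_each_segment(bv, bio, iter) {
		void *d = kmap_thread(bv.bv_page) + bv.bv_offset;

		csum = crc32(csum, d, bv.bv_len);
		kunmap_thread(bv.bv_page);	/* unmap before the next segment */
	}

	return csum;
}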
Cc: Coly Li (maintainer:BCACHE (BLOCK LAYER CACHE)) Cc: Kent Overstreet (maintainer:BCACHE (BLOCK LAYER CACHE)) Signed-off-by: Ira Weiny --- drivers/md/bcache/request.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c index c7cadaafa947..a4571f6d09dd 100644 --- a/drivers/md/bcache/request.c +++ b/drivers/md/bcache/request.c @@ -44,10 +44,10 @@ static void bio_csum(struct bio *bio, struct bkey *k) uint64_t csum = 0; bio_for_each_segment(bv, bio, iter) { - void *d = kmap(bv.bv_page) + bv.bv_offset; + void *d = kmap_thread(bv.bv_page) + bv.bv_offset; csum = bch_crc64_update(csum, d, bv.bv_len); - kunmap(bv.bv_page); + kunmap_thread(bv.bv_page); } k->ptr[KEY_PTRS(k)] = csum & (~0ULL >> 1); From patchwork Fri Oct 9 19:50:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284800 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EED6FC83BB4 for ; Fri, 9 Oct 2020 19:56:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BC8C9225A9 for ; Fri, 9 Oct 2020 19:56:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390193AbgJIT4b (ORCPT ); Fri, 9 Oct 2020 15:56:31 -0400 Received: from mga01.intel.com ([192.55.52.88]:3799 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391193AbgJITxu (ORCPT ); Fri, 9 Oct 2020 15:53:50 -0400 IronPort-SDR: MvNEL5UE1YZMf2FFryOugVO1MjMPvxhOr9+ZaSGMEz70R+AZvmLzfRjcHaH4uHG4JrQCtBf6H0 UOe6thpPlnjQ== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976499" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="182976499" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:48 -0700 IronPort-SDR: hSLATPILKZexzU9UbulfkIo/q3wPUuwdHpgGfd67f9RjOUrzIM+GeKhbZtX4gmiSR0zMVEA6Pm bSqmX2bz2skQ== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="529054109" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:46 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Eric Biederman , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, 
linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 51/58] kernel: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:26 -0700 Message-Id: <20201009195033.3208459-52-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny This kmap() call is localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Eric Biederman Signed-off-by: Ira Weiny --- kernel/kexec_core.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c index c19c0dad1ebe..272a9920c0d6 100644 --- a/kernel/kexec_core.c +++ b/kernel/kexec_core.c @@ -815,7 +815,7 @@ static int kimage_load_normal_segment(struct kimage *image, if (result < 0) goto out; - ptr = kmap(page); + ptr = kmap_thread(page); /* Start with a clear page */ clear_page(ptr); ptr += maddr & ~PAGE_MASK; @@ -828,7 +828,7 @@ static int kimage_load_normal_segment(struct kimage *image, memcpy(ptr, kbuf, uchunk); else result = copy_from_user(ptr, buf, uchunk); - kunmap(page); + kunmap_thread(page); if (result) { result = -EFAULT; goto out; @@ -879,7 +879,7 @@ static int kimage_load_crash_segment(struct kimage *image, goto out; } arch_kexec_post_alloc_pages(page_address(page), 1, 0); - ptr = kmap(page); + ptr = kmap_thread(page); ptr += maddr & ~PAGE_MASK; mchunk = min_t(size_t, mbytes, PAGE_SIZE - (maddr & ~PAGE_MASK)); @@ -895,7 +895,7 @@ static int kimage_load_crash_segment(struct kimage *image, else result = copy_from_user(ptr, buf, uchunk); kexec_flush_icache_page(page); - kunmap(page); + kunmap_thread(page); arch_kexec_pre_free_pages(page_address(page), 1); if (result) { result = -EFAULT; From patchwork Fri Oct 9 19:50:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 284801 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 47E2FC83BA4 for ; Fri, 9 Oct 2020 19:56:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 21D35225A9 for ; Fri, 9 Oct 2020 19:56:25 +0000 (UTC) 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390164AbgJITzk (ORCPT ); Fri, 9 Oct 2020 15:55:40 -0400 Received: from mga12.intel.com ([192.55.52.136]:29384 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389355AbgJITxz (ORCPT ); Fri, 9 Oct 2020 15:53:55 -0400 IronPort-SDR: bDmNjnNdMRZ+ioxLx0lwsy0xCkRGcoIkF1uO7/DGxT31PMyi/yHsDFfOYXBkHjpwCB8GF2sqCK rdqgPQHF8a+Q== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="144851091" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="144851091" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:54 -0700 IronPort-SDR: bZn0Zma13YkX7FJFkNsCYMjMNff2FSiB/MgoUwYT50qMOfu0yRYfZtOtUYx1uwuwPtN+tdFX+r hwly4wq6Ikyg== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="519847271" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:53 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Alexander Viro , =?utf-8?b?SsOpcsO0bWUgR2xp?= =?utf-8?q?sse?= , Martin KaFai Lau , Song Liu , Yonghong Song , Andrii Nakryiko , John Fastabend , KP Singh , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 53/58] lib: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:28 -0700 Message-Id: <20201009195033.3208459-54-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
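For illustration, a simplified and hypothetical sketch of the iov_iter fallback being converted (the real copy_page_to_iter_iovec() is more involved): the atomic mapping is tried first, and only when the copy faults does the code redo it under a mapping that may sleep, now the thread-local kmap_thread() rather than a global kmap():

static size_t copy_chunk_to_user(char __user *buf, struct page *page,
				 size_t offset, size_t copy)
{
	void *kaddr;
	size_t left;

	/* fast path: atomic mapping; the copy may fault and leave a remainder */
	kaddr = kmap_atomic(page);
	left = copyout(buf, kaddr + offset, copy);
	kunmap_atomic(kaddr);

	if (left) {
		/* too bad - revert to a non-atomic, thread-local mapping */
		kaddr = kmap_thread(page);	/* was kmap(page) */
		left = copyout(buf, kaddr + offset, copy);
		kunmap_thread(page);		/* was kunmap(page) */
	}

	return copy - left;
}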
Cc: Alexander Viro
Cc: "Jérôme Glisse"
Cc: Martin KaFai Lau
Cc: Song Liu
Cc: Yonghong Song
Cc: Andrii Nakryiko
Cc: John Fastabend
Cc: KP Singh
Signed-off-by: Ira Weiny
---
 lib/iov_iter.c | 12 ++++++------
 lib/test_bpf.c |  4 ++--
 lib/test_hmm.c |  8 ++++----
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 5e40786c8f12..1d47f957cf95 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -208,7 +208,7 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b
 	}
 	/* Too bad - revert to non-atomic kmap */
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	from = kaddr + offset;
 	left = copyout(buf, from, copy);
 	copy -= left;
@@ -225,7 +225,7 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b
 		from += copy;
 		bytes -= copy;
 	}
-	kunmap(page);
+	kunmap_thread(page);
 done:
 	if (skip == iov->iov_len) {
@@ -292,7 +292,7 @@ static size_t copy_page_from_iter_iovec(struct page *page, size_t offset, size_t
 	}
 	/* Too bad - revert to non-atomic kmap */
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	to = kaddr + offset;
 	left = copyin(to, buf, copy);
 	copy -= left;
@@ -309,7 +309,7 @@ static size_t copy_page_from_iter_iovec(struct page *page, size_t offset, size_t
 		to += copy;
 		bytes -= copy;
 	}
-	kunmap(page);
+	kunmap_thread(page);
 done:
 	if (skip == iov->iov_len) {
@@ -1742,10 +1742,10 @@ int iov_iter_for_each_range(struct iov_iter *i, size_t bytes,
 		return 0;
 	iterate_all_kinds(i, bytes, v, -EINVAL, ({
-		w.iov_base = kmap(v.bv_page) + v.bv_offset;
+		w.iov_base = kmap_thread(v.bv_page) + v.bv_offset;
 		w.iov_len = v.bv_len;
 		err = f(&w, context);
-		kunmap(v.bv_page);
+		kunmap_thread(v.bv_page);
 		err;}), ({
 		w = v; err = f(&w, context);})

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ca7d635bccd9..441f822f56ba 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -6506,11 +6506,11 @@ static void *generate_test_data(struct bpf_test *test, int sub)
 		if (!page)
 			goto err_kfree_skb;
-		ptr = kmap(page);
+		ptr = kmap_thread(page);
 		if (!ptr)
 			goto err_free_page;
 		memcpy(ptr, test->frag_data, MAX_DATA);
-		kunmap(page);
+		kunmap_thread(page);
 		skb_add_rx_frag(skb, 0, page, 0, MAX_DATA, MAX_DATA);
 	}

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e7dc3de355b7..e40d26f97f45 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -329,9 +329,9 @@ static int dmirror_do_read(struct dmirror *dmirror, unsigned long start,
 		if (!page)
 			return -ENOENT;
-		tmp = kmap(page);
+		tmp = kmap_thread(page);
 		memcpy(ptr, tmp, PAGE_SIZE);
-		kunmap(page);
+		kunmap_thread(page);
 		ptr += PAGE_SIZE;
 		bounce->cpages++;
@@ -398,9 +398,9 @@ static int dmirror_do_write(struct dmirror *dmirror, unsigned long start,
 		if (!page || xa_pointer_tag(entry) != DPT_XA_TAG_WRITE)
 			return -ENOENT;
-		tmp = kmap(page);
+		tmp = kmap_thread(page);
 		memcpy(tmp, ptr, PAGE_SIZE);
-		kunmap(page);
+		kunmap_thread(page);
 		ptr += PAGE_SIZE;
 		bounce->cpages++;
From patchwork Fri Oct 9 19:50:30 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 284802
Subject: [PATCH RFC PKS/PMEM 55/58] samples: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:30 -0700
Message-Id: <20201009195033.3208459-56-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

These kmap() calls are localized to a single thread. To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.
Cc: Kirti Wankhede
Signed-off-by: Ira Weiny
---
 samples/vfio-mdev/mbochs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c
index 3cc5e5921682..6d95422c0b46 100644
--- a/samples/vfio-mdev/mbochs.c
+++ b/samples/vfio-mdev/mbochs.c
@@ -479,12 +479,12 @@ static ssize_t mdev_access(struct mdev_device *mdev, char *buf, size_t count,
 		pos -= MBOCHS_MMIO_BAR_OFFSET;
 		poff = pos & ~PAGE_MASK;
 		pg = __mbochs_get_page(mdev_state, pos >> PAGE_SHIFT);
-		map = kmap(pg);
+		map = kmap_thread(pg);
 		if (is_write)
 			memcpy(map + poff, buf, count);
 		else
 			memcpy(buf, map + poff, count);
-		kunmap(pg);
+		kunmap_thread(pg);
 		put_page(pg);
 	} else {
From patchwork Fri Oct 9 19:50:33 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 284803
Subject: [PATCH RFC PKS/PMEM 58/58] [dax|pmem]: Enable stray access protection
Date: Fri, 9 Oct 2020 12:50:33 -0700
Message-Id: <20201009195033.3208459-59-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

Protecting against stray writes is particularly important for PMEM
because, unlike writes to anonymous memory, writes to PMEM persist
across a reboot. Thus data corruption could result in permanent loss of
data.

While stray writes are more serious than stray reads, protection is
enabled for reads as well. This helps to detect bugs in code which
would incorrectly access device memory, and prevents the more serious
machine checks that could result should such buggy reads hit a poisoned
page.

Enable stray access protection by setting the pgmap flag which requests
it. There is no option presented to the user. If Zone Device Access
Protection is not supported, this flag has no effect.

Signed-off-by: Ira Weiny
---
 drivers/dax/device.c  | 2 ++
 drivers/nvdimm/pmem.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 1e89513f3c59..e6fb35b4f0fb 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -430,6 +430,8 @@ int dev_dax_probe(struct device *dev)
 	}
 	dev_dax->pgmap.type = MEMORY_DEVICE_GENERIC;
+	dev_dax->pgmap.flags |= PGMAP_PROT_ENABLED;
+
 	addr = devm_memremap_pages(dev, &dev_dax->pgmap);
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index e4dc1ae990fc..9fcd8338e23f 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -426,6 +426,8 @@ static int pmem_attach_disk(struct device *dev,
 		return -EBUSY;
 	}
+	pmem->pgmap.flags |= PGMAP_PROT_ENABLED;
+
 	q = blk_alloc_queue(dev_to_node(dev));
 	if (!q)
 		return -ENOMEM;
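For other ZONE_DEVICE drivers that might opt in to the same protection, a
minimal sketch of the pattern used above, assuming the PGMAP_PROT_ENABLED
flag and the zone-device access protection support added earlier in this
series; the probe function and its name below are hypothetical:

#include <linux/memremap.h>
#include <linux/device.h>
#include <linux/err.h>

/*
 * Hypothetical probe fragment: request stray access protection for the
 * device's pages before they are mapped.  If Zone Device Access
 * Protection is not supported, the flag is simply ignored.
 */
static int example_devmem_probe(struct device *dev, struct dev_pagemap *pgmap)
{
	void *addr;

	pgmap->type = MEMORY_DEVICE_GENERIC;
	pgmap->flags |= PGMAP_PROT_ENABLED;	/* opt in to stray access protection */

	addr = devm_memremap_pages(dev, pgmap);
	if (IS_ERR(addr))
		return PTR_ERR(addr);

	/* ... continue device setup using addr ... */
	return 0;
}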