From patchwork Fri Oct 9 19:49:47 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287189
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Andy Lutomirski, Peter Zijlstra
Cc: Ira Weiny, David Howells, x86@kernel.org, Dave Hansen, Dan Williams,
    Fenghua Yu, linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org
Subject: [PATCH RFC PKS/PMEM 12/58] fs/afs: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:47 -0700
Message-Id: <20201009195033.3208459-13-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread. To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: David Howells
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/afs/dir.c      | 16 ++++++++--------
 fs/afs/dir_edit.c | 16 ++++++++--------
 fs/afs/mntpt.c    |  4 ++--
 fs/afs/write.c    |  4 ++--
 4 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 1d2e61e0ab04..5d01cdb590de 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -127,14 +127,14 @@ static bool afs_dir_check_page(struct afs_vnode *dvnode, struct page *page,
 	qty /= sizeof(union afs_xdr_dir_block);
 
 	/* check them */
-	dbuf = kmap(page);
+	dbuf = kmap_thread(page);
 	for (tmp = 0; tmp < qty; tmp++) {
 		if (dbuf->blocks[tmp].hdr.magic != AFS_DIR_MAGIC) {
 			printk("kAFS: %s(%lx): bad magic %d/%d is %04hx\n",
 			       __func__, dvnode->vfs_inode.i_ino, tmp, qty,
 			       ntohs(dbuf->blocks[tmp].hdr.magic));
 			trace_afs_dir_check_failed(dvnode, off, i_size);
-			kunmap(page);
+			kunmap_thread(page);
 			trace_afs_file_error(dvnode, -EIO, afs_file_error_dir_bad_magic);
 			goto error;
 		}
@@ -146,7 +146,7 @@ static bool afs_dir_check_page(struct afs_vnode *dvnode, struct page *page,
 		((u8 *)&dbuf->blocks[tmp])[AFS_DIR_BLOCK_SIZE - 1] = 0;
 	}
 
-	kunmap(page);
+	kunmap_thread(page);
 
 checked:
 	afs_stat_v(dvnode, n_read_dir);
@@ -177,13 +177,13 @@ static bool afs_dir_check_pages(struct afs_vnode *dvnode, struct afs_read *req)
 		req->pos, req->index, req->nr_pages, req->offset);
 
 	for (i = 0; i < req->nr_pages; i++) {
-		dbuf = kmap(req->pages[i]);
+		dbuf = kmap_thread(req->pages[i]);
 		for (j = 0; j < qty; j++) {
 			union afs_xdr_dir_block *block = &dbuf->blocks[j];
 
 			pr_warn("[%02x] %32phN\n", i * qty + j, block);
 		}
-		kunmap(req->pages[i]);
+		kunmap_thread(req->pages[i]);
 	}
 	return false;
 }
@@ -481,7 +481,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx,
 
 		limit = blkoff & ~(PAGE_SIZE - 1);
 
-		dbuf = kmap(page);
+		dbuf = kmap_thread(page);
 
 		/* deal with the individual blocks stashed on this page */
 		do {
@@ -489,7 +489,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx,
 					       sizeof(union afs_xdr_dir_block)];
 			ret = afs_dir_iterate_block(dvnode, ctx, dblock, blkoff);
 			if (ret != 1) {
-				kunmap(page);
+				kunmap_thread(page);
 				goto out;
 			}
 
@@ -497,7 +497,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx,
 			blkoff += sizeof(union afs_xdr_dir_block);
 
 		} while (ctx->pos < dir->i_size && blkoff < limit);
 
-		kunmap(page);
+		kunmap_thread(page);
 		ret = 0;
 	}
 
diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index b108528bf010..35ed6828e205 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -218,7 +218,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 	need_slots = round_up(12 + name->len + 1 + 4, AFS_DIR_DIRENT_SIZE);
 	need_slots /= AFS_DIR_DIRENT_SIZE;
 
-	meta_page = kmap(page0);
+	meta_page = kmap_thread(page0);
 	meta = &meta_page->blocks[0];
 	if (i_size == 0)
 		goto new_directory;
@@ -247,7 +247,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 				set_page_private(page, 1);
 				SetPagePrivate(page);
 			}
-			dir_page = kmap(page);
+			dir_page = kmap_thread(page);
 		}
 
 		/* Abandon the edit if we got a callback break. */
@@ -284,7 +284,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 
 		if (page != page0) {
 			unlock_page(page);
-			kunmap(page);
+			kunmap_thread(page);
 			put_page(page);
 		}
 	}
@@ -323,7 +323,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 	afs_set_contig_bits(block, slot, need_slots);
 	if (page != page0) {
 		unlock_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 	}
 
@@ -337,7 +337,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 
 out_unmap:
 	unlock_page(page0);
-	kunmap(page0);
+	kunmap_thread(page0);
 	put_page(page0);
 	_leave("");
 	return;
@@ -346,7 +346,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 	trace_afs_edit_dir(vnode, why, afs_edit_dir_create_inval, 0, 0, 0, 0, name->name);
 	clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
 	if (page != page0) {
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 	}
 	goto out_unmap;
@@ -398,7 +398,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,
 	need_slots = round_up(12 + name->len + 1 + 4, AFS_DIR_DIRENT_SIZE);
 	need_slots /= AFS_DIR_DIRENT_SIZE;
 
-	meta_page = kmap(page0);
+	meta_page = kmap_thread(page0);
 	meta = &meta_page->blocks[0];
 
 	/* Find a page that has sufficient slots available.  Each VM page
@@ -410,7 +410,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,
 		page = find_lock_page(vnode->vfs_inode.i_mapping, index);
 		if (!page)
 			goto error;
-		dir_page = kmap(page);
+		dir_page = kmap_thread(page);
 	} else {
 		page = page0;
 		dir_page = meta_page;
diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c
index 79bc5f1338ed..562454e2fd5c 100644
--- a/fs/afs/mntpt.c
+++ b/fs/afs/mntpt.c
@@ -139,11 +139,11 @@ static int afs_mntpt_set_params(struct fs_context *fc, struct dentry *mntpt)
 			return ret;
 		}
 
-		buf = kmap(page);
+		buf = kmap_thread(page);
 		ret = -EINVAL;
 		if (buf[size - 1] == '.')
 			ret = vfs_parse_fs_string(fc, "source", buf, size - 1);
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 		if (ret < 0)
 			return ret;
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 4b2265cb1891..c56e5b4db4ae 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -38,9 +38,9 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key,
 	if (pos >= vnode->vfs_inode.i_size) {
 		p = pos & ~PAGE_MASK;
 		ASSERTCMP(p + len, <=, PAGE_SIZE);
-		data = kmap(page);
+		data = kmap_thread(page);
 		memset(data + p, 0, len);
-		kunmap(page);
+		kunmap_thread(page);
 		return 0;
 	}
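For reviewers who do not have the rest of the series to hand, the new
calls are used exactly like kmap()/kunmap(), with the added promise that
the mapping is only touched by the calling thread. The sketch below shows
the rough shape of the API this patch relies on; it is illustrative only:
the dev_page_enable_access()/dev_page_disable_access() hooks and their
bool global parameter are assumptions standing in for the series'
per-page PKS helpers, not code quoted from this patch.

	/* Sketch only: not the series' actual implementation. */
	#include <linux/highmem.h>
	#include <linux/mm_types.h>

	/*
	 * Hypothetical hooks that open/close the protection key covering
	 * a PKS-protected (e.g. pmem) page.  'global' selects between
	 * updating every thread's PKRS and only the current thread's.
	 */
	void dev_page_enable_access(struct page *page, bool global);
	void dev_page_disable_access(struct page *page, bool global);

	/*
	 * Like kmap(), but the mapping is private to the calling thread,
	 * so only this thread's PKRS needs to be relaxed.  Skipping the
	 * global update is the overhead saving the commit message refers
	 * to.
	 */
	static inline void *kmap_thread(struct page *page)
	{
		dev_page_enable_access(page, false);	/* this thread only */
		return kmap(page);
	}

	static inline void kunmap_thread(struct page *page)
	{
		dev_page_disable_access(page, false);
		kunmap(page);
	}

Every conversion in the hunks above follows the same pattern: the page is
mapped, read or written, and unmapped without the pointer escaping to
another thread, which is what makes the thread-local variant safe here.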