Message ID: 20240729091532.855688-1-max.kellermann@ionos.com
State: New
Series: fs/netfs/fscache_io: remove the obsolete "using_pgpriv2" flag
On Tue, Jul 30, 2024 at 6:01 PM David Howells <dhowells@redhat.com> wrote:
> Can you try this patch instead of either of yours?

I booted it on one of the servers, and no problem so far. All tests complete successfully, even the one with copy_file_range that crashed with my patch. I'll let you know when problems occur later, but until then, I agree with merging your revert instead of my patches.

If I understand this correctly, my other problem (the folio_attach_private conflict between netfs and ceph) I posted in https://lore.kernel.org/ceph-devel/CAKPOu+8q_1rCnQndOj3KAitNY2scPQFuSS-AxeGru02nP9ZO0w@mail.gmail.com/ was caused by my (bad) patch after all, wasn't it?

> For the moment, ceph has to continue using PG_private_2. It doesn't use
> netfs_writepages(). I have mostly complete patches to fix that, but they got
> popped onto the back burner for a bit.

When you're done with those patches, Cc me on those if you want me to help test them.

Max
On Tue, Jul 30, 2024 at 6:28 PM Max Kellermann <max.kellermann@ionos.com> wrote:
> I'll let you know when problems occur later, but until
> then, I agree with merging your revert instead of my patches.

Not sure if that's the same bug/cause (looks different), but 6.10.2 with your patch is still unstable:

 rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 9-.... 15-.... } 521399 jiffies s: 2085 root: 0x1/.
 rcu: blocking rcu_node structures (internal RCU debug): l=1:0-15:0x8200/.
 Sending NMI from CPU 3 to CPUs 9:
 NMI backtrace for cpu 9
 CPU: 9 PID: 2756 Comm: kworker/9:2 Tainted: G      D            6.10.2-cm4all2-vm+ #171
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
 Workqueue: ceph-msgr ceph_con_workfn
 RIP: 0010:native_queued_spin_lock_slowpath+0x80/0x260
 Code: 57 85 c0 74 10 0f b6 03 84 c0 74 09 f3 90 0f b6 03 84 c0 75 f7 b8 01 00 00 00 66 89 03 5b 5d 41 5c 41 5d c3 cc cc cc cc f3 90 <eb> 93 8b 37 b8 00 02 00 00 81 fe 00 01 00 00 74 07 eb a1 83 e8 01
 RSP: 0018:ffffaf5880c03bb8 EFLAGS: 00000202
 RAX: 0000000000000001 RBX: ffffa02bc37c9e98 RCX: ffffaf5880c03c90
 RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffffa02bc37c9e98
 RBP: ffffa02bc2f94000 R08: ffffaf5880c03c90 R09: 0000000000000010
 R10: 0000000000000514 R11: 0000000000000000 R12: ffffaf5880c03c90
 R13: ffffffffb4bcb2f0 R14: ffffa036c9e7e8e8 R15: ffffa02bc37c9e98
 FS:  0000000000000000(0000) GS:ffffa036cf040000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 000055fecac48568 CR3: 000000030d82c002 CR4: 00000000001706b0
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Call Trace:
  <NMI>
  ? nmi_cpu_backtrace+0x83/0xf0
  ? nmi_cpu_backtrace_handler+0xd/0x20
  ? nmi_handle+0x56/0x120
  ? default_do_nmi+0x40/0x100
  ? exc_nmi+0xdc/0x100
  ? end_repeat_nmi+0xf/0x53
  ? __pfx_ceph_ino_compare+0x10/0x10
  ? native_queued_spin_lock_slowpath+0x80/0x260
  ? native_queued_spin_lock_slowpath+0x80/0x260
  ? native_queued_spin_lock_slowpath+0x80/0x260
  </NMI>
  <TASK>
  ? __pfx_ceph_ino_compare+0x10/0x10
  _raw_spin_lock+0x1e/0x30
  find_inode+0x6e/0xc0
  ? __pfx_ceph_ino_compare+0x10/0x10
  ? __pfx_ceph_set_ino_cb+0x10/0x10
  ilookup5_nowait+0x6d/0xa0
  ? __pfx_ceph_ino_compare+0x10/0x10
  iget5_locked+0x33/0xe0
  ceph_get_inode+0xb8/0xf0
  mds_dispatch+0xfe8/0x1ff0
  ? inet_recvmsg+0x4d/0xf0
  ceph_con_process_message+0x66/0x80
  ceph_con_v1_try_read+0xcfc/0x17c0
  ? __switch_to_asm+0x39/0x70
  ? finish_task_switch.isra.0+0x78/0x240
  ? __schedule+0x32a/0x1440
  ceph_con_workfn+0x339/0x4f0
  process_one_work+0x138/0x2e0
  worker_thread+0x2b9/0x3d0
  ? __pfx_worker_thread+0x10/0x10
  kthread+0xba/0xe0
  ? __pfx_kthread+0x10/0x10
  ret_from_fork+0x30/0x50
  ? __pfx_kthread+0x10/0x10
  ret_from_fork_asm+0x1a/0x30
  </TASK>
On Tue, Jul 30, 2024 at 6:28 PM Max Kellermann <max.kellermann@ionos.com> wrote:
> If I understand this correctly, my other problem (the
> folio_attach_private conflict between netfs and ceph) I posted in
> https://lore.kernel.org/ceph-devel/CAKPOu+8q_1rCnQndOj3KAitNY2scPQFuSS-AxeGru02nP9ZO0w@mail.gmail.com/
> was caused by my (bad) patch after all, wasn't it?

It was not caused by my bad patch. Without my patch, but with your revert instead, I just got a crash (this time, I enabled lots of debugging options in the kernel, including KASAN) - it's the same crash as in the post I linked in my previous email:

 ------------[ cut here ]------------
 WARNING: CPU: 13 PID: 3621 at fs/ceph/caps.c:3386 ceph_put_wrbuffer_cap_refs+0x416/0x500
 Modules linked in:
 CPU: 13 PID: 3621 Comm: rsync Not tainted 6.10.2-cm4all2-vm+ #176
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
 RIP: 0010:ceph_put_wrbuffer_cap_refs+0x416/0x500
 Code: e8 af 7f 50 01 45 84 ed 75 27 45 8d 74 24 ff e9 cf fd ff ff e8 ab ea 64 ff e9 4c fc ff ff 31 f6 48 89 df e8 3c 86 ff ff eb b5 <0f> 0b e9 7a ff ff ff 31 f6 48 89 df e8 29 86 ff ff eb cd 0f 0b 48
 RSP: 0018:ffff88813c57f868 EFLAGS: 00010286
 RAX: dffffc0000000000 RBX: ffff88823dc66588 RCX: 0000000000000000
 RDX: 1ffff11047b8cda7 RSI: ffff88823dc66df0 RDI: ffff88823dc66d38
 RBP: 0000000000000001 R08: 0000000000000000 R09: fffffbfff5f9a8cd
 R10: ffffffffafcd466f R11: 0000000000000001 R12: 0000000000000000
 R13: ffffea000947af00 R14: 00000000ffffffff R15: 0000000000000356
 FS:  00007f1e82957b80(0000) GS:ffff888a73400000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 0000559037dacea8 CR3: 000000013f1b2002 CR4: 00000000001706b0
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Call Trace:
  <TASK>
  ? __warn+0xc8/0x2c0
  ? ceph_put_wrbuffer_cap_refs+0x416/0x500
  ? report_bug+0x257/0x2b0
  ? handle_bug+0x3c/0x70
  ? exc_invalid_op+0x13/0x40
  ? asm_exc_invalid_op+0x16/0x20
  ? ceph_put_wrbuffer_cap_refs+0x416/0x500
  ? ceph_put_wrbuffer_cap_refs+0x2e/0x500
  ceph_invalidate_folio+0x241/0x310
  truncate_cleanup_folio+0x277/0x330
  truncate_inode_pages_range+0x1b4/0x940
  ? __pfx_truncate_inode_pages_range+0x10/0x10
  ? __lock_acquire+0x19f3/0x5c10
  ? __lock_acquire+0x19f3/0x5c10
  ? __pfx___lock_acquire+0x10/0x10
  ? __pfx___lock_acquire+0x10/0x10
  ? srso_alias_untrain_ret+0x1/0x10
  ? lock_acquire+0x186/0x490
  ? find_held_lock+0x2d/0x110
  ? kvm_sched_clock_read+0xd/0x20
  ? local_clock_noinstr+0x9/0xb0
  ? __pfx_lock_release+0x10/0x10
  ? lockdep_hardirqs_on_prepare+0x275/0x3e0
  ceph_evict_inode+0xd5/0x530
  evict+0x251/0x560
  __dentry_kill+0x17b/0x500
  dput+0x393/0x690
  __fput+0x40e/0xa60
  __x64_sys_close+0x78/0xd0
  do_syscall_64+0x82/0x130
  ? lockdep_hardirqs_on_prepare+0x275/0x3e0
  ? syscall_exit_to_user_mode+0x9f/0x190
  ? do_syscall_64+0x8e/0x130
  ? lockdep_hardirqs_on_prepare+0x275/0x3e0
  ? lockdep_hardirqs_on_prepare+0x275/0x3e0
  ? syscall_exit_to_user_mode+0x9f/0x190
  ? do_syscall_64+0x8e/0x130
  ? do_syscall_64+0x8e/0x130
  ? lockdep_hardirqs_on_prepare+0x275/0x3e0
  entry_SYSCALL_64_after_hwframe+0x76/0x7e
 RIP: 0033:0x7f1e823178e0
 Code: 0d 00 00 00 eb b2 e8 ff f7 01 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 80 3d 01 1d 0e 00 00 74 17 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 48 c3 0f 1f 80 00 00 00 00 48 83 ec 18 89 7c
 RSP: 002b:00007ffe16c2e108 EFLAGS: 00000202 ORIG_RAX: 0000000000000003
 RAX: ffffffffffffffda RBX: 000000000000001e RCX: 00007f1e823178e0
 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000001
 RBP: 00007f1e8219bc08 R08: 0000000000000000 R09: 0000559037df64b0
 R10: fe04b91e88691591 R11: 0000000000000202 R12: 0000000000000001
 R13: 0000000000000000 R14: 00007ffe16c2e220 R15: 0000000000000001
  </TASK>
 irq event stamp: 26945
 hardirqs last  enabled at (26951): [<ffffffffaaac5a99>] console_unlock+0x189/0x1b0
 hardirqs last disabled at (26956): [<ffffffffaaac5a7e>] console_unlock+0x16e/0x1b0
 softirqs last  enabled at (26518): [<ffffffffaa962375>] irq_exit_rcu+0x95/0xc0
 softirqs last disabled at (26513): [<ffffffffaa962375>] irq_exit_rcu+0x95/0xc0
 ---[ end trace 0000000000000000 ]---
 ==================================================================
 BUG: KASAN: null-ptr-deref in ceph_put_snap_context+0x18/0x50
 Write of size 4 at addr 0000000000000356 by task rsync/3621
 CPU: 13 PID: 3621 Comm: rsync Tainted: G        W          6.10.2-cm4all2-vm+ #176
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
 Call Trace:
  <TASK>
  dump_stack_lvl+0x74/0xd0
  kasan_report+0xb9/0xf0
  ? ceph_put_snap_context+0x18/0x50
  kasan_check_range+0xeb/0x1a0
  ceph_put_snap_context+0x18/0x50
  ceph_invalidate_folio+0x249/0x310
  truncate_cleanup_folio+0x277/0x330
  truncate_inode_pages_range+0x1b4/0x940
  ? __pfx_truncate_inode_pages_range+0x10/0x10
  ? __lock_acquire+0x19f3/0x5c10
  ? __lock_acquire+0x19f3/0x5c10
  ? __pfx___lock_acquire+0x10/0x10
  ? __pfx___lock_acquire+0x10/0x10
  ? srso_alias_untrain_ret+0x1/0x10
  ? lock_acquire+0x186/0x490
  ? find_held_lock+0x2d/0x110
  ? kvm_sched_clock_read+0xd/0x20
  ? local_clock_noinstr+0x9/0xb0
  ? __pfx_lock_release+0x10/0x10
  ? lockdep_hardirqs_on_prepare+0x275/0x3e0
  ceph_evict_inode+0xd5/0x530
  evict+0x251/0x560
  __dentry_kill+0x17b/0x500
  dput+0x393/0x690
  __fput+0x40e/0xa60
  __x64_sys_close+0x78/0xd0
  do_syscall_64+0x82/0x130
  ? lockdep_hardirqs_on_prepare+0x275/0x3e0
  ? syscall_exit_to_user_mode+0x9f/0x190
  ? do_syscall_64+0x8e/0x130
  ? lockdep_hardirqs_on_prepare+0x275/0x3e0
  ? lockdep_hardirqs_on_prepare+0x275/0x3e0
  ? syscall_exit_to_user_mode+0x9f/0x190
  ? do_syscall_64+0x8e/0x130
  ? do_syscall_64+0x8e/0x130
  ? lockdep_hardirqs_on_prepare+0x275/0x3e0
  entry_SYSCALL_64_after_hwframe+0x76/0x7e
 RIP: 0033:0x7f1e823178e0
 Code: 0d 00 00 00 eb b2 e8 ff f7 01 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 80 3d 01 1d 0e 00 00 74 17 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 48 c3 0f 1f 80 00 00 00 00 48 83 ec 18 89 7c
 RSP: 002b:00007ffe16c2e108 EFLAGS: 00000202 ORIG_RAX: 0000000000000003
 RAX: ffffffffffffffda RBX: 000000000000001e RCX: 00007f1e823178e0
 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000001
 RBP: 00007f1e8219bc08 R08: 0000000000000000 R09: 0000559037df64b0
 R10: fe04b91e88691591 R11: 0000000000000202 R12: 0000000000000001
 R13: 0000000000000000 R14: 00007ffe16c2e220 R15: 0000000000000001
  </TASK>
Max Kellermann <max.kellermann@ionos.com> wrote:
> It was not caused by my bad patch. Without my patch, but with your
> revert instead I just got a crash (this time, I enabled lots of
> debugging options in the kernel, including KASAN) - it's the same
> crash as in the post I linked in my previous email:
>
> ------------[ cut here ]------------
> WARNING: CPU: 13 PID: 3621 at fs/ceph/caps.c:3386
> ceph_put_wrbuffer_cap_refs+0x416/0x500

Is that "WARN_ON_ONCE(ci->i_auth_cap);" for you?

David
On Wed, Jul 31, 2024 at 12:41 PM David Howells <dhowells@redhat.com> wrote:
> > ------------[ cut here ]------------
> > WARNING: CPU: 13 PID: 3621 at fs/ceph/caps.c:3386
> > ceph_put_wrbuffer_cap_refs+0x416/0x500
>
> Is that "WARN_ON_ONCE(ci->i_auth_cap);" for you?

Yes, and that happens because no "capsnap" was found, because the "snapc" parameter is 0x356 (NETFS_FOLIO_COPY_TO_CACHE); no snap_context with address 0x356 could be found, of course.

Max
On Sun, 2024-08-04 at 16:57 +0300, Hristo Venev wrote:
> In addition to Ceph, in NFS there are also some crashes related to the
> use of 0x356 as a pointer.
>
> `netfs_is_cache_enabled()` only returns true when the fscache cookie is
> fully initialized. This may happen after the request has been created,
> so check for the cookie's existence instead.
>
> Link: https://lore.kernel.org/linux-nfs/b78c88db-8b3a-4008-94cb-82ae08f0e37b@free.fr/T/
> Fixes: 2ff1e97587f4 ("netfs: Replace PG_fscache by setting folio->private and marking dirty")
> Cc: linux-nfs@vger.kernel.org
> Cc: blokos <blokos@free.fr>
> Cc: Trond Myklebust <trondmy@hammerspace.com>
> Cc: dan.aloni@vastdata.com
> Signed-off-by: Hristo Venev <hristo@venev.name>
> ---
>  fs/netfs/objects.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
> index f4a6427274792..a74ca90c86c9b 100644
> --- a/fs/netfs/objects.c
> +++ b/fs/netfs/objects.c
> @@ -27,7 +27,6 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
>  	bool is_unbuffered = (origin == NETFS_UNBUFFERED_WRITE ||
>  			      origin == NETFS_DIO_READ ||
>  			      origin == NETFS_DIO_WRITE);
> -	bool cached = !is_unbuffered && netfs_is_cache_enabled(ctx);
>  	int ret;
>
>  	for (;;) {
> @@ -56,8 +55,9 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
>  	refcount_set(&rreq->ref, 1);
>
>  	__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
> -	if (cached) {
> -		__set_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags);
> +	if (!is_unbuffered && fscache_cookie_valid(netfs_i_cookie(ctx))) {
> +		if(netfs_is_cache_enabled(ctx))
> +			__set_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags);
>  		if (test_bit(NETFS_ICTX_USE_PGPRIV2, &ctx->flags))
>  			/* Filesystem uses deprecated PG_private_2 marking. */
>  			__set_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags);

Does this mean that netfs could still end up setting a value for folio->private in NFS given some other set of circumstances?
On Sun, 2024-08-04 at 23:22 +0000, Trond Myklebust wrote:
> On Sun, 2024-08-04 at 16:57 +0300, Hristo Venev wrote:
> > In addition to Ceph, in NFS there are also some crashes related to the
> > use of 0x356 as a pointer.
> >
> > `netfs_is_cache_enabled()` only returns true when the fscache cookie is
> > fully initialized. This may happen after the request has been created,
> > so check for the cookie's existence instead.
> > [...]
>
> Does this mean that netfs could still end up setting a value for
> folio->private in NFS given some other set of circumstances?

Hopefully not? For NFS the cookie should be allocated in `nfs_fscache_init_inode`, and for Ceph I think `ceph_fill_inode` (which calls `ceph_fscache_register_inode_cookie`) should also be called early enough.
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 8c16bc5250ef..485cbd1730d1 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -512,7 +512,7 @@ static void ceph_fscache_write_to_cache(struct inode *inode, u64 off, u64 len, b
 	struct fscache_cookie *cookie = ceph_fscache_cookie(ci);

 	fscache_write_to_cache(cookie, inode->i_mapping, off, len, i_size_read(inode),
-			       ceph_fscache_write_terminated, inode, true, caching);
+			       ceph_fscache_write_terminated, inode, caching);
 }
 #else
 static inline void ceph_fscache_write_to_cache(struct inode *inode, u64 off, u64 len, bool caching)
diff --git a/fs/netfs/fscache_io.c b/fs/netfs/fscache_io.c
index 38637e5c9b57..0d8f3f646598 100644
--- a/fs/netfs/fscache_io.c
+++ b/fs/netfs/fscache_io.c
@@ -166,30 +166,10 @@ struct fscache_write_request {
 	loff_t start;
 	size_t len;
 	bool set_bits;
-	bool using_pgpriv2;
 	netfs_io_terminated_t term_func;
 	void *term_func_priv;
 };

-void __fscache_clear_page_bits(struct address_space *mapping,
-			       loff_t start, size_t len)
-{
-	pgoff_t first = start / PAGE_SIZE;
-	pgoff_t last = (start + len - 1) / PAGE_SIZE;
-	struct page *page;
-
-	if (len) {
-		XA_STATE(xas, &mapping->i_pages, first);
-
-		rcu_read_lock();
-		xas_for_each(&xas, page, last) {
-			folio_end_private_2(page_folio(page));
-		}
-		rcu_read_unlock();
-	}
-}
-EXPORT_SYMBOL(__fscache_clear_page_bits);
-
 /*
  * Deal with the completion of writing the data to the cache.
  */
@@ -198,10 +178,6 @@ static void fscache_wreq_done(void *priv, ssize_t transferred_or_error,
 {
 	struct fscache_write_request *wreq = priv;

-	if (wreq->using_pgpriv2)
-		fscache_clear_page_bits(wreq->mapping, wreq->start, wreq->len,
-					wreq->set_bits);
-
 	if (wreq->term_func)
 		wreq->term_func(wreq->term_func_priv, transferred_or_error,
 				was_async);
@@ -214,7 +190,7 @@ void __fscache_write_to_cache(struct fscache_cookie *cookie,
 			      loff_t start, size_t len, loff_t i_size,
 			      netfs_io_terminated_t term_func,
 			      void *term_func_priv,
-			      bool using_pgpriv2, bool cond)
+			      bool cond)
 {
 	struct fscache_write_request *wreq;
 	struct netfs_cache_resources *cres;
@@ -232,7 +208,6 @@ void __fscache_write_to_cache(struct fscache_cookie *cookie,
 	wreq->mapping = mapping;
 	wreq->start = start;
 	wreq->len = len;
-	wreq->using_pgpriv2 = using_pgpriv2;
 	wreq->set_bits = cond;
 	wreq->term_func = term_func;
 	wreq->term_func_priv = term_func_priv;
@@ -260,8 +235,6 @@ void __fscache_write_to_cache(struct fscache_cookie *cookie,
 abandon_free:
 	kfree(wreq);
 abandon:
-	if (using_pgpriv2)
-		fscache_clear_page_bits(mapping, start, len, cond);
 	if (term_func)
 		term_func(term_func_priv, ret, false);
 }
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index 9de27643607f..f8c52bddaa15 100644
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -177,8 +177,7 @@ void __fscache_write_to_cache(struct fscache_cookie *cookie,
 			      loff_t start, size_t len, loff_t i_size,
 			      netfs_io_terminated_t term_func,
 			      void *term_func_priv,
-			      bool using_pgpriv2, bool cond);
-extern void __fscache_clear_page_bits(struct address_space *, loff_t, size_t);
+			      bool cond);

 /**
  * fscache_acquire_volume - Register a volume as desiring caching services
@@ -573,24 +572,6 @@ int fscache_write(struct netfs_cache_resources *cres,
 	return ops->write(cres, start_pos, iter, term_func, term_func_priv);
 }

-/**
- * fscache_clear_page_bits - Clear the PG_fscache bits from a set of pages
- * @mapping: The netfs inode to use as the source
- * @start: The start position in @mapping
- * @len: The amount of data to unlock
- * @caching: If PG_fscache has been set
- *
- * Clear the PG_fscache flag from a sequence of pages and wake up anyone who's
- * waiting.
- */
-static inline void fscache_clear_page_bits(struct address_space *mapping,
-					   loff_t start, size_t len,
-					   bool caching)
-{
-	if (caching)
-		__fscache_clear_page_bits(mapping, start, len);
-}
-
 /**
  * fscache_write_to_cache - Save a write to the cache and clear PG_fscache
  * @cookie: The cookie representing the cache object
@@ -600,7 +581,6 @@ static inline void fscache_clear_page_bits(struct address_space *mapping,
  * @i_size: The new size of the inode
  * @term_func: The function to call upon completion
  * @term_func_priv: The private data for @term_func
- * @using_pgpriv2: If we're using PG_private_2 to mark in-progress write
  * @caching: If we actually want to do the caching
  *
  * Helper function for a netfs to write dirty data from an inode into the cache
@@ -612,21 +592,19 @@ static inline void fscache_clear_page_bits(struct address_space *mapping,
  * marked with PG_fscache.
  *
  * If given, @term_func will be called upon completion and supplied with
- * @term_func_priv.  Note that if @using_pgpriv2 is set, the PG_private_2 flags
- * will have been cleared by this point, so the netfs must retain its own pin
- * on the mapping.
+ * @term_func_priv.
  */
 static inline void fscache_write_to_cache(struct fscache_cookie *cookie,
 					  struct address_space *mapping,
 					  loff_t start, size_t len, loff_t i_size,
 					  netfs_io_terminated_t term_func,
 					  void *term_func_priv,
-					  bool using_pgpriv2, bool caching)
+					  bool caching)
 {
 	if (caching)
 		__fscache_write_to_cache(cookie, mapping, start, len, i_size,
 					 term_func, term_func_priv,
-					 using_pgpriv2, caching);
+					 caching);
 	else if (term_func)
 		term_func(term_func_priv, -ENOBUFS, false);